WorldWideScience

Sample records for mixed logit model

  1. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

    Full Text Available A closed-form mixed Logit approach is proposed to model the stochastic route choice behaviours. It combines both the advantages of Probit and Logit to provide a flexible form in alternatives correlation and a tractable form in expression; besides, the heterogeneity in alternative variance can also be addressed. Paths are compared by pairs where the superiority of the binary Probit can be fully used. The Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution to incorporate the normal distribution in route choice with an analytical expression.
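
    As background for readers of these records (generic textbook notation, not the paper's own closed form), the mixed Logit route choice probability that such approaches approximate is a Logit kernel integrated over a mixing distribution of taste coefficients:

        P_{ni} = \int \frac{\exp\big(V_{ni}(\beta)\big)}{\sum_{j \in C_n} \exp\big(V_{nj}(\beta)\big)}\, f(\beta \mid \theta)\, d\beta

    where V_{ni} is the systematic utility of route i for traveller n, C_n is the route set, and f(\beta \mid \theta) is the density of the random coefficients; evaluating the integral normally requires simulation, which is what a closed-form formulation of the kind proposed here avoids.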

  2. Mixed logit model of intended residential mobility in renovated historical blocks in China

    NARCIS (Netherlands)

    Jiang, W.; Timmermans, H.J.P.; Li, H.; Feng, T.

    2016-01-01

Using data from 8 historical blocks in China, the influence of socio-demographic characteristics and residential satisfaction on intended residential mobility is analysed. The results of a mixed logit model indicate that higher residential satisfaction will lead to a lower intention to move house...

  3. Associating Crash Avoidance Maneuvers with Driver Attributes and Accident Characteristics: A Mixed Logit Model Approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

...as well as from the key role of the ability of drivers to perform effective corrective maneuvers for the success of automated in-vehicle warning and driver assistance systems. The analysis is conducted by means of a mixed logit model that accommodates correlations across alternatives and heteroscedasticity. Data...

  4. Valuing Non-market Benefits of Rehabilitation of Hydrologic Cycle Improvements in the Anyangcheon Watershed: Using Mixed Logit Models

    Science.gov (United States)

    Yoo, J.; Kong, K.

    2010-12-01

This research presents the findings from a discrete-choice experiment designed to estimate the economic benefits associated with improvements to the Anyangcheon watershed in the Republic of Korea. The Anyangcheon watershed has suffered from streamflow depletion and poor stream quality, which often negatively affect instream and near-stream ecological integrity as well as water supply. Such distortions in the hydrologic cycle mainly result from a rapid increase in impermeable area due to urbanization, decreases in baseflow runoff due to groundwater pumping, and reduced precipitation inputs driven by climate forcing. In addition, combined sewer overflows and an increase in non-point source pollution from urban areas degrade water quality. The appeal of choice experiments (CE) in economic analysis is that they are based on random utility theory (McFadden, 1974; Ben-Akiva and Lerman, 1985). In contrast to the contingent valuation method (CVM), which asks people to choose between a base case and a specific alternative, CE asks people to choose between cases that are described by attributes. The attributes of this study were selected from hydrologic vulnerability components representing flood damage possibility, instream flow depletion, water quality deterioration, form of the watershed, and tax. Each attribute had three levels, including the status quo; the other two levels represented the ideal conditions. The scenarios were constructed from a 3^5 orthogonal main-effects design, which resulted in twenty-seven choice sets; nine different choice scenarios were presented to each respondent. The most popular choice model in use is the conditional logit (CNL), which provides a closed-form choice probability calculation. The shortcoming of the CNL stems from the independence of irrelevant alternatives (IIA) property. In this paper, the mixed logit (ML) is applied to allow the coefficients to vary, capturing random taste heterogeneity in the population. The mixed logit model (with normal distributions for the attributes) fit the...

  5. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    Science.gov (United States)

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models developed at large temporal scales struggle to capture the time-varying characteristics of these factors, which may cause a substantial loss of critical driving environmental information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze the hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature and a chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, a weekend indicator, a November indicator, a low speed limit and a long remaining service life of rutting indicator are found to increase crash likelihood, while a 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  6. Time-varying mixed logit model for vehicle merging behavior in work zone merging areas.

    Science.gov (United States)

    Weng, Jinxian; Du, Gang; Li, Dan; Yu, Yao

    2018-08-01

This study aims to develop a time-varying mixed logit model for vehicle merging behavior in work zone merging areas during the merging implementation period, from the time a merging maneuver is started to the time it is completed. From the safety perspective, the crash probability and severity between the merging vehicle and its surrounding vehicles are regarded as major factors influencing vehicle merging decisions. Model results show that the model using vehicle crash risk probability and severity provides higher prediction accuracy than previous models using vehicle speeds and gap sizes. It is found that lead vehicle type, through lead vehicle type, through lag vehicle type, the crash probability of the merging vehicle with respect to the through lag vehicle, and the crash severities of the merging vehicle with respect to the through lead and lag vehicles could exhibit time-varying effects on the merging behavior. One important finding is that the merging vehicle could become increasingly aggressive over the elapsed time in order to complete the merging maneuver as quickly as possible, even if it has a high crash risk with respect to the through lead and lag vehicles. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Efficiency Loss of Mixed Equilibrium Associated with Altruistic Users and Logit-based Stochastic Users in Transportation Network

    Directory of Open Access Journals (Sweden)

    Xiao-Jun Yu

    2014-02-01

Full Text Available The efficiency loss of mixed equilibrium associated with two categories of users is investigated in this paper. The first category of users are altruistic users (AU) who have the same altruism coefficient and try to minimize their own perceived cost, which is assumed to be a linear combination of a selfish component and an altruistic component. The second category of users are Logit-based stochastic users (LSU) who choose the route according to the Logit-based stochastic user equilibrium (SUE) principle. A variational inequality (VI) model is used to formulate the mixed route choice behaviours associated with AU and LSU. The efficiency loss caused by the two categories of users is analytically derived and its relations to some network parameters are discussed. Numerical tests validate our analytical results. Our result includes the results in the existing literature as special cases.
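
    For readers unfamiliar with the terminology, the Logit-based SUE principle mentioned here assigns route choice probabilities of the form (generic notation, not the paper's own derivation):

        P_k = \frac{\exp(-\theta\, c_k)}{\sum_{l \in K} \exp(-\theta\, c_l)}

    where c_k is the generalized cost of route k, K is the route set, and \theta captures the magnitude of perception errors; the altruistic users instead minimize a perceived cost combining a selfish component (their own travel cost) and an altruistic component (the cost imposed on others), weighted by the altruism coefficient, which is the linear combination the abstract refers to.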

  8. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  9. Investigating the Differences of Single-Vehicle and Multivehicle Accident Probability Using Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Bowen Dong

    2018-01-01

Full Text Available Road traffic accidents are believed to be associated not only with road geometric features and traffic characteristics, but also with weather conditions. To address these safety issues, it is of paramount importance to understand how these factors affect the occurrence of crashes. Existing studies have suggested that the mechanisms of single-vehicle (SV) accidents and multivehicle (MV) accidents can be very different. Few studies have examined the difference between SV and MV accident probability while addressing unobserved heterogeneity at the same time. To investigate the different contributing factors for SV and MV accidents, a mixed logit model is employed using disaggregated data with the response variable categorized as no accidents, SV accidents, and MV accidents. The results indicate that, in addition to speed gap, length of segment, and wet road surfaces, which are significant for both SV and MV accidents, most of the other variables are significant only for MV accidents. Traffic, road, and surface characteristics are the main factors influencing SV and MV accident probability. Hourly traffic volume, inside shoulder width, and wet road surface are found to produce statistically significant random parameters. Their effects on the probability of SV and MV accidents vary across different road segments.

  10. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
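
    A minimal sketch of the kind of post-estimation work this article recommends, predicted probabilities and average marginal effects from a multinomial logit; the synthetic data, the scikit-learn estimator and the finite-difference step are illustrative assumptions, not the article's own procedure.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2))                      # two hypothetical covariates
        scores = np.c_[X @ [1.0, -0.5], X @ [-0.8, 0.7], np.zeros(500)]
        y = np.array([rng.choice(3, p=np.exp(s) / np.exp(s).sum()) for s in scores])

        model = LogisticRegression().fit(X, y)             # multinomial logit fit

        probs = model.predict_proba(X)                     # predicted outcome probabilities
        print("average predicted probabilities:", probs.mean(axis=0))

        # average marginal effect of covariate 0 on each outcome, by finite differences
        eps = 1e-4
        X_hi, X_lo = X.copy(), X.copy()
        X_hi[:, 0] += eps
        X_lo[:, 0] -= eps
        ame = (model.predict_proba(X_hi) - model.predict_proba(X_lo)) / (2 * eps)
        print("average marginal effects of x0:", ame.mean(axis=0))

    Plotting probs against a covariate of interest gives the graphical representation of predicted probabilities that the article argues is easier to interpret and communicate than raw coefficients.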

  11. Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Karlson, Kristian Bernt; Holm, Anders

    2018-01-01

Methods textbooks in sociology and other social sciences routinely recommend the use of the logit or probit model when an outcome variable is binary, an ordered logit or ordered probit when it is ordinal, and a multinomial logit when it has more than two categories. But these methodological guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show...

  12. Unobserved Heterogeneity in the Binary Logit Model with Cross-Sectional Data and Short Panels

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier; Pedersen, Morten

This paper proposes a new approach to dealing with unobserved heterogeneity in applied research using the binary logit model with cross-sectional data and short panels. Unobserved heterogeneity is particularly important in non-linear regression models such as the binary logit model because, unlike in linear regression models, estimates of the effects of observed independent variables are biased even when omitted independent variables are uncorrelated with the observed independent variables. We propose an extension of the binary logit model based on a finite mixture approach in which we conceptualize...

  13. A nested recursive logit model for route choice analysis

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2015-01-01

We propose a route choice model that relaxes the independence from irrelevant alternatives property of the logit model by allowing scale parameters to be link specific. Similar to the recursive logit (RL) model proposed by Fosgerau et al. (2013), the choice of path is modeled as a sequence of link choices and the model does not require any sampling of choice sets. Furthermore, the model can be consistently estimated and efficiently used for prediction. A key challenge lies in the computation of the value functions, i.e. the expected maximum utility from any position in the network to a destination. The value functions are the solution to a system of non-linear equations. We propose an iterative method with dynamic accuracy that allows these systems to be solved efficiently. We report estimation results and a cross-validation study for a real network. The results show that the NRL model yields sensible...

  14. EARLY DETECTION OF INDONESIAN BANKING CRISES: IDENTIFYING MACROECONOMIC VARIABLES WITH A LOGIT MODEL

    Directory of Open Access Journals (Sweden)

    Shanty Oktavilia

    2012-01-01

Full Text Available Indonesia has suffered from banking crises several times, most severely as an effect of the crisis that occurred in 1997. The Thai baht, which plunged 27.8% in the third quarter of 1997, was the initial problem that triggered the Asian currency crisis. This study analyzes the influence of macroeconomic indicators as an early warning system, using a logit econometric model to predict the probability of a banking crisis occurring in Indonesia. Keywords: banking crisis, macroeconomic indicators, EWS-logit model

  15. A mixed logit analysis of two-vehicle crash severities involving a motorcycle.

    Science.gov (United States)

    Shaheed, Mohammad Saad B; Gkritza, Konstantina; Zhang, Wei; Hans, Zachary

    2013-12-01

    Using motorcycle crash data for Iowa from 2001 to 2008, this paper estimates a mixed logit model to investigate the factors that affect crash severity outcomes in a collision between a motorcycle and another vehicle. These include crash-specific factors (such as manner of collision, motorcycle rider and non-motorcycle driver and vehicle actions), roadway and environmental conditions, location and time, motorcycle rider and non-motorcycle driver and vehicle attributes. The methodological approach allows the parameters to vary across observations as opposed to a single parameter representing all observations. Our results showed non-uniform effects of rear-end collisions on minor injury crashes, as well as of the roadway speed limit greater or equal to 55mph, the type of area (urban), the riding season (summer) and motorcyclist's gender on low severity crashes. We also found significant effects of the roadway surface condition, clear vision (not obscured by moving vehicles, trees, buildings, or other), light conditions, speed limit, and helmet use on severe injury outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Spatial age-length key modelling using continuation ratio logits

    DEFF Research Database (Denmark)

    Berg, Casper W.; Kristensen, Kasper

    2012-01-01

...a so-called age-length key (ALK) is then used to obtain the age distribution. Regional differences in ALKs are not uncommon, but stratification is often problematic due to a small number of samples. Here, we combine generalized additive modelling with continuation ratio logits to model the probability of age...
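
    A sketch of the model class named in this record, in generic notation (the exact smooth terms are an assumption on my part, not taken from the abstract): continuation ratio logits model the conditional probability of age a given that the age is at least a, for a fish of length l at position (lon, lat),

        \operatorname{logit} P(A = a \mid A \ge a) = f_a(l) + s_a(\mathrm{lon}, \mathrm{lat})

    with f_a and s_a smooth functions estimated within a generalized additive model; the unconditional age probabilities are then recovered as P(A = a) = P(A = a \mid A \ge a) \prod_{k < a} \big(1 - P(A = k \mid A \ge k)\big), which yields a spatially varying ALK.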

  17. Analyzing Korean consumers’ latent preferences for electricity generation sources with a hierarchical Bayesian logit model in a discrete choice experiment

    International Nuclear Information System (INIS)

    Byun, Hyunsuk; Lee, Chul-Yong

    2017-01-01

Generally, consumers use electricity without considering the source from which it was generated. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preferences for electricity generation sources. The present study estimates Korean consumers’ marginal utility, and an appropriate generation mix is derived, using a hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity as the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.

  18. Total, Direct, and Indirect Effects in Logit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt; Holm, Anders; Breen, Richard

It has long been believed that the decomposition of the total effect of one variable on another into direct and indirect effects, while feasible in linear models, is not possible in non-linear probability models such as the logit and probit. In this paper we present a new and simple method based on average partial effects, as defined by Wooldridge (2002). We present the method graphically and illustrate it using the National Educational Longitudinal Study of 1988...

  19. STAS and Logit Modeling of Advertising and Promotion Effects

    DEFF Research Database (Denmark)

    Hansen, Flemming; Yssing Hansen, Lotte; Grønholdt, Lars

    2002-01-01

This paper describes the preliminary studies of the effect of advertising and promotion on purchases using the British single-source database Adlab. STAS and logit modeling are the two measures studied. Results from the two measures have been compared to determine the extent to which they give...

  20. Associating crash avoidance maneuvers with driver attributes and accident characteristics: a mixed logit model approach.

    Science.gov (United States)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

The current study focuses on the propensity of drivers to engage in crash avoidance maneuvers in relation to driver attributes, critical events, crash characteristics, vehicles involved, road characteristics, and environmental conditions. The importance of avoidance maneuvers derives from the key role of proactive and state-aware road users within the concept of sustainable safety systems, as well as from the key role of effective corrective maneuvers in the success of automated in-vehicle warning and driver assistance systems. The analysis is conducted by means of a mixed logit model that represents the selection among 5 emergency lateral and speed control maneuvers (i.e., "no avoidance maneuvers," "braking," "steering," "braking and steering," and "other maneuvers") while accommodating correlations across maneuvers and heteroscedasticity. Data for the analysis were retrieved from the General Estimates System (GES) crash database for the year 2009 by considering drivers for whom crash avoidance maneuvers are known. The results show that (1) the nature of the critical event that made the crash imminent greatly influences the choice of crash avoidance maneuvers, (2) women and the elderly have a relatively lower propensity to conduct crash avoidance maneuvers, (3) drowsiness and fatigue have a greater negative marginal effect on the tendency to engage in crash avoidance maneuvers than alcohol and drug consumption, (4) difficult road conditions increase the propensity to perform crash avoidance maneuvers, and (5) visual obstruction and artificial illumination decrease the probability of carrying out crash avoidance maneuvers. The results emphasize the need for public awareness campaigns to promote a safe driving style for senior drivers and to warn that the risks of driving under fatigue and distraction are comparable to the risks of driving under the influence of alcohol and drugs. Moreover, the results suggest the need to educate drivers about hazard perception, designing...

  1. Analysis of RIA standard curve by log-logistic and cubic log-logit models

    International Nuclear Information System (INIS)

    Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo

    1981-01-01

In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit fits were written in BASIC using a personal computer P-6060 (Olivetti). The iterative least squares method based on a Taylor series expansion was applied for non-linear estimation of the logistic and log-logistic models. Here "log-logistic" represents Y = (a - d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y) or 1/σ² were used in the logistic or log-logistic fits, and either Y²(1 - Y)², Y²(1 - Y)²/var(Y), or Y²(1 - Y)²/σ² were used in the quadratic or cubic log-logit fits. The term var(Y) represents the pure-error variance and σ² represents the estimated variance calculated from the equation log(σ² + 1) = log(A) + J log(Y). As indicators of goodness-of-fit, MSL/S_e², CMD% and WRV (see text) were used. Better regression was obtained for alpha-fetoprotein with the log-logistic than with the logistic model. The cortisol standard curve was much better fitted with the cubic log-logit than with the quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with the log-logistic instead of 8% with the logistic analysis. The predicted precision obtained using the cubic log-logit was about five times lower than that with the quadratic log-logit. The importance of selecting good models in RIA data processing was stressed in conjunction with the intrinsic precision of the radioimmunoassay system indicated by the predicted precision. (author)
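
    A minimal sketch of fitting the four-parameter log-logistic standard curve quoted above, Y = (a - d)/(1 + (log(X)/c)^b) + d; the standard points are hypothetical and scipy stands in for the authors' BASIC program.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_logistic(x, a, b, c, d):
            # Y = (a - d) / (1 + (log(X)/c)**b) + d, as given in the abstract
            return (a - d) / (1.0 + (np.log(x) / c) ** b) + d

        # hypothetical RIA standard points: concentration X and bound counts Y
        x = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0, 250.0])
        y = np.array([9200.0, 8300.0, 6900.0, 4800.0, 3300.0, 2100.0, 1200.0])

        p0 = [y.max(), 2.0, float(np.log(x).mean()), y.min()]       # rough starting values
        bounds = ([0.0, 0.1, 0.1, 0.0], [20000.0, 10.0, 10.0, 20000.0])
        params, _ = curve_fit(log_logistic, x, y, p0=p0, bounds=bounds)
        print("fitted (a, b, c, d):", params)
        print("fitted curve at the standards:", log_logistic(x, *params))

    The weighting schemes compared in the abstract could be supplied through curve_fit's sigma argument, which turns the fit into the weighted least squares problem the authors describe.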

  2. Assessment of Poisson, logit, and linear models for genetic analysis of clinical mastitis in Norwegian Red cows.

    Science.gov (United States)

    Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B

    2009-02-01

Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ...). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of the models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in the Poisson and 0.007 in the linear model. For CM the between-sire variance was 0.093 in the logit and 0.003 in the linear model. The ratio between herd and sire variances was 4.6 and 3.5 for the Poisson and linear models with the NCM response, respectively, and 3.7 for both the logit and linear models for CM. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM), 1.333 (logit), and 1.319 (linear for CM). The models for count variables performed better when predicting diseased animals and performed similarly to each other. Logit and linear models for CM had better predictive ability for healthy...

  3. Logit Estimation of a Gravity Model of the College Enrollment Decision.

    Science.gov (United States)

    Leppel, Karen

    1993-01-01

    A study investigated the factors influencing students' decisions about attending a college to which they had been admitted. Logit analysis confirmed gravity model predictions that geographic distance and student ability would most influence the enrollment decision and found other variables, although affecting earlier stages of decision making, did…

  4. Interpreting Marginal Effects in the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2014-01-01

This paper presents the challenges when researchers interpret results about relationships between variables from discrete choice models with multiple outcomes. The recommended approach is demonstrated by testing predictions from transaction cost theory on a sample of 246 Scandinavian firms that have entered foreign markets. Through the application of a multinomial logit model, careful analysis of the marginal effects is performed through graphical representations, marginal effects at the mean, average marginal effects and elasticities. I show that increasing cultural distance is associated with a substantial increase in the probability of entering a foreign market using a joint venture, while increases in the unpredictability in the host country environment are associated with a lower probability of wholly owned subsidiaries and a higher probability of exporting entries.

  5. Another Look at the Method of Y-Standardization in Logit and Probit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    2015-01-01

    This paper takes another look at the derivation of the method of Y-standardization used in sociological analysis involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than...

  6. An integrated Markov decision process and nested logit consumer response model of air ticket pricing

    NARCIS (Netherlands)

    Lu, J.; Feng, T.; Timmermans, H.P.J.; Yang, Z.

    2017-01-01

The paper proposes an optimal air ticket pricing model over the booking horizon, taking into account passengers' purchasing behavior for air tickets. A Markov decision process incorporating a nested logit consumer response model is established to model the dynamic pricing process.

  7. Analysis of Internet Usage Intensity in Iraq: An Ordered Logit Model

    OpenAIRE

    Almas Heshmati; Firas H. Al-Hammadany; Ashraf Bany-Mohammed

    2013-01-01

Intensity of Internet use is significantly influenced by government policies, people’s levels of income, education, employment and general development and economic conditions. Iraq has very low Internet usage levels compared to the region and the world. This study uses an ordered logit model to analyse the intensity of Internet use in Iraq. The results showed that economic factors (Internet cost and income level) were a key cause of the low usage intensity. About 68% of the population ...

  8. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

This research attempts to evaluate emission measurements of on-road vehicles. In this regard, the research develops a failure probability model for vehicle emission tests of passenger cars utilizing a binomial logit model. The model takes failure of the CO and HC emission tests for the gasoline car category and of the opacity emission test for the diesel-fuel car category as dependent variables, with vehicle age, engine size, brand and type of the cars as independent variables. In order to imp...

  9. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

Nowadays, as a result of global competition, numerous companies face financial distress. Predicting these problems and taking proactive approaches to them is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios relating to 156 industrial firms quoted on the Istanbul Stock Exchange are used and probabilities of financial distress are predicted by means of an ordered logit regression model. By means of Altman's Z Score, the dependent variable is composed by scaling the level of risk. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
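
    A minimal sketch of an ordered logit of the kind described, using statsmodels' OrderedModel on synthetic data; the variable names, thresholds and data are illustrative assumptions, not the study's own ratios or risk scaling.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(1)
        n = 300
        ratios = pd.DataFrame({
            "liquidity": rng.normal(1.5, 0.5, n),    # hypothetical financial ratios
            "leverage": rng.normal(0.6, 0.2, n),
        })
        # latent distress score cut into three ordered risk classes (stand-in for a Z-score based scaling)
        latent = -1.2 * ratios["liquidity"] + 2.0 * ratios["leverage"] + rng.logistic(size=n)
        risk = pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
                      labels=["low", "medium", "high"], ordered=True)

        model = OrderedModel(risk, ratios, distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(result.summary())
        probs = result.predict(ratios)               # predicted probability of each risk class
        print(np.asarray(probs)[:5])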

  10. Essays on pricing dynamics, price dispersion, and nested logit modelling

    Science.gov (United States)

    Verlinda, Jeremy Alan

    The body of this dissertation comprises three standalone essays, presented in three respective chapters. Chapter One explores the possibility that local market power contributes to the asymmetric relationship observed between wholesale costs and retail prices in gasoline markets. I exploit an original data set of weekly gas station prices in Southern California from September 2002 to May 2003, and take advantage of highly detailed station and local market-level characteristics to determine the extent to which spatial differentiation influences price-response asymmetry. I find that brand identity, proximity to rival stations, bundling and advertising, operation type, and local market features and demographics each influence a station's predicted asymmetric relationship between prices and wholesale costs. Chapter Two extends the existing literature on the effect of market structure on price dispersion in airline fares by modeling the effect at the disaggregate ticket level. Whereas past studies rely on aggregate measures of price dispersion such as the Gini coefficient or the standard deviation of fares, this paper estimates the entire empirical distribution of airline fares and documents how the shape of the distribution is determined by market structure. Specifically, I find that monopoly markets favor a wider distribution of fares with more mass in the tails while duopoly and competitive markets exhibit a tighter fare distribution. These findings indicate that the dispersion of airline fares may result from the efforts of airlines to practice second-degree price discrimination. Chapter Three adopts a Bayesian approach to the problem of tree structure specification in nested logit modelling, which requires a heavy computational burden in calculating marginal likelihoods. I compare two different techniques for estimating marginal likelihoods: (1) the Laplace approximation, and (2) reversible jump MCMC. I apply the techniques to both a simulated and a travel mode

  11. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    Science.gov (United States)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

The Solo-Ngawi toll road project is part of the mega project of the Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), as the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with the business plan. The determination of an appropriate toll rate will affect regional economic sustainability and reduce traffic congestion. Such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to find out the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment, and the data were then analysed with logit and probit models. Based on the analysis, it is found that the effect of fare changes on WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions. The range of tariff changes against WTP values in the binomial logit model is 20% greater than the range of values in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
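
    For context (generic binary-choice notation; the coefficients below are placeholders, not the study's estimates), the two specifications being compared model the probability that a user is willing to pay a proposed fare x as

        P_{\text{logit}}(\text{WTP} \mid x) = \frac{1}{1 + \exp\{-(\beta_0 + \beta_1 x)\}}, \qquad P_{\text{probit}}(\text{WTP} \mid x) = \Phi(\beta_0 + \beta_1 x)

    where \Phi is the standard normal CDF; both are estimated from the stated preference responses, and the comparison reported in this record concerns how steeply each fitted curve reacts to fare changes.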

  12. The importance of examining movements within the US health care system: sequential logit modeling

    Directory of Open Access Journals (Sweden)

    Lee Chioun

    2010-09-01

Full Text Available Abstract Background Utilization of specialty care may not be a discrete, isolated behavior but rather a behavior of sequential movements within the health care system. Although patients may often visit their primary care physician and receive a referral before utilizing specialty care, prior studies have underestimated the importance of accounting for these sequential movements. Methods The sample included 6,772 adults aged 18 years and older who participated in the 2001 Survey on Disparities in Quality of Care, sponsored by the Commonwealth Fund. A sequential logit model was used to account for movement in all stages of utilization: use of any health services (i.e., first stage), having a perceived need for specialty care (i.e., second stage), and utilization of specialty care (i.e., third stage). In the sequential logit model, all stages are nested within the previous stage. Results Gender, race/ethnicity, education and poor health had significant explanatory effects with regard to use of any health services and having a perceived need for specialty care; however, racial/ethnic, gender, and educational disparities were not present in utilization of specialty care. After controlling for use of any health services and having a perceived need for specialty care, inability to pay for specialty care via income (AOR = 1.334, CI = 1.10 to 1.62) or health insurance (unstable insurance: AOR = 0.26, CI = 0.14 to 0.48; no insurance: AOR = 0.12, CI = 0.07 to 0.20) were significant barriers to utilization of specialty care. Conclusions Use of a sequential logit model to examine utilization of specialty care resulted in a detailed representation of utilization behaviors and patient characteristics that impact these behaviors at all stages within the health care system. After controlling for sequential movements within the health care system, the biggest barrier to utilizing specialty care is the inability to pay, while racial, gender, and educational disparities...

  13. An Empirical Analysis of Television Commercial Ratings in Alternative Competitive Environments Using Multinomial Logit Model

    Directory of Open Access Journals (Sweden)

    Dilek ALTAŞ

    2013-05-01

Full Text Available Watching commercials depends on the choice of the viewer. Most television viewing takes place during “Prime-Time”; unfortunately, many viewers opt to zap to other channels when commercials start. Television viewers’ demographic characteristics may indicate the likely zapping frequency. Analysis using a Multinomial Logit Model indicates how effective the demographic variables are in explaining the watching rate of the first minute of television commercials.

  14. A procedure for the selection of Mixed Logit models

    OpenAIRE

    Ruíz Gallegos, José de Jesús

    2004-01-01

This paper reviews two models that have been widely applied to discrete choice problems: the Logit model and the Mixed Logit model. In addition, the use of the Cox statistic for model selection is proposed for the Mixed Logit model.

  15. Airport Choice in Sao Paulo Metropolitan Area: An Application of the Conditional Logit Model

    Science.gov (United States)

    Moreno, Marcelo Baena; Muller, Carlos

    2003-01-01

Using the conditional LOGIT model, this paper addresses airport choice in the Sao Paulo Metropolitan Area. In this region, Guarulhos International Airport (GRU) and Congonhas Airport (CGH) compete for passengers flying to several domestic destinations. The airport choice is believed to be the result of a tradeoff passengers make among airport access characteristics, airline level-of-service characteristics and passenger experience with the analyzed airports. It was found that access time to the airports explains the airport choice better than access distance, whereas direct flight frequencies explain the airport choice better than the indirect (connections and stops) and total (direct plus indirect) flight frequencies. Out of 15 tested variables, passenger experience with the analyzed airports was the variable that best explained the airport choice in the region. Model specifications considering 1, 2 or 3 variables were tested. The model specification best adjusted to the observed data considered access time, direct flight frequencies in the travel period (morning or afternoon peak) and passenger experience with the analyzed airports. The influence of these variables was then analyzed across market segments defined by departure airport and flight duration criteria. The choice of GRU (located adjacent to Sao Paulo city) is not well explained by the rationality of access time economy and the supply of direct flight frequencies, while the choice of CGH (located inside Sao Paulo city) is. Access time was found to be more important to passengers flying shorter distances, while direct flight frequencies in the travel period were more significant to those flying longer distances. Keywords: Airport choice, Multiple airport region, Conditional LOGIT model, Access time, Flight frequencies, Passenger experience with the analyzed airports, Transportation planning

  16. How bicycle level of traffic stress correlate with reported cyclist accidents injury severities: A geospatial and mixed logit analysis.

    Science.gov (United States)

    Chen, Chen; Anderson, Jason C; Wang, Haizhong; Wang, Yinhai; Vogt, Rachel; Hernandez, Salvador

    2017-11-01

    Transportation agencies need efficient methods to determine how to reduce bicycle accidents while promoting cycling activities and prioritizing safety improvement investments. Many studies have used standalone methods, such as level of traffic stress (LTS) and bicycle level of service (BLOS), to better understand bicycle mode share and network connectivity for a region. However, in most cases, other studies rely on crash severity models to explain what variables contribute to the severity of bicycle related crashes. This research uniquely correlates bicycle LTS with reported bicycle crash locations for four cities in New Hampshire through geospatial mapping. LTS measurements and crash locations are compared visually using a GIS framework. Next, a bicycle injury severity model, that incorporates LTS measurements, is created through a mixed logit modeling framework. Results of the visual analysis show some geospatial correlation between higher LTS roads and "Injury" type bicycle crashes. It was determined, statistically, that LTS has an effect on the severity level of bicycle crashes and high LTS can have varying effects on severity outcome. However, it is recommended that further analyses be conducted to better understand the statistical significance and effect of LTS on injury severity. As such, this research will validate the use of LTS as a proxy for safety risk regardless of the recorded bicycle crash history. This research will help identify the clustering patterns of bicycle crashes on high-risk corridors and, therefore, assist with bicycle route planning and policy making. This paper also suggests low-cost countermeasures or treatments that can be implemented to address high-risk areas. Specifically, with the goal of providing safer routes for cyclists, such countermeasures or treatments have the potential to substantially reduce the number of fatalities and severe injuries. Published by Elsevier Ltd.

  17. Street Choice Logit Model for Visitors in Shopping Districts

    Science.gov (United States)

    Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya

    2014-01-01

    In this study, we propose two models for predicting people’s activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive). PMID:25379274

  18. Street Choice Logit Model for Visitors in Shopping Districts

    Directory of Open Access Journals (Sweden)

    Ko Kawada

    2014-07-01

Full Text Available In this study, we propose two models for predicting people’s activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive).

  19. Predicting longitudinal trajectories of health probabilities with random-effects multinomial logit regression.

    Science.gov (United States)

    Liu, Xian; Engel, Charles C

    2012-12-20

Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to correctly predict outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates’ regression coefficients on the multinomial logit function, which are not substantively meaningful, into conditional effects on the predicted probabilities. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old study. Our analysis compared three sets of predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Radiation effects on cancer mortality among A-bomb survivors, 1950-72. Comparison of some statistical models and analysis based on the additive logit model

    Energy Technology Data Exchange (ETDEWEB)

    Otake, M [Hiroshima Univ. (Japan). Faculty of Science

    1976-12-01

Various statistical models designed to determine the effects of radiation dose on mortality of atomic bomb survivors in Hiroshima and Nagasaki from specific cancers were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. Considering the applicability and fit of the different models, an analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model were demonstrated. Leukemia mortality showed a sharp rise with increasing dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. On the other hand, during the most recent period, from 1965 to 1972, a significant risk was also noted for stomach and breast cancers. Survivors who were 9 years old or younger at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints, i.e., the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high-dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.

  1. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  2. A Subpath-based Logit Model to Capture the Correlation of Routes

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2016-06-01

Full Text Available A subpath-based methodology is proposed to capture travellers’ route choice behaviours and their perceptual correlation of routes, because the original link-based style may not be suitable in application: (1) travellers do not process road network information and construct the chosen route in a link-by-link style; (2) observations from questionnaires and GPS data, however, are not always link-specific. Subpaths are defined as important portions of the route, such as major roads and landmarks. The cross-nested Logit (CNL) structure is used for its tractable closed form and its capability to explicitly capture the correlation of routes. Nests represent subpaths rather than links, so that the number of nests is significantly reduced. Moreover, the proposed method simplifies the original link-based CNL model and therefore alleviates the estimation and computation difficulties. The estimation and forecast validation with real data are presented, and the results suggest that the new method is practical.
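
    For reference, one common parameterization of the cross-nested Logit probability used here is (textbook form; the paper's subpath-specific notation may differ):

        P(i) = \sum_{m} \frac{\Big(\sum_{j \in N_m} (\alpha_{jm} e^{V_j})^{1/\mu_m}\Big)^{\mu_m}}{\sum_{m'} \Big(\sum_{j \in N_{m'}} (\alpha_{jm'} e^{V_j})^{1/\mu_{m'}}\Big)^{\mu_{m'}}} \cdot \frac{(\alpha_{im} e^{V_i})^{1/\mu_m}}{\sum_{j \in N_m} (\alpha_{jm} e^{V_j})^{1/\mu_m}}

    where each nest m corresponds to a subpath rather than a link, \alpha_{im} \ge 0 measures the extent to which route i belongs to subpath m, and \mu_m \in (0, 1] governs the correlation among routes sharing that subpath.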

  3. The Value of the Logit Model and the Signal Approach in Forecasting Currency Crises: The Turkish Experience

    OpenAIRE

    Kaya, Vedat; Yilmaz, Omer

    2007-01-01

The logit model and the signal approach are two analysis methods commonly used to forecast and explain currency crises. The logit model is successful in determining the explanatory variables of a crisis and in calculating the probability of a crisis, particularly during periods in which a crisis is experienced. On the other hand, the signal approach aims at determining any possible currency crisis in advance by following variables that show unusual changes over periods of economic fluctuation, and thus it pr...

  4. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)
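
    For context on the copula step (standard formulas, not taken from the paper), the Plackett copula used to couple the Weibull and Logit-logistic marginals F_D(d) and F_H(h) is

        C_\theta(u, v) = \frac{1 + (\theta - 1)(u + v) - \sqrt{\big[1 + (\theta - 1)(u + v)\big]^2 - 4\,\theta(\theta - 1)\,u v}}{2(\theta - 1)}, \quad \theta \ne 1,

    with C_1(u, v) = uv, so the fitted bivariate diameter-height distribution is F(d, h) = C_\theta\big(F_D(d), F_H(h)\big) and the single parameter \theta captures the dependence between tree diameters and heights.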

  5. Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory

    Science.gov (United States)

    Zhuang, Qian; Di, Zengru; Wu, Jinshan

    2014-01-01

Using the Logit quantal response form as the response function in each step, the original definition of static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibria. This extension resembles the extension from static Nash equilibria (NEs) to evolutionarily stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we determine that SQREs have better predictive power than QREs and NEs. PMID:25157502
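
    The Logit quantal response map iterated in this record has the standard form (generic notation; the convergence analysis is the paper's own contribution):

        \sigma_{ij} = \frac{\exp\big(\lambda\, \bar{u}_{ij}(\sigma_{-i})\big)}{\sum_{k} \exp\big(\lambda\, \bar{u}_{ik}(\sigma_{-i})\big)}

    where \bar{u}_{ij}(\sigma_{-i}) is player i's expected payoff from pure strategy j against the other players' mixed strategies and \lambda \ge 0 indexes rationality (\lambda \to \infty approaches best response, \lambda = 0 gives uniform play); a QRE is a fixed point of this map, and the SQRE/USQRE distinction asks whether iterating the map actually converges to that fixed point.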

  6. A Multicategory Brand Equity Model and Its Application at Allstate

    OpenAIRE

    Venkatesh Shankar; Pablo Azar; Matthew Fuller

    2008-01-01

    We develop a robust model for estimating, tracking, and managing brand equity for multicategory brands based on customer survey and financial measures. This model has two components: (1) offering value (computed from discounted cash flow analysis) and (2) relative brand importance (computed from brand choice models such as multinomial logit, heteroscedastic extreme value, and mixed logit). We apply this model to estimate the brand equity of Allstate—a leading insurance company—and its leading...

  7. Determination of the Factors Influencing Store Preference in Erzurum by a Multinomial Logit Model

    Directory of Open Access Journals (Sweden)

    Hüseyin ÖZER

    2008-12-01

    Full Text Available The main objective of this study is to determine the factors influencing store preference of store customers in Erzurum in terms of some characteristics of the store and its products, and customers' demographic characteristics (sex, age, marital status, level of education, and income level). In order to carry out this objective, a Pearson chi-square test is applied to determine whether there is a relationship between store preference and customer, store, and product characteristics, and a multinomial logit model is fitted by the stepwise regression method to cross-section data compiled from a questionnaire applied to 384 store customers in the center of Erzurum province. According to the model estimation and test results, the variables of marital status (married), education (primary), and cheapness (unimportant) for Migros; education (middle) for Özmar; and marital status (married) for the other stores are determined as statistically significant at the 5 percent level.
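
    A minimal multinomial logit sketch in the spirit of the analysis above, using statsmodels; the data frame and variable names (store, married, educ_primary, income) are hypothetical stand-ins, not the Erzurum questionnaire data.

```python
# Minimal multinomial logit fit with statsmodels on simulated survey-style data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 384
df = pd.DataFrame({
    "store": rng.integers(0, 3, n),          # 0 = Migros, 1 = Ozmar, 2 = other (labels invented)
    "married": rng.integers(0, 2, n),
    "educ_primary": rng.integers(0, 2, n),
    "income": rng.normal(0, 1, n),
})

X = sm.add_constant(df[["married", "educ_primary", "income"]])
res = sm.MNLogit(df["store"], X).fit(disp=False)
print(res.summary())        # one coefficient set per non-reference store
```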

  8. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    Science.gov (United States)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
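
    A toy illustration of the three heuristic rules named above, with explicit attribute thresholds; attribute values and cut-offs are invented for the example and threshold heterogeneity is omitted.

```python
# Toy sketch of the three classical non-compensatory heuristics applied to
# candidate stores described by two attributes (walking distance, assortment).
stores = {"A": {"distance": 50, "assortment": 7},
          "B": {"distance": 120, "assortment": 9},
          "C": {"distance": 300, "assortment": 5}}
thresholds = {"distance": 150, "assortment": 6}   # acceptability cut-offs

def conjunctive(opt):
    # acceptable only if ALL attributes pass their thresholds
    return opt["distance"] <= thresholds["distance"] and opt["assortment"] >= thresholds["assortment"]

def disjunctive(opt):
    # acceptable if AT LEAST ONE attribute passes its threshold
    return opt["distance"] <= thresholds["distance"] or opt["assortment"] >= thresholds["assortment"]

def lexicographic(options):
    # order on the most important attribute (distance); break ties on assortment
    return min(options, key=lambda k: (options[k]["distance"], -options[k]["assortment"]))

print("conjunctive :", {k: conjunctive(v) for k, v in stores.items()})
print("disjunctive :", {k: disjunctive(v) for k, v in stores.items()})
print("lexicographic choice:", lexicographic(stores))
```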

  9. Numerical processing of radioimmunoassay results using the logit-log transformation method

    International Nuclear Information System (INIS)

    Textoris, R.

    1983-01-01

    The mathematical model and algorithm are described for the numerical processing of radioimmunoassay results by the logit-log transformation method and by linear regression with weight factors. The limiting value of the curve for zero concentration is optimized with respect to the residual sum by an iterative method involving multiple repeats of the linear regression. Typical examples of the approximation of calibration curves are presented. The method proved suitable for all RIA sets used to date and is well suited for small computers with an internal memory of at least 8 Kbyte. (author)
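
    A small sketch of the logit-log calibration step: the bound fraction is logit-transformed and regressed on the logarithm of concentration with weight factors. The counts, non-specific binding and weighting scheme below are illustrative assumptions, and the iterative optimization of the zero-concentration limit described above is omitted.

```python
# Logit-log standard-curve sketch for an RIA: logit of the bound fraction
# regressed on log10(concentration) by weighted least squares.
import numpy as np

conc = np.array([0.5, 1, 2, 5, 10, 20, 50])                     # standard concentrations
counts = np.array([9000, 8300, 7200, 5400, 3900, 2600, 1500])   # bound counts (placeholders)
nsb, b0 = 400.0, 9500.0                                         # non-specific and zero-dose binding

y = (counts - nsb) / (b0 - nsb)          # bound fraction in (0, 1)
logit_y = np.log(y / (1.0 - y))
x = np.log10(conc)
w = (y * (1.0 - y)) ** 2                 # a common weight choice for the logit transform

# weighted least squares: solve (X' W X) beta = X' W logit_y
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ logit_y)
print("intercept, slope:", beta)
```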

  10. Modeling Left-Turn Driving Behavior at Signalized Intersections with Mixed Traffic Conditions

    Directory of Open Access Journals (Sweden)

    Hong Li

    2016-01-01

    Full Text Available In many developing countries, mixed traffic is the most common type of urban transportation, and it faces major traffic engineering problems such as conflicts, inefficiency, and safety issues. This paper focuses on the driving behavior of left-turning vehicles affected by different degrees of pedestrian violation. The traffic characteristics of left-turning vehicles and pedestrians in the affected region at a signalized intersection were analyzed, and a cellular-automata-based “following-conflict” driving behavior model comprising four basic behavior modes was proposed to study the conflict and behavior mechanisms of left-turning vehicles by mathematical methods. The four basic driving behavior modes were reproduced in computer simulations, and a logit model of behavior mode choice was also developed to analyze the relative share of each mode. Finally, the microscopic characteristics of driving behavior and the macroscopic parameters of traffic flow in the affected region were determined. These data are an important reference for the geometric and capacity design of signalized intersections. The simulation results show that the proposed models are valid and can represent the behavior of left-turning vehicles in conflicts with illegally crossing pedestrians. These results have potential applications for improving traffic safety and capacity at signalized intersections with mixed traffic conditions.

  11. Predicting Dropouts of University Freshmen: A Logit Regression Analysis.

    Science.gov (United States)

    Lam, Y. L. Jack

    1984-01-01

    Stepwise discriminant analysis coupled with logit regression analysis of freshmen data from Brandon University (Manitoba) indicated that six tested variables drawn from research on university dropouts were useful in predicting attrition: student status, residence, financial sources, distance from home town, goal fulfillment, and satisfaction with…

  12. Getting the right balance? A mixed logit analysis of the relationship between UK training doctors' characteristics and their specialties using the 2013 National Training Survey.

    Science.gov (United States)

    Rodriguez Santana, Idaira; Chalkley, Martin

    2017-08-11

    To analyse how training doctors' demographic and socioeconomic characteristics vary according to the specialty that they are training for. Descriptive statistics and mixed logistic regression analysis of cross-sectional survey data to quantify evidence of systematic relationships between doctors' characteristics and their specialty. Doctors in training in the United Kingdom in 2013. 27 530 doctors in training but not in their foundation year who responded to the National Training Survey 2013. Mixed logit regression estimates and the corresponding odds ratios (calculated separately for all doctors in training and a subsample comprising those educated in the UK), relating gender, age, ethnicity, place of studies, socioeconomic background and parental education to the probability of training for a particular specialty. Being female and being white British increase the chances of being in general practice with respect to any other specialty, while coming from a better-off socioeconomic background and having parents with tertiary education have the opposite effect. Mixed results are found for age and place of studies. For example, the difference between men and women is greatest for surgical specialties, for which a man is 12.121 times more likely to be training for a surgical specialty (relative to general practice) than a woman (p-value < 0.01). There are systematic and substantial differences between specialties in respect of training doctors' gender, ethnicity, age and socioeconomic background. The persistent underrepresentation in some specialties of women, minority ethnic groups and of those coming from disadvantaged backgrounds will impact on the representativeness of the profession into the future. Further research is needed to understand how the processes of selection and the self-selection of applicants into specialties give rise to these observed differences.

  13. Recreation Value of Water to Wetlands in the San Joaquin Valley: Linked Multinomial Logit and Count Data Trip Frequency Models

    Science.gov (United States)

    Creel, Michael; Loomis, John

    1992-10-01

    The recreational benefits from providing increased quantities of water to wildlife and fisheries habitats are estimated using linked multinomial logit site selection models and count data trip frequency models. The study encompasses waterfowl hunting, fishing and wildlife viewing at 14 recreational resources in the San Joaquin Valley, including the National Wildlife Refuges, the State Wildlife Management Areas, and six river destinations. The economic benefits of increasing water supplies to wildlife refuges were also examined by using the estimated models to predict changing patterns of site selection and overall participation due to increases in water allocations. Estimates of the dollar value per acre foot of water are calculated for increases in water to refuges. The resulting model is a flexible and useful tool for estimating the economic benefits of alternative water allocation policies for wildlife habitat and rivers.
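
    The generic structure of such a linked model can be written as follows (notation is illustrative, not necessarily the authors' exact specification): a multinomial logit for site choice, its logsum (inclusive value), and a count model for trip frequency driven in part by that logsum.

```latex
% Linked site-choice / trip-frequency structure (illustrative notation):
\begin{align}
  P_{nj} &= \frac{\exp(V_{nj})}{\sum_{k=1}^{J} \exp(V_{nk})}
  &&\text{(multinomial logit site selection)}\\
  I_n &= \ln \sum_{k=1}^{J} \exp(V_{nk})
  &&\text{(inclusive value of the site set)}\\
  E[T_n] &= \exp(\gamma' z_n + \theta I_n)
  &&\text{(count-data trip frequency)}
\end{align}
```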

  14. Risk factors associated with bus accident severity in the United States: A generalized ordered logit model

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    Introduction: Recent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011. Method: The current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005–2009. Results: Results show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because ...

  15. Environmental regulations and plant exit: A logit analysis based on establishment panel data

    Energy Technology Data Exchange (ETDEWEB)

    Bioern, E; Golombek, R; Raknerud, A

    1995-12-01

    This publication uses a model to study the relationship between environmental regulations and plant exit. It has the main characteristics of a multinomial qualitative response model of the logit type, but also has elements of a Markov chain model. The model uses Norwegian panel data for establishments in three manufacturing sectors with high shares of units which have been under strict environmental regulations. In two of the sectors, the exit probability of non-regulated establishments is about three times higher than for regulated ones. It is also found that the probability of changing regulation status from non-regulated to regulated depends significantly on economic factors. In particular, establishments with weak profitability are the most likely to become subject to environmental regulation. 12 refs., 2 figs., 6 tabs.

  16. Patient choice modelling: how do patients choose their hospitals?

    Science.gov (United States)

    Smith, Honora; Currie, Christine; Chaiwuttisak, Pornpimol; Kyprianou, Andreas

    2018-06-01

    As an aid to predicting future hospital admissions, we compare use of the Multinomial Logit and the Utility Maximising Nested Logit models to describe how patients choose their hospitals. The models are fitted to real data from Derbyshire, United Kingdom, which lists the postcodes of more than 200,000 admissions to six different local hospitals. Both elective and emergency admissions are analysed for this mixed urban/rural area. For characteristics that may affect a patient's choice of hospital, we consider the distance of the patient from the hospital, the number of beds at the hospital and the number of car parking spaces available at the hospital, as well as several statistics publicly available on National Health Service (NHS) websites: an average waiting time, the patient survey score for ward cleanliness, the patient safety score and the inpatient survey score for overall care. The Multinomial Logit model is successfully fitted to the data. Results obtained with the Utility Maximising Nested Logit model show that nesting according to city or town may be invalid for these data; in other words, the choice of hospital does not appear to be preceded by choice of city. In all of the analysis carried out, distance appears to be one of the main influences on a patient's choice of hospital rather than statistics available on the Internet.
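
    For reference, a standard form of the utility-maximising nested logit used in this kind of comparison, with hospitals grouped into nests (e.g., cities) and dissimilarity parameters $\lambda_m$; the multinomial logit is recovered when every $\lambda_m$ equals one.

```latex
% Nested logit choice probability (standard form, notation mine):
\begin{align}
  P(j) &= P(j \mid m)\, P(m), \\
  P(j \mid m) &= \frac{\exp(V_j/\lambda_m)}{\sum_{k \in m} \exp(V_k/\lambda_m)}, \qquad
  I_m = \ln \sum_{k \in m} \exp(V_k/\lambda_m), \\
  P(m) &= \frac{\exp(\lambda_m I_m)}{\sum_{n} \exp(\lambda_n I_n)}.
\end{align}
```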

  17. Multiple equilibria and limit cycles in evolutionary games with Logit Dynamics

    NARCIS (Netherlands)

    Hommes, C.H.; Ochea, M.I.

    2012-01-01

    This note shows, by means of two simple, three-strategy games, the existence of stable periodic orbits and of multiple, interior steady states in a smooth version of the Best-Response Dynamics, the Logit Dynamics. The main finding is that, unlike Replicator Dynamics, generic Hopf bifurcation and

  18. Application of LogitBoost Classifier for Traceability Using SNP Chip Data.

    Science.gov (United States)

    Kim, Kwondo; Seo, Minseok; Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability.
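
    LogitBoost itself is not shipped with scikit-learn; gradient boosting with the logistic loss is a close stand-in for illustrating this kind of SNP-based place-of-origin classification. The genotype matrix and farm labels below are simulated, not the actual SNP chip data.

```python
# Boosted classification of place of origin from SNP genotypes (simulated data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_animals, n_snps, n_farms = 400, 200, 5
X = rng.integers(0, 3, size=(n_animals, n_snps))    # SNP genotypes coded 0/1/2
y = rng.integers(0, n_farms, size=n_animals)        # place-of-origin labels

clf = GradientBoostingClassifier(n_estimators=100, max_depth=2)
scores = cross_val_score(clf, X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```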

  19. Associating crash avoidance maneuvers with driver attributes and accident characteristics: a mixed logit model approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    from the key role of proactive and state-aware road users within the concept of sustainable safety systems, as well as from the key role of effective corrective maneuvers in the success of automated in-vehicle warning and driver assistance systems. Methods: The analysis is conducted by means of a mixed...... about the risks of driving under fatigue and distraction being comparable to the risks of driving under the influence of alcohol and drugs. Moreover, the results suggest the need to educate drivers about hazard perception, designing a forgiving infrastructure within a sustainable safety systems......Objective: The current study focuses on the propensity of drivers to engage in crash avoidance maneuvers in relation to driver attributes, critical events, crash characteristics, vehicles involved, road characteristics, and environmental conditions. The importance of avoidance maneuvers derives...

  20. Assessing the value of museums with a combined discrete choice/ count data model

    NARCIS (Netherlands)

    Rouwendal, J.; Boter, J.

    2009-01-01

    This article assesses the value of Dutch museums using information about destination choice as well as about the number of trips undertaken by an actor. Destination choice is analysed by means of a mixed logit model, and a count data model is used to explain trip generation. We use a

  1. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    Science.gov (United States)

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

    Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine the spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socioeconomic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model proved effective for predicting transition probabilities from arable land to other land-use types, and thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25), linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
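
    For context, a fractional logit model is commonly estimated as a Binomial-family GLM with a logit link applied to the (0,1) response, with robust standard errors. A minimal two-sample sketch with simulated data (not the paper's simulation design) might look like this.

```python
# Fractional logit via a quasi-binomial GLM: Binomial family, logit link,
# fractional response, sandwich standard errors. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 50
group = np.repeat([0, 1], n)                               # two-sample design
y = np.clip(rng.beta(2, 5, size=2 * n) + 0.05 * group, 0.001, 0.999)

X = sm.add_constant(group)
res = sm.GLM(y, X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(res.params)   # intercept and group effect on the logit scale
print(res.bse)      # robust standard errors
```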

  3. Analysis of Salmonella sp bacterial contamination on Vannamei Shrimp using binary logit model approach

    Science.gov (United States)

    Oktaviana, P. P.; Fithriasari, K.

    2018-04-01

    Most Indonesian citizens consume vannamei shrimp, and the shrimp is also one of Indonesia's mainstay export commodities. Vannamei shrimp in ponds and markets can be contaminated by Salmonella sp bacteria, which endanger human health. Salmonella sp bacterial contamination of vannamei shrimp can be affected by many factors. This study is intended to identify the factors thought to influence Salmonella sp bacterial contamination of vannamei shrimp. The test result for Salmonella sp bacterial contamination of vannamei shrimp was used as the response variable, with two categories: 0 = no Salmonella sp detected on the shrimp; 1 = Salmonella sp detected on the shrimp. Four factors thought to influence the contamination were used as predictor variables: the test result for Salmonella sp bacterial contamination of farmer hand swabs; the subdistrict of the vannamei shrimp ponds; the fish processing unit supplied; and the pond area in hectares. A binary logit model approach was used, in accordance with the two-category response variable. The analysis indicates that the predictor variables that significantly affect Salmonella sp bacterial contamination of vannamei shrimp are the farmer hand-swab test result and the subdistrict of the vannamei shrimp ponds.
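
    A minimal binary logit sketch with the statsmodels formula interface; the column names mirror the factors described above, but the data frame is simulated rather than the actual shrimp-testing records.

```python
# Binary logit with categorical predictors via the statsmodels formula API.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "salmonella": rng.integers(0, 2, n),                 # 1 = contaminated sample
    "hand_swab_positive": rng.integers(0, 2, n),
    "subdistrict": rng.choice(["A", "B", "C"], n),
    "pond_area_ha": rng.uniform(0.2, 3.0, n),
})

model = smf.logit("salmonella ~ hand_swab_positive + C(subdistrict) + pond_area_ha", data=df)
res = model.fit(disp=False)
print(np.exp(res.params))    # odds ratios for each factor
```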

  4. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    Science.gov (United States)

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, two multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in both the teenage and adult driver models, certain different attributes must be distinguished to specifically develop effective safety solutions for the two driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.

  5. The Market for Ph.D. Holders in Greece: Probit and Multinomial Logit Analysis of their Employment Status

    OpenAIRE

    Joan Daouli; Eirini Konstantina Nikolatou

    2015-01-01

    The objective of this paper is to investigate the factors influencing the probability that a Ph.D. holder in Greece will work in the academic sector, as well as the probability of his or her choosing employment in various sectors of industry and occupational categories. Probit/multinomial logit models are employed using the 2001 Census data. The empirical results indicate that being young, married, having a Ph.D. in Natural Sciences and/or in Engineering, granted by a Greek university, increa...

  6. Mixed-mode modelling mixing methodologies for organisational intervention

    CERN Document Server

    Clarke, Steve; Lehaney, Brian

    2001-01-01

    The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human-centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...

  7. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time ... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all ...
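
    A compact sketch of the MSL idea for a panel mixed logit with one normally distributed coefficient: each person's likelihood is the product of logit probabilities over choice occasions, averaged over simulation draws, and the simulated log-likelihood is maximised numerically. All data and dimensions below are synthetic.

```python
# Maximum simulated likelihood for a random-coefficients (mixed) logit, panel data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, J, R = 200, 5, 3, 100                  # people, occasions, alternatives, draws
x = rng.normal(size=(N, T, J))               # one attribute, e.g. cost
beta_n = rng.normal(-1.0, 0.5, size=N)       # true individual coefficients
u = beta_n[:, None, None] * x + rng.gumbel(size=(N, T, J))
choice = u.argmax(axis=2)                    # observed choices, shape (N, T)
onehot = np.eye(J)[choice]                   # indicator of the chosen alternative

draws = rng.normal(size=(N, R))              # fixed standard-normal draws

def neg_sim_loglik(theta):
    mu, log_sigma = theta
    b = mu + np.exp(log_sigma) * draws                     # (N, R) coefficient draws
    v = b[:, :, None, None] * x[:, None, :, :]             # (N, R, T, J) utilities
    p = np.exp(v - v.max(axis=3, keepdims=True))
    p /= p.sum(axis=3, keepdims=True)                      # logit probabilities
    chosen = (p * onehot[:, None, :, :]).sum(axis=3)       # prob. of the chosen alternative
    sim_prob = chosen.prod(axis=2).mean(axis=1)            # product over T, average over draws
    return -np.log(sim_prob + 1e-300).sum()

res = minimize(neg_sim_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("estimated mean, sd of the random coefficient:", res.x[0], np.exp(res.x[1]))
```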

  8. Analysis of the liquidity risk in credit unions: a logit multinomial approach

    Directory of Open Access Journals (Sweden)

    Rosiane Maria Lima Gonçalves

    2008-10-01

    Full Text Available Liquidity risk in financial institutions is associated with the balance between working capital and financial demands. Other factors that affect credit union liquidity are an unanticipated increase in withdrawals without an offsetting amount of new deposits, and a limited ability to promote geographical diversification of products. The objective of this study is to analyze the liquidity risk of credit unions in the state of Minas Gerais and its determinants. Financial ratios and the multinomial logit model are used. The cooperatives were classified into five categories of liquidity risk: very low, low, medium, high and very high. The empirical results indicate that high levels of liquidity are related to smaller values of the third-party capital use, working capital immobilization, and provision ratios, and to larger values of the total deposits/credit operations and asset growth ratios.

  9. Revealing additional preference heterogeneity with an extended random parameter logit model: the case of extra virgin olive oil

    Directory of Open Access Journals (Sweden)

    Ahmed Yangui

    2014-07-01

    Full Text Available Methods that account for preference heterogeneity have received a significant amount of attention in recent literature. Most of them have focused on preference heterogeneity around the mean of the random parameters, which has been specified as a function of socio-demographic characteristics. This paper aims at analyzing consumers' preferences towards extra-virgin olive oil in Catalonia using a methodological framework with two novelties over past studies: (1) it accounts for preference heterogeneity around both the mean and the variance; and (2) it considers both socio-demographic characteristics of consumers as well as their attitudinal factors. Estimated coefficients and moments of willingness to pay (WTP) distributions are compared with those obtained from alternative Random Parameter Logit (RPL) models. Results suggest that the proposed framework increases the goodness-of-fit and provides more useful insights for policy analysis. The most important attributes affecting consumers' preferences towards extra virgin olive oil are the price and the product's origin. The consumers perceive the organic olive oil attribute negatively, as they think that it is not worth paying a premium for a product that is healthy in nature.

  10. Multiple steady states, limit cycles and chaotic attractors in evolutionary games with Logit Dynamics

    NARCIS (Netherlands)

    Hommes, C.H.; Ochea, M.I.

    2010-01-01

    This paper investigates, by means of simple, three and four strategy games, the occurrence of periodic and chaotic behaviour in a smooth version of the Best Response Dynamics, the Logit Dynamics. The main finding is that, unlike Replicator Dynamics, generic Hopf bifurcation and thus, stable limit

  11. Modeling Intercity Mode Choice and Airport Choice in the United States

    OpenAIRE

    Ashiabor, Senanu Y.

    2007-01-01

    The aim of this study was to develop a framework to model travel choice behavior in order to estimate intercity travel demand at the national level in the United States. Nested and mixed logit models were developed to study national-level intercity transportation in the United States. A separate General Aviation airport choice model that estimates General Aviation person-trips and the number of aircraft operations through more than 3,000 airports was also developed. The combination of the General Aviati...

  12. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  13. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  14. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  15. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...
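
    A minimal random-intercept example with statsmodels, in the spirit of the linguistic applications described above; the reaction-time data, predictor and grouping variable are invented.

```python
# Random-intercept mixed model: reaction times grouped by subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
subjects = np.repeat(np.arange(20), 15)                        # 20 subjects x 15 items
freq = rng.normal(size=subjects.size)                          # e.g. word frequency (z-scored)
subj_effect = np.repeat(rng.normal(0, 25, 20), 15)             # subject-specific intercepts
rt = 500 - 30 * freq + subj_effect + rng.normal(0, 40, subjects.size)
df = pd.DataFrame({"rt": rt, "frequency": freq, "subject": subjects})

m = smf.mixedlm("rt ~ frequency", df, groups=df["subject"]).fit()
print(m.summary())
```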

  16. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  17. Age and pedestrian injury severity in motor-vehicle crashes: a heteroskedastic logit analysis.

    Science.gov (United States)

    Kim, Joon-Ki; Ulfarsson, Gudmundur F; Shankar, Venkataraman N; Kim, Sungyop

    2008-09-01

    This research explores the injury severity of pedestrians in motor-vehicle crashes. It is hypothesized that the variance of unobserved pedestrian characteristics increases with age. In response, a heteroskedastic generalized extreme value model is used. The analysis links explanatory factors with four injury outcomes: fatal, incapacitating, non-incapacitating, and possible or no injury. Police-reported crash data between 1997 and 2000 from North Carolina, USA, are used. The results show that pedestrian age induces heteroskedasticity which affects the probability of fatal injury. The effect grows more pronounced with increasing age past 65. The heteroskedastic model provides a better fit than the multinomial logit model. Notable factors increasing the probability of fatal pedestrian injury: increasing pedestrian age, male driver, intoxicated driver (2.7 times greater probability of fatality), traffic sign, commercial area, darkness with or without streetlights (2-4 times greater probability of fatality), sport-utility vehicle, truck, freeway, two-way divided roadway, speeding-involved, off roadway, motorist turning or backing, both driver and pedestrian at fault, and pedestrian only at fault. Conversely, the probability of a fatal injury decreased: with increasing driver age, during the PM traffic peak, with traffic signal control, in inclement weather, on a curved roadway, at a crosswalk, and when walking along roadway.

  18. A Mixed Logit Model of Homeowner Preferences for Wildfire Hazard Reduction

    Science.gov (United States)

    Thomas P. Holmes; John Loomis; Armando Gonzalez-Caban

    2010-01-01

    People living in the wildland-urban interface (WUI) are at greater risk of suffering major losses of property and life from wildfires. Over the past several decades the prevailing view has been that wildfire risk in rural areas was exogenous to the activities of homeowners. In response to catastrophic fires in the WUI over the past few years, recent approaches to fire...

  19. Mixed multinomial logit model for out-of-home leisure activity choice

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2013-01-01

    This paper documents the design and results of a study on the factors influencing the choice of out-of-home leisure activities. Influencing factors seem related to socio-demographic characteristics, personal preferences, characteristics of the built environment and other aspects of the activities

  20. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  1. ADVANCED MIXING MODELS

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-01-01

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and

  2. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-11-13

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and

  3. Analysis of Factors Influencing Bank Health Levels Using Logit Regression

    Directory of Open Access Journals (Sweden)

    Titik Aryati

    2007-09-01

    Full Text Available The article aims to identify the factors that affect the probability of a bank's health level, using CAMEL ratio analysis. The statistical method used to test the research hypothesis was logit regression. The dependent variable was the bank's health level, and the independent variables were the CAMEL financial ratios CAR, NPL, ROA, ROE, LDR, and NIM. The data were extracted from published bank financial reports compiled by the Infobank research bureau, with ratings based on Bank Indonesia policy. The sample consisted of 60 healthy banks and 14 unhealthy banks in 2005 and 2006. The empirical results indicate that the non-performing loan (NPL) ratio is the significant variable affecting the bank health level.

  4. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  5. Theoretical Models of Neutrino Mixing Recent Developments

    CERN Document Server

    Altarelli, Guido

    2009-01-01

    The data on neutrino mixing are at present compatible with Tri-Bimaximal (TB) mixing. If one takes this indication seriously then the models that lead to TB mixing in first approximation are particularly interesting and A4 models are prominent in this list. However, the agreement of TB mixing with the data could still be an accident. We discuss a recent model based on S4 where Bimaximal mixing is instead valid at leading order and the large corrections needed to reproduce the data arise from the diagonalization of charged leptons. The value of $\theta_{13}$ could distinguish between the two alternatives.

  6. Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.

    Science.gov (United States)

    Yang, Zhao; Liu, Pan; Xu, Xin

    2016-10-01

    Rational decision making regarding the safety related investment programs greatly depends on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups, including motorists and non-motorists. Both the binary logit model and mixed logit model were developed for the two groups of people. The results revealed that the mixed logit model is superior to the fixed coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, drive age (for motorists), occupation, whether the charged fees were used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists; and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.
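
    The Monte Carlo step can be sketched as drawing coefficients from their estimated mixing distributions and forming the ratio of the risk coefficient to (minus) the cost coefficient; the distributional assumptions and numbers below are placeholders, not the Nanjing estimates.

```python
# Monte Carlo sketch of a VSL distribution implied by a mixed logit WTP model.
import numpy as np

rng = np.random.default_rng(2024)
n_draws = 100_000
# Risk-reduction coefficient assumed normal; cost coefficient assumed lognormal
# (strictly negative) so the ratio stays well behaved. Values are placeholders.
beta_risk = rng.normal(loc=1.2, scale=0.4, size=n_draws)
beta_cost = -rng.lognormal(mean=np.log(4e-7), sigma=0.25, size=n_draws)   # per RMB of cost

vsl = beta_risk / (-beta_cost)     # marginal rate of substitution, RMB per statistical life
print("mean VSL (RMB): %.0f, sd: %.0f" % (vsl.mean(), vsl.std()))
```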

  7. Statistical models of global Langmuir mixing

    Science.gov (United States)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  8. Modelling mixed forest growth : a review of models for forest management

    NARCIS (Netherlands)

    Porte, A.; Bartelink, H.H.

    2002-01-01

    Most forests today are multi-specific and heterogeneous forests ('mixed forests'). However, forest modelling has been focusing on mono-specific stands for a long time; only recently have models been developed for mixed forests. Previous reviews of mixed forest modelling were restricted to certain

  9. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    Science.gov (United States)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of the mean and variance derived for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
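
    For a single mixed-gamma variable written as the product of a Bernoulli occurrence indicator and an independent gamma amount, the moments take a standard form (notation is mine, with the gamma scale parameterization assumed), and the means and variances of independent variables simply add:

```latex
% X = B G, with B ~ Bernoulli(p) (rain occurrence) independent of
% G ~ Gamma(alpha, beta) (nonzero amount, scale parameter beta):
\begin{align}
  E[X] &= p\,\alpha\beta, \qquad
  E[X^2] = p\,\alpha\beta^2(1+\alpha), \qquad
  \operatorname{Var}(X) = p\,\alpha\beta^2\bigl(1+\alpha(1-p)\bigr), \\
  E\Bigl[\textstyle\sum_{i=1}^{k} X_i\Bigr] &= \sum_{i=1}^{k} p_i\alpha_i\beta_i, \qquad
  \operatorname{Var}\Bigl(\textstyle\sum_{i=1}^{k} X_i\Bigr)
  = \sum_{i=1}^{k} p_i\alpha_i\beta_i^2\bigl(1+\alpha_i(1-p_i)\bigr)
  \quad\text{(independent $X_i$).}
\end{align}
```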

  10. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  11. System equivalent model mixing

    Science.gov (United States)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  12. Modeling of particle mixing in the atmosphere

    International Nuclear Information System (INIS)

    Zhu, Shupeng

    2015-01-01

    This thesis presents a newly developed size-composition resolved aerosol model (SCRAM), which is able to simulate the dynamics of externally-mixed particles in the atmosphere, and evaluates its performance in three-dimensional air-quality simulations. The main work is split into four parts. First, the research context of external mixing and aerosol modelling is introduced. Secondly, the development of the SCRAM box model is presented along with validation tests. Each particle composition is defined by the combination of mass-fraction sections of its chemical components or aggregates of components. The three main processes involved in aerosol dynamic (nucleation, coagulation, condensation/ evaporation) are included in SCRAM. The model is first validated by comparisons with published reference solutions for coagulation and condensation/evaporation of internally-mixed particles. The particle mixing state is investigated in a 0-D simulation using data representative of air pollution at a traffic site in Paris. The relative influence on the mixing state of the different aerosol processes and of the algorithm used to model condensation/evaporation (dynamic evolution or bulk equilibrium between particles and gas) is studied. Then, SCRAM is integrated into the Polyphemus air quality platform and used to conduct simulations over Greater Paris during the summer period of 2009. This evaluation showed that SCRAM gives satisfactory results for both PM2.5/PM10 concentrations and aerosol optical depths, as assessed from comparisons to observations. Besides, the model allows us to analyze the particle mixing state, as well as the impact of the mixing state assumption made in the modelling on particle formation, aerosols optical properties, and cloud condensation nuclei activation. Finally, two simulations are conducted during the winter campaign of MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric Pollution and climate effects, and Integrated tools for

  13. Development of two mix model postprocessors for the investigation of shell mix in indirect drive implosion cores

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.

    2007-01-01

    The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data

  14. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g., typically ~13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  15. Pricing and lot sizing optimization in a two-echelon supply chain with a constrained Logit demand function

    Directory of Open Access Journals (Sweden)

    Yeison Díaz-Mateus

    2017-07-01

    Full Text Available Decision making in supply chains is influenced by demand variations, which affect sales, purchase orders and inventory levels. This paper presents a non-linear optimization model for a two-echelon supply chain with a single product. In addition, the model includes the consumers' maximum willingness to pay, taking socioeconomic differences into account. To do so, the constrained multinomial logit for discrete choices is used to estimate demand levels. Then, a metaheuristic approach based on particle swarm optimization is proposed to determine the optimal product sales price and the inventory coordination variables. To validate the proposed model, a supply chain for a technological product was chosen and three scenarios were analyzed: discounts, demand segmentation and demand overestimation. Results are analyzed in terms of profits, lot sizing, inventory turnover and market share. It can be concluded that the maximum willingness to pay must be taken into consideration; otherwise fictitious profits may mislead decision making, and although the market share would seem to improve, overall profits are not necessarily better.
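
    The Python sketch below is not the authors' implementation; it merely illustrates, under assumed segment parameters (alpha, beta_price, wtp_max and unit_cost are invented), how a multinomial logit demand with a maximum-willingness-to-pay cut-off can feed a simple search over the sales price.

        # Illustrative sketch (not the paper's model): constrained multinomial logit
        # demand for one product across socioeconomic segments, plus a coarse grid
        # search over price. All parameter values are placeholders.
        import numpy as np

        def mnl_demand(price, segments):
            """Expected demand summed over segments.
            Each segment: (market_size, intrinsic_utility, price_sensitivity, wtp_max).
            Consumers whose maximum willingness to pay is below the price drop out."""
            demand = 0.0
            for size, alpha, beta_price, wtp_max in segments:
                if price > wtp_max:                 # constrained logit: zero choice probability
                    continue
                v_buy = alpha - beta_price * price  # systematic utility of buying
                p_buy = np.exp(v_buy) / (np.exp(v_buy) + 1.0)  # outside option utility = 0
                demand += size * p_buy
            return demand

        segments = [(5000, 2.0, 0.004, 900.0), (3000, 2.5, 0.002, 1500.0)]  # hypothetical
        unit_cost = 350.0
        prices = np.linspace(400, 1400, 101)
        profit = [(p - unit_cost) * mnl_demand(p, segments) for p in prices]
        print(f"price maximising single-period profit: {prices[int(np.argmax(profit))]:.0f}")

    In the paper the optimization is carried out with particle swarm optimization over price and inventory variables jointly; the grid search above only shows where the constrained demand model enters.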

  16. Cluster Correlation in Mixed Models

    Science.gov (United States)

    Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.

    2000-10-01

    We evaluate the dependence of the cluster correlation length, r_c, on the mean intercluster separation, D_c, for three models with critical matter density, vanishing vacuum energy (Λ=0), and COBE normalization: a tilted cold dark matter (tCDM) model (n=0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh=0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ_8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ_8/σ_25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h^-1 Mpc) and with high mass and distance resolution, enabling us to build artificial samples of clusters, whose total number and mass range allow us to cover the same D_c interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n=1 critical density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.

  17. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  18. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    Science.gov (United States)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β -EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β -PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β -EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
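
    For readers comparing the two closures, the assumed β-PDF baseline mentioned above is fully determined by the scalar mean and variance alone; in a standard notation (ours, not necessarily the paper's), with mean \bar{Z} and variance \overline{Z'^2},

        P(Z) = \frac{Z^{a-1}(1-Z)^{b-1}}{B(a,b)}, \qquad
        a = \bar{Z}\,\gamma, \quad b = (1-\bar{Z})\,\gamma, \quad
        \gamma = \frac{\bar{Z}(1-\bar{Z})}{\overline{Z'^2}} - 1,

    so the β shape parameters reproduce the prescribed first two moments, whereas, broadly speaking, the β-EQMOM reconstruction uses a weighted superposition of β kernels whose parameters are determined from a larger set of transported moments.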

  19. Lagrangian mixed layer modeling of the western equatorial Pacific

    Science.gov (United States)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Niño-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  20. Relating masses and mixing angles. A model-independent model

    Energy Technology Data Exchange (ETDEWEB)

    Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)

    2016-07-01

    In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.

  1. Scotogenic model for co-bimaximal mixing

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, P.M. [Instituto Superior de Engenharia de Lisboa - ISEL,1959-007 Lisboa (Portugal); Centro de Física Teórica e Computacional - FCUL, Universidade de Lisboa,R. Ernesto de Vasconcelos, 1749-016 Lisboa (Portugal); Grimus, W. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Wien (Austria); Jurčiukonis, D. [Institute of Theoretical Physics and Astronomy, Vilnius University,Saulėtekio ave. 3, LT-10222 Vilnius (Lithuania); Lavoura, L. [CFTP, Instituto Superior Técnico, Universidade de Lisboa,1049-001 Lisboa (Portugal)

    2016-07-04

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ_23 = 45° and to a CP-violating phase δ = ±π/2, while the mixing angle θ_13 remains arbitrary. The symmetries consist of softly broken lepton numbers L_α (α = e, μ, τ), a non-standard CP symmetry, and three ℤ_2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  2. Flapping model of scalar mixing in turbulence

    International Nuclear Information System (INIS)

    Kerstein, A.R.

    1991-01-01

    Motivated by the fluctuating plume model of turbulent mixing downstream of a point source, a flapping model is formulated for application to other configurations. For the scalar mixing layer, simple expressions for single-point scalar fluctuation statistics are obtained that agree with measurements. For a spatially homogeneous scalar mixing field, the family of probability density functions previously derived using mapping closure is reproduced. It is inferred that single-point scalar statistics may depend primarily on large-scale flapping motions in many cases of interest, and thus that multipoint statistics may be the principal indicators of finer-scale mixing effects

  3. A detailed aerosol mixing state model for investigating interactions between mixing state, semivolatile partitioning, and coagulation

    Directory of Open Access Journals (Sweden)

    J. Lu

    2010-04-01

    Full Text Available A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict that mixing state affects the amount and types of semivolatile organics that partition to available aerosol phases, causing external mixtures to produce a more size-varying composition than internal mixtures. Both coagulation and condensation contribute to the mixing of emitted particles, producing a collection of multiple compositionally distinct aerosol populations that exists somewhere between the extremes of a strictly external or internal mixture. The selection of mixing criteria has a significant impact on the size and type of individual populations that compose the modeled aerosol mixture. Computational demands for external mixture modeling are significant and can be controlled by limiting the number of aerosol populations used in the model.

  4. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  5. Advances in nonmarket valuation econometrics: Spatial heterogeneity in hedonic pricing models and preference heterogeneity in stated preference models

    Science.gov (United States)

    Yoo, Jin Woo

    In my first essay, the study explores Pennsylvania residents' willingness to pay for development of renewable energy technologies such as solar power, wind power, biomass electricity, and other renewable energy using a choice experiment method. Principal component analysis identified three independent attitude components that affect the variation of preference: a desire for renewable energy, a desire for environmental quality, and concern over cost. The results show that urban residents have a higher desire for environmental quality and less concern about cost than rural residents, and consequently have a higher willingness to pay to increase renewable energy production. The results of sub-sample analysis show that a representative respondent in rural (urban) Pennsylvania is willing to pay $3.8 ($5.9) and $4.1 ($5.7) per month for increasing the share of Pennsylvania electricity generated from wind power and other renewable energy by one percentage point, respectively. Mean WTP for solar and biomass electricity was not significantly different from zero. In my second essay, heterogeneity of individual WTP for various renewable energy technologies is investigated using several different variants of the multinomial logit model: a simple MNL with interaction terms, a latent class choice model, a random parameter mixed logit choice model, and a random parameter-latent class choice model. The results of all models consistently show that respondents' preferences for individual renewable technologies are heterogeneous, but the degree of heterogeneity differs across renewable technologies. In general, the random parameter logit model with interactions and a hybrid random parameter logit-latent class model fit better than the other models and better capture respondents' heterogeneity of preference for renewable energy. The impact of land under an agricultural conservation easement (ACE) contract on the values of nearby residential properties is investigated using housing sales data in two Pennsylvania
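
    As a minimal illustration of how such monthly WTP figures are read off a random parameter logit, the Python sketch below draws the attribute coefficient from an assumed normal distribution and divides by the cost coefficient; all numbers are invented and are not the essay's estimates.

        # Sketch: marginal WTP distribution implied by a random-parameter logit.
        # Coefficient values are placeholders; in practice they come from the
        # estimated model (e.g. mean/sd of the wind-power coefficient).
        import numpy as np

        rng = np.random.default_rng(0)
        beta_cost = -0.12                      # fixed cost coefficient (per $/month)
        b_wind_mean, b_wind_sd = 0.55, 0.30    # normally distributed attribute coefficient

        draws = rng.normal(b_wind_mean, b_wind_sd, size=100_000)
        wtp = -draws / beta_cost               # $/month per percentage-point increase

        print(f"mean WTP  : {wtp.mean():.2f} $/month")
        print(f"median WTP: {np.median(wtp):.2f} $/month")
        print(f"share with negative WTP: {(wtp < 0).mean():.1%}")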

  6. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    Science.gov (United States)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.

  7. A detailed aerosol mixing state model for investigating interactions between mixing state, semivolatile partitioning, and coagulation

    OpenAIRE

    J. Lu; F. M. Bowman

    2010-01-01

    A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS) representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict th...

  8. Stochastic modeling of consumer preferences for health care institutions.

    Science.gov (United States)

    Malhotra, N K

    1983-01-01

    This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.

  9. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
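
    The paper's closed-form rules are not reproduced here; as a rough cross-check of any candidate sample size, a Monte Carlo power calculation of the kind sketched below in Python (effect sizes, covariate correlation, significance level and replication count are all assumptions) makes the role of the covariate correlation explicit.

        # Monte Carlo power check for a candidate sample size in multiple logistic
        # regression (illustrative only; this is not the paper's analytical method).
        import numpy as np
        import statsmodels.api as sm

        def power(n, beta=(-0.5, 0.4, 0.3), rho=0.3, alpha=0.05, reps=500, seed=1):
            rng = np.random.default_rng(seed)
            cov = np.array([[1.0, rho], [rho, 1.0]])      # correlated covariates
            rejections = 0
            for _ in range(reps):
                x = rng.multivariate_normal([0, 0], cov, size=n)
                eta = beta[0] + x @ np.array(beta[1:])
                y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
                res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
                rejections += res.pvalues[1] < alpha       # test on the covariate of interest
            return rejections / reps

        print(f"estimated power at n=300: {power(300):.2f}")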

  10. Prediction of stock markets by the evolutionary mix-game model

    Science.gov (United States)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the efforts of using the evolutionary mix-game model, which is a modified form of the agent-based mix-game model, to predict financial time series. Here, we have carried out three methods to improve the original mix-game model by adding the abilities of strategy evolution to agents, and then applying the new model referred to as the evolutionary mix-game model to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can improve the accuracy of prediction greatly when proper parameters are chosen.

  11. Early Warning Models for Systemic Banking Crises in Montenegro

    Directory of Open Access Journals (Sweden)

    Željka Asanović

    2013-06-01

    Full Text Available The purpose of this research is to create an adequate early warning model for systemic banking crises in Montenegro. The probability of banking crisis occurrence is calculated using discrete dependent variable models, more precisely, estimating logit regression. Afterwards, seven simple logit regressions that individually have two explanatory variables are estimated. Adequate weights have been assigned to all seven regressions using the technique of Bayesian model averaging. The advantage of this technique is that it takes into account the model uncertainty by considering various combinations of models in order to minimize the author’s subjective judgment when determining reliable early warning indicators. The results of Bayesian model averaging largely coincide with the results of a previously estimated dynamic logit model. Indicators of credit expansion, thanks to their performances, have a dominant role in early warning models for systemic banking crises in Montenegro. The results have also shown that the Montenegrin banking system is significantly exposed to trends on the global level.
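
    The exact weighting scheme used in the paper is not reproduced here; the Python sketch below illustrates one common approximation to Bayesian model averaging over two-indicator logit regressions, using BIC-based weights and entirely synthetic placeholder data.

        # Sketch: BIC-approximated Bayesian model averaging over simple logit
        # regressions, each with two early-warning indicators (placeholder data).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 200
        df = pd.DataFrame({
            "credit_growth": rng.normal(size=n),
            "gdp_growth":    rng.normal(size=n),
            "deposit_rate":  rng.normal(size=n),
        })
        # synthetic crisis indicator, purely for demonstration
        eta = -1.0 + 1.2 * df["credit_growth"] - 0.5 * df["gdp_growth"]
        df["crisis"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

        pairs = [("credit_growth", "gdp_growth"),
                 ("credit_growth", "deposit_rate"),
                 ("gdp_growth", "deposit_rate")]

        fits = [sm.Logit(df["crisis"], sm.add_constant(df[list(p)])).fit(disp=0) for p in pairs]
        bics = np.array([f.bic for f in fits])
        weights = np.exp(-0.5 * (bics - bics.min()))
        weights /= weights.sum()                  # approximate posterior model probabilities

        avg_prob = sum(w * f.predict() for w, f in zip(weights, fits))  # averaged crisis probability
        for p, w in zip(pairs, weights):
            print(p, f"weight = {w:.2f}")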

  12. Mathematical study of mixing models

    International Nuclear Information System (INIS)

    Lagoutiere, F.; Despres, B.

    1999-01-01

    This report presents the construction and the study of a class of models that describe the behavior of compressible and non-reactive Eulerian fluid mixtures. Mixture models can have two different applications. Either they are used to describe physical mixtures, in the case of a true zone of extensive mixing (but then this modelling is incomplete and must be considered only as a starting point for the elaboration of truly relevant mixture models), or they are used to solve the problem of numerical mixture. The latter problem appears during the discretization of an interface which separates fluids having different equations of state: the zone of numerical mixing is the set of meshes which cover the interface. The attention is focused on numerical mixtures, for which the hypothesis of non-miscibility (physics) brings two equations (the sixth and the eighth of the system). It is important to emphasize that even in the case of a purely numerical mixture, the presence in one and the same place (the same mesh) of several fluids has to be taken into account. This is formalized by allowing mass fractions to take any value between 0 and 1, which is not at odds with the equations that derive from the hypothesis of non-miscibility. One way of looking at things is to consider that there are two scales of observation: the physical scale, at which one observes the separation of the fluids, and the numerical scale, given by the fineness of the mesh, at which a mixture appears. In this work, mixtures are considered from the mathematical angle (both in the elaboration phase and during their study). In particular, Chapter 5 shows a result of model degeneration for a non-extended mixing zone (the case of an interface): this justifies the use of the models in the case of numerical mixing. All these models are based on the classical model of non-viscous compressible fluids recalled in Chapter 2. In Chapter 3, the central point of the elaboration of the class of models is

  13. Modeling route choice criteria from home to major streets: A discrete choice approach

    Directory of Open Access Journals (Sweden)

    Jose Osiris Vidana-Bencomo

    2018-03-01

    Full Text Available A discrete choice model that consists of three sub-models was developed to investigate the route choice criteria of drivers who travel from their homes in the morning to an access point along the major streets that bound the Traffic Analysis Zones (TAZs). The first sub-model is a Nested Logit Model (NLM) that estimates the probability that a driver has or does not have multiple routes, and, if the driver has multiple routes, whether the route selection criteria are based on the access point's intersection control type or on other factors. The second sub-model is a Mixed Logit (MXL) model. It estimates the probabilities of the type of intersection control preferred by a driver. The third sub-model is an NLM that estimates the probabilities of a driver selecting his/her route for its shortest travel time or to avoid pedestrians, and, if the aim is to take the fastest route, whether the decision criterion is based on the shortest distance or on minimum stops and turns. Data gathered in a questionnaire survey were used to estimate the sub-models. The attributes of the utility functions of the sub-models are the driver's demographic and trip characteristics. The model provides a means for transportation planners to distribute the total number of home-based trips generated within a TAZ to the access points along the major streets that bound the TAZ.

  14. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  15. Computer modeling of jet mixing in INEL waste tanks

    International Nuclear Information System (INIS)

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review erosion literature in order to identify and estimate important sludge characterization parameters (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine overall feasibility of using jet mixing pumps and make design recommendations

  16. The consumer’s choice among television displays: A multinomial logit approach

    Directory of Open Access Journals (Sweden)

    Carlos Giovanni González Espitia

    2013-07-01

    Full Text Available The consumer’s choice over a bundle of products depends on observable and unobservable characteristics of goods and consumers. This choice is made in order to maximize utility subject to a budget constraint. At the same time, firms make product differentiation decisions to maximize profit. Quality is a form of differentiation. An example of this occurs in the TV market, where several displays are developed. Our objective is to determine the probability for a consumer of choosing a type of display from among five kinds: standard tube, LCD, plasma, projection and LED. Using a multinomial logit approach, we find that electronic appliances like DVDs and audio systems, as well as socioeconomic status, increase the probability of choosing a high-tech television display. Our empirical approximation contributes to further understanding rational consumer behavior through the theory of utility maximization and highlights the importance of studying market structure and analyzing changes in welfare and efficiency.
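
    For readers less familiar with the method, the multinomial logit probabilities behind such estimates have a simple closed form; the Python sketch below evaluates it for the five display types using invented utilities.

        # Multinomial logit choice probabilities over five TV display types.
        # Utility values are invented purely to illustrate the softmax form.
        import numpy as np

        alternatives = ["standard tube", "LCD", "plasma", "projection", "LED"]
        # systematic utilities V_j = x_j' beta for one consumer (made-up numbers)
        V = np.array([-0.8, 0.9, 0.4, -1.2, 1.1])

        expV = np.exp(V - V.max())          # subtract max for numerical stability
        P = expV / expV.sum()               # P_j = exp(V_j) / sum_k exp(V_k)

        for alt, p in zip(alternatives, P):
            print(f"{alt:14s} {p:.3f}")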

  17. A dynamic analysis of interfuel substitution for Swedish heating plants

    International Nuclear Information System (INIS)

    Braennlund, Runar; Lundgren, Tommy

    2004-01-01

    This paper estimates a dynamic model of interfuel substitution for Swedish heating plants. We use the cost share linear logit model developed by Considine and Mount [Considine, T.J., Mount, T.D., 1984. The use of linear logit models for dynamic input demand systems. Review of Economics and Statistics 66, 434-443]. All estimated own-price elasticities are negative and all cross-price elasticities are positive. The estimated dynamic adjustment rate parameter is small, however, increasing with the size of the plant and time, indicating fast adjustments in the fuel mix when changing relative fuel prices. The estimated model is used to illustrate the effects of two different policy changes

  18. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
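
    The %PCFrailty macro itself is not reproduced here; the Python sketch below only illustrates the data "explosion" step the abstract refers to, splitting each subject's follow-up time over the baseline-hazard pieces and attaching an event indicator and a log-exposure offset (column names and cut points are placeholders).

        # Sketch of "exploding" survival data into piecewise records for a Poisson
        # (generalized linear mixed) formulation of a log-normal frailty model.
        import numpy as np
        import pandas as pd

        cuts = np.array([0.0, 1.0, 2.0, 4.0])          # boundaries of the hazard pieces
        surv = pd.DataFrame({"id": [1, 2], "cluster": [10, 10],
                             "time": [1.5, 3.2], "event": [1, 0]})

        rows = []
        for _, r in surv.iterrows():
            for k in range(len(cuts) - 1):
                start, stop = cuts[k], cuts[k + 1]
                if r["time"] <= start:
                    break
                exposure = min(r["time"], stop) - start
                rows.append({"id": r["id"], "cluster": r["cluster"], "piece": k,
                             "y": int(r["event"] and r["time"] <= stop),
                             "log_exposure": np.log(exposure)})  # offset for the Poisson fit

        long = pd.DataFrame(rows)
        print(long)
        # 'y' is then modelled as Poisson with piece dummies, covariates, the offset
        # log_exposure, and a (log-normal) random effect for 'cluster'.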

  19. Extending existing structural identifiability analysis methods to mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Testing Benefits Transfer of Forest Recreation Values over a 20-year time Horizon

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Termansen, Mette; Jensen, F.S.

    2007-01-01

    We conduct a functional benefit transfer over 20 years of total willingness to pay based on car-borne forest recreation in 52 forests, using a mixed logit specification of a random utility model and geographic information systems to allow for heterogeneous preferences across the population and for heterogeneity over space. Results show that preferences for some forest attributes, such as species diversity and age, as well as transport mode have changed significantly over the period. Updating the transfer model with present total demand for recreation improves the error margins by an average of 282%. Average errors of the best transfer model remain 25%.

  1. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  2. Facet-based analysis of vacation planning process : a binary mixed logit panel model

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2013-01-01

    This article documents the design and results of a study on vacation planning processes with a particular focus on aggregate relationships between the probability that a certain facet of the vacation decision has been decided at a particular point in time, as a function of lead time to the actual

  3. Facet-based analysis of vacation planning processes : a binary mixed logit panel model

    NARCIS (Netherlands)

    Grigolon, Anna; Kemperman, Astrid; Timmermans, Harry

    2012-01-01

    This article documents the design and results of a study on vacation planning processes with a particular focus on aggregate relationships between the probability that a certain facet of the vacation decision has been decided at a particular point in time, as a function of lead time to the actual

  4. Modeling Stochastic Route Choice Behaviors with Equivalent Impedance

    Directory of Open Access Journals (Sweden)

    Jun Li

    2015-01-01

    Full Text Available A Logit-based route choice model is proposed to address the overlapping and scaling problems in the traditional multinomial Logit model. The nonoverlapping links are defined as a subnetwork, and its equivalent impedance is explicitly calculated in order to simplify network analysis. The overlapping links are repeatedly merged into subnetworks with Logit-based equivalent travel costs. The choice set at each intersection comprises only the virtual equivalent route without overlapping. In order to capture heterogeneity in perception errors across networks of different sizes, different scale parameters are assigned to the subnetworks and linked to the topological relationships to avoid estimation burden. The proposed model provides an alternative way to model stochastic route choice behaviors without the overlapping and scaling problems, and it still maintains the simple, closed-form expression of the MNL model. A link-based loading algorithm based on Dial's algorithm is proposed to obviate route enumeration, and it is suitable for application to large-scale networks. Finally, a comparison between the proposed model and other route choice models is given by numerical examples.
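
    The paper gives the exact construction; as a point of reference, the Logit-based equivalent impedance of a set of parallel, non-overlapping links with costs c_i and scale parameter θ is presumably the familiar logsum,

        c_{\mathrm{eq}} = -\frac{1}{\theta}\,\ln\sum_{i} e^{-\theta c_{i}}, \qquad
        P(i) = \frac{e^{-\theta c_{i}}}{\sum_{j} e^{-\theta c_{j}}},

    so that the merged subnetwork can be treated as a single virtual link while preserving the multinomial Logit split among its member links.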

  5. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization.

  6. Three novel approaches to structural identifiability analysis in mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provides a way of handling structural identifiability in mixed-effects models previously not

  7. Modeling tides and vertical tidal mixing: A reality check

    International Nuclear Information System (INIS)

    Robertson, Robin

    2010-01-01

    Recently, there has been a great interest in the tidal contribution to vertical mixing in the ocean. In models, vertical mixing is estimated using parameterization of the sub-grid scale processes. Estimates of the vertical mixing varied widely depending on which vertical mixing parameterization was used. This study investigated the performance of ten different vertical mixing parameterizations in a terrain-following ocean model when simulating internal tides. The vertical mixing parameterization was found to have minor effects on the velocity fields at the tidal frequencies, but large effects on the estimates of vertical diffusivity of temperature. Although there was no definitive best performer for the vertical mixing parameterization, several parameterizations were eliminated based on comparison of the vertical diffusivity estimates with observations. The best performers were the new generic coefficients for the generic length scale schemes and Mellor-Yamada's 2.5 level closure scheme.

  8. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  9. Modelling Preference Heterogeneity for Theatre Tickets

    DEFF Research Database (Denmark)

    Baldin, Andrea; Bille, Trine

    2018-01-01

    This article analyses the behavioural choice of theatre tickets using a rich data set for 2010–2013 from the sales system of the Royal Danish National Theatre. A consumer who decides to attend a theatre production faces multiple sources of price variation that involve a choice by the consumer among different ticket alternatives. Three modelling approaches are proposed in order to model ticket purchases: conditional logit with socio-demographic characteristics, nested logit and latent class. These models allow us explicitly to take into account consumers' preference heterogeneity with respect to […] of behaviour in the choice of theatre tickets.

  10. FACTORS THAT AFFECT TRANSPORT MODE PREFERENCE FOR GRADUATE STUDENTS IN THE NATIONAL UNIVERSITY OF MALAYSIA BY LOGIT METHOD

    Directory of Open Access Journals (Sweden)

    ALI AHMED MOHAMMED

    2013-06-01

    Full Text Available A study was carried out to examine the perceptions and preferences of students regarding the type of transportation chosen for their travels on the university campus. The study focused on providing personal transport users with road transport alternatives as a countermeasure aimed at shifting car users to other modes of transportation. In total, 456 questionnaires were administered to develop a model of transportation mode preferences. A logit model, estimated with SPSS, was used to identify the factors that affect the choice of transportation mode. Results indicated that reducing travel time by 70% would reduce the share of private car users by 84%, while reducing travel cost was found to substantially improve the utilization of public modes. The study revealed that positive measures are needed to shift travellers from private to public modes; such measures reduce travel time and travel cost, thereby improving the services and contributing to sustainability.
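
    As an illustration of how such mode-shift figures follow from a fitted logit, the Python sketch below recomputes the private-car choice probability after cutting the public-transport travel time; the coefficients are invented and are not those estimated from the survey.

        # Sketch: effect of a travel-time reduction on private-car choice probability
        # in a binary logit (all coefficients and attribute levels are placeholders).
        import numpy as np

        def p_car(time_car, cost_car, time_bus, cost_bus,
                  asc=0.8, b_time=-0.06, b_cost=-0.4):
            v_car = asc + b_time * time_car + b_cost * cost_car
            v_bus = b_time * time_bus + b_cost * cost_bus
            return 1.0 / (1.0 + np.exp(-(v_car - v_bus)))

        base = p_car(time_car=20, cost_car=3.0, time_bus=35, cost_bus=1.0)
        new = p_car(time_car=20, cost_car=3.0, time_bus=35 * 0.3, cost_bus=1.0)  # bus time cut by 70%
        print(f"car share before: {base:.2f}, after: {new:.2f}")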

  11. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy's Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across their ranges and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed for summing petroleum use and greenhouse gas emissions. These include the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.

  12. Cognitive overload? An exploration of the potential impact of cognitive functioning in discrete choice experiments with older people in health care.

    Science.gov (United States)

    Milte, Rachel; Ratcliffe, Julie; Chen, Gang; Lancsar, Emily; Miller, Michelle; Crotty, Maria

    2014-07-01

    This exploratory study sought to investigate the effect of cognitive functioning on the consistency of individual responses to a discrete choice experiment (DCE) study conducted exclusively with older people. A DCE to investigate preferences for multidisciplinary rehabilitation was administered to a consenting sample of older patients (aged 65 years and older) after surgery to repair a fractured hip (N = 84). Conditional logit, mixed logit, heteroscedastic conditional logit, and generalized multinomial logit regression models were used to analyze the DCE data and to explore the relationship between the level of cognitive functioning (specifically the absence or presence of mild cognitive impairment as assessed by the Mini-Mental State Examination) and preference and scale heterogeneity. Both the heteroscedastic conditional logit and generalized multinomial logit models indicated that the presence of mild cognitive impairment did not have a significant effect on the consistency of responses to the DCE. This study provides important preliminary evidence relating to the effect of mild cognitive impairment on DCE responses for older people. It is important that further research be conducted in larger samples and more diverse populations to further substantiate the findings from this exploratory study and to assess the practicality and validity of the DCE approach with populations of older people. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Contingent valuation with logit modelling and multivariate analysis: a case study of the willingness to accept compensation of coffee planters linked to the PRO-CAFÉ program of Viçosa - MG

    Directory of Open Access Journals (Sweden)

    Pedro Silveira Máximo

    2009-12-01

    Full Text Available The objective of this study was to identify which method, the LOGIT model or multivariate analysis, is the more effective for estimating the coffee planters' Willingness to Accept Compensation (WAC) when the marginal utility bias is liable to occur. To this end, a questionnaire with 33 questions was formulated, covering the coffee planters' socio-economic characteristics, the contingent valuation methodology (CVM), and the "bidding game" payment vehicle that revealed the willingness to accept compensation for exchanging one hectare of coffee for one hectare of forest. As expected, because of the marginal utility bias, the LOGIT method was unable to produce consistent results. The estimation of the WAC by multivariate analysis, in turn, showed that if the government were willing to increase the provision of forest by 70 ha, it would have to spend 254,200 reais (around 116,000 dollars) per year, dealing only with the coffee planters linked to the PRO-CAFÉ program.

  14. Kriging with mixed effects models

    Directory of Open Access Journals (Sweden)

    Alessio Pollice

    2007-10-01

    Full Text Available In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.

  15. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  16. Simplified models of mixed dark matter

    International Nuclear Information System (INIS)

    Cheung, Clifford; Sanford, David

    2014-01-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify ''blind spots'' at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors

  17. Combining RP and SP data while accounting for large choice sets and travel mode

    DEFF Research Database (Denmark)

    Abildtrup, Jens; Olsen, Søren Bøye; Stenger, Anne

    2015-01-01

    set used for site selection modelling when the actual choice set considered is potentially large and unknown to the analyst. Easy access to forests also implies that around half of the visitors walk or bike to the forest. We apply an error-component mixed-logit model to simultaneously model the travel...

  18. Easy and flexible mixture distributions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Mabit, Stefan L.

    2013-01-01

    We propose a method to generate flexible mixture distributions that are useful for estimating models such as the mixed logit model using simulation. The method is easy to implement, yet it can approximate essentially any mixture distribution. We test it with good results in a simulation study...
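
    The authors' specific construction is not reproduced here; the Python sketch below only shows the general mechanism of plugging draws from a flexible mixture distribution (an arbitrary two-component normal mixture is used as a placeholder) into the simulated choice probabilities of a mixed logit.

        # Sketch: simulated mixed logit probabilities with a mixture-distributed
        # coefficient (placeholder two-component normal mixture, made-up attributes).
        import numpy as np

        rng = np.random.default_rng(7)
        R = 5000                                  # simulation draws

        # coefficient on price drawn from a mixture: w*N(m1,s1) + (1-w)*N(m2,s2)
        w, m1, s1, m2, s2 = 0.3, -2.0, 0.5, -0.5, 0.3
        comp = rng.random(R) < w
        beta_price = np.where(comp, rng.normal(m1, s1, R), rng.normal(m2, s2, R))

        prices = np.array([1.0, 1.4, 0.9])        # three alternatives
        asc = np.array([0.0, 0.5, 0.2])

        V = asc[None, :] + beta_price[:, None] * prices[None, :]   # R x J utilities
        expV = np.exp(V - V.max(axis=1, keepdims=True))
        P = expV / expV.sum(axis=1, keepdims=True)

        print("simulated choice probabilities:", P.mean(axis=0).round(3))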

  19. Modeling of Salt Solubilities in Mixed Solvents

    DEFF Research Database (Denmark)

    Chiavone-Filho, O.; Rasmussen, Peter

    2000-01-01

    A method to correlate and predict salt solubilities in mixed solvents using a UNIQUAC+Debye-Huckel model is developed. The UNIQUAC equation is applied in a form with temperature-dependent parameters. The Debye-Huckel model is extended to mixed solvents by properly evaluating the dielectric...... constants and the liquid densities of the solvent media. To normalize the activity coefficients, the symmetric convention is adopted. Thermochemical properties of the salt are used to estimate the solubility product. It is shown that the proposed procedure can describe with good accuracy a series of salt...

  20. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
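    The following is a minimal sketch, not the authors' estimator, of the general idea of combining a B-spline basis for a smoothly varying effect with area-level random intercepts using standard software (statsmodels). The data and the column names ("y", "x", "area") and spline settings are all hypothetical.

```python
# Minimal sketch: a spline-based smooth fixed effect plus area-level random
# intercepts, fitted with standard software in the spirit of the approach above.
# Column names ("y", "x", "area") and all numbers are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_area, n_per = 30, 20
area = np.repeat(np.arange(n_area), n_per)
x = rng.uniform(0, 1, n_area * n_per)            # covariate with a smooth effect
u = rng.normal(0, 0.5, n_area)[area]             # area-level random intercepts
y = np.sin(2 * np.pi * x) * x + u + rng.normal(0, 0.3, x.size)
df = pd.DataFrame({"y": y, "x": x, "area": area})

# B-spline basis (fixed effects) + random intercept for each small area.
model = smf.mixedlm("y ~ bs(x, df=5)", df, groups=df["area"])
fit = model.fit()
print(fit.summary())
```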

  1. Think twice before you book? Modelling the choice of public vs private dentist in a choice experiment.

    Science.gov (United States)

    Kiiskinen, Urpo; Suominen-Taipale, Anna Liisa; Cairns, John

    2010-06-01

    This study concerns the choice of primary dental service provider by consumers. If the health service delivery system allows individuals to choose between public-care providers or if complementary private services are available, it is typically assumed that utilisation is a three-stage decision process. The patient first makes a decision to seek care, and then chooses the service provider. The final stage, involving decisions over the amount and form of treatment, is not considered here. The paper reports a discrete choice experiment (DCE) designed to evaluate attributes affecting individuals' choice of dental-care provider. The feasibility of the DCE approach in modelling consumers' choice in the context of non-acute need for dental care is assessed. The aim is to test whether a separate two-stage logit, a multinomial logit, or a nested logit best fits the choice process of consumers. A nested logit model of indirect utility functions is estimated and inclusive value (IV) constraints are tested for modelling implications. The results show that non-trading behaviour has an impact on the choice of appropriate modelling technique, but is to some extent dependent on the choice of scenarios offered. It is concluded that for traders multinomial logit is appropriate, whereas for non-traders and on average the nested logit is the method supported by the analyses. The consistent finding in all subgroup analyses is that the traditional two-stage decision process is found to be implausible in the context of consumer's choice of dental-care provider.
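    As a rough illustration of the multinomial logit baseline that the nested logit is compared against, the sketch below fits statsmodels' MNLogit to simulated provider-choice data (no visit / public / private). The covariates, coefficients and alternatives are invented for the example; the nested logit and two-stage models from the study are not reproduced here.

```python
# Minimal sketch of a multinomial logit baseline for dental provider choice
# (0 = no visit, 1 = public, 2 = private). Data and variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
income = rng.normal(0, 1, n)
fee_gap = rng.normal(0, 1, n)          # private minus public out-of-pocket fee
X = sm.add_constant(pd.DataFrame({"income": income, "fee_gap": fee_gap}))

# Simulate choices from an assumed multinomial logit (rows: alternatives,
# columns: const, income, fee_gap).
alt_coefs = np.array([[0.0, 0.0, 0.0],     # no visit (reference)
                      [0.2, 0.5, -0.3],    # public provider
                      [-0.1, 0.8, -0.8]])  # private provider
util = X.values @ alt_coefs.T + rng.gumbel(size=(n, 3))
choice = util.argmax(axis=1)

fit = sm.MNLogit(choice, X).fit(disp=False)
print(fit.summary())
```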

  2. Heterogeneity in the WTP for recreational access

    DEFF Research Database (Denmark)

    Campbell, Danny; Vedel, Suzanne Elizabeth; Thorsen, Bo Jellesmark

    2014-01-01

    In this study we have addressed appropriate modelling of heterogeneity in willingness to pay (WTP) for environmental goods, and have demonstrated its importance using a case of forest access in Denmark. We compared WTP distributions for four models: (1) a multinomial logit model, (2) a mixed logit...... model assuming a univariate Normal distribution, (3) or assuming a multivariate Normal distribution allowing for correlation across attributes, and (4) a mixture of two truncated Normal distributions, allowing for correlation among attributes. In the first two models mean WTP for enhanced access...... was negative. However, models accounting for preference heterogeneity found a positive mean WTP, but a large sub-group with negative WTP. Accounting for preference heterogeneity can alter overall conclusions, which highlights the importance of this for policy recommendations....
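    The sketch below illustrates, with made-up numbers rather than the paper's estimates, how a Normal mixing distribution on an access attribute combined with a fixed cost coefficient translates into a WTP distribution, including a mean of one sign and a sizeable sub-group of the opposite sign.

```python
# Illustrative only: how WTP summaries follow from a mixed logit with a
# Normally distributed attribute coefficient and a fixed cost coefficient.
# The numbers below are invented; they are not estimates from the study.
import numpy as np

rng = np.random.default_rng(42)
b_access_mean, b_access_sd = 0.15, 0.60   # assumed Normal mixing distribution
b_cost = -0.02                            # assumed fixed cost coefficient

draws = rng.normal(b_access_mean, b_access_sd, 100_000)
wtp = -draws / b_cost                     # WTP = -beta_attribute / beta_cost

print(f"mean WTP:            {wtp.mean():8.2f}")
print(f"median WTP:          {np.median(wtp):8.2f}")
print(f"share with WTP < 0:  {(wtp < 0).mean():8.2%}")
```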

  3. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG(obs) within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  4. A system dynamics model to determine products mix

    Directory of Open Access Journals (Sweden)

    Mahtab Hajghasem

    2014-02-01

    Full Text Available This paper presents an implementation of a system dynamics model to determine an appropriate product mix by considering various factors such as labor, materials, overhead, etc. for an Iranian producer of cosmetic and sanitary products. The proposed model considers three hypotheses, concerning the relationship between product mix and profitability, optimum production capacity, and holding minimum storage to take advantage of low-cost production. The implementation of the system dynamics model in the VENSIM software package confirmed all three hypotheses of the survey and suggested that, in order to reach a better product mix, it is necessary to achieve optimum production planning, take advantage of all available production capacity and use inventory management techniques.

  5. Modeling pedestrian gap crossing index under mixed traffic condition.

    Science.gov (United States)

    Naser, Mohamed M; Zulkiple, Adnan; Al Bargi, Walid A; Khalifa, Nasradeen A; Daniel, Basil David

    2017-12-01

    There are a variety of challenges faced by pedestrians when they walk along and attempt to cross a road, as most recorded accidents occur during this time. Pedestrians of all types, of both sexes and across numerous age groups, are always subjected to risk and are characterized as the most exposed road users. The increased demand for better traffic management strategies to reduce risks at intersections, improve the quality of traffic management, and cope with higher traffic volumes and longer cycle times has further increased concerns over the past decade. This paper aims to develop a sustainable pedestrian gap crossing index model based on traffic flow density. It focuses on the gaps accepted by pedestrians and their street-crossing decisions, where the logarithm of accepted gaps (Log-Gap) was used to optimize the result of a model for gap crossing behavior. Through a review of extant literature, 15 influential variables were extracted for further empirical analysis. Subsequently, data from observations at an uncontrolled mid-block on Jalan Ampang in Kuala Lumpur, Malaysia were gathered, and Multiple Linear Regression (MLR) and Binary Logit Model (BLM) techniques were employed to analyze the results. From the results, different pedestrian behavioral characteristics were considered for a minimum gap size model, out of which only four variables could explain pedestrian road crossing behavior while the remaining variables had an insignificant effect. Among the different variables, age, rolling gap, vehicle type, and crossing were the most influential variables. The study concludes that pedestrians' decision to cross the street depends on pedestrian age, rolling gap, vehicle type, and the size of the traffic gap before crossing. The inferences from these models will be useful for increasing pedestrian safety and for the performance evaluation of uncontrolled midblock road crossings in developing countries. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
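    A minimal sketch of the binary logit component (accept or reject an offered gap) is given below, using the logarithm of the gap as in the abstract. The data are simulated and the variable names and coefficients are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a binary logit for gap acceptance, using the log of the
# offered gap as described above. Variables and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 800
gap = rng.uniform(1, 12, n)                     # offered traffic gap (s)
age = rng.integers(15, 75, n)                   # pedestrian age (years)
heavy_vehicle = rng.integers(0, 2, n)           # 1 if approaching vehicle is heavy

# Simulate accept/reject decisions from an assumed logit process.
eta = -4.0 + 2.2 * np.log(gap) - 0.01 * age - 0.6 * heavy_vehicle
p_accept = 1.0 / (1.0 + np.exp(-eta))
accept = rng.binomial(1, p_accept)

X = sm.add_constant(pd.DataFrame({
    "log_gap": np.log(gap), "age": age, "heavy_vehicle": heavy_vehicle}))
fit = sm.Logit(accept, X).fit(disp=False)
print(fit.params)
```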

  6. UN MODELO LOGIT PARA LA FRAGILIDAD DEL SISTEMA FINANCIERO VENEZOLANO DENTRO DEL CONTEXTO DE LOS PROCESOS DE FUSIÓN E INTERVENCIÓN | A LOGIT APPROACH OF THE VENEZUELAN FINANCIAL SYSTEM WITHIN THE CONTEXT OF MERGER AND INTERVENTION

    Directory of Open Access Journals (Sweden)

    César Rubicundo

    2016-08-01

    Full Text Available In Venezuela there have been more than 30 mergers since the approval of the Banking Act in 1999: from 103 institutions, the financial system closed the year 2013 with 35 brokerage firms, which represents a decrease of 66% due to 20 coalitions, 30 transformations and 18 settlements. Therefore, an analysis of the current economic situation of the financial system in the context of mergers and interventions is proposed, considering internal and external factors according to the constituted capital. The study was based on information from 37 private-capital institutions and 4 public institutions, covering January 2009 to December 2013. The preliminary analysis for privately held banks showed that these institutions have a 78.40% probability of not incurring situations of fragility, while the state-capital banks have an 83.30% probability of surviving in the market. As for the estimated logit models, it was found that the liquidity ratio, ROE, management index and inflation are components that push privately held banks towards a fragile situation, and a forecast of the probability of fragility is made for them. With regard to the state-capital banks, this situation is explained, at 62.50%, by the equity index, ROE, and inflation. A probability of stability is expected for these banks. The joint model forecasted a probability of a stable financial system for the coming months.

  7. Advective mixing in a nondivergent barotropic hurricane model

    Directory of Open Access Journals (Sweden)

    B. Rutherford

    2010-01-01

    Full Text Available This paper studies Lagrangian mixing in a two-dimensional barotropic model for hurricane-like vortices. Since such flows show high shearing in the radial direction, particle separation across shear-lines is diagnosed through a Lagrangian field, referred to as R-field, that measures trajectory separation orthogonal to the Lagrangian velocity. The shear-lines are identified with the level-contours of another Lagrangian field, referred to as S-field, that measures the average shear-strength along a trajectory. Other fields used for model diagnostics are the Lagrangian field of finite-time Lyapunov exponents (FTLE-field, the Eulerian Q-field, and the angular velocity field. Because of the high shearing, the FTLE-field is not a suitable indicator for advective mixing, and in particular does not exhibit ridges marking the location of finite-time stable and unstable manifolds. The FTLE-field is similar in structure to the radial derivative of the angular velocity. In contrast, persisting ridges and valleys can be clearly recognized in the R-field, and their propagation speed indicates that transport across shear-lines is caused by Rossby waves. A radial mixing rate derived from the R-field gives a time-dependent measure of flux across the shear-lines. On the other hand, a measured mixing rate across the shear-lines, which counts trajectory crossings, confirms the results from the R-field mixing rate, and shows high mixing in the eyewall region after the formation of a polygonal eyewall, which continues until the vortex breaks down. The location of the R-field ridges elucidates the role of radial mixing for the interaction and breakdown of the mesovortices shown by the model.

  8. Analyzing the preference for non-exclusive forms of telecommuting: Modeling and policy implications

    OpenAIRE

    Bagley, Michael N.; Mokhtarian, Patricia L.

    1997-01-01

    This study examines three models of the individual’s preference for home- and center-based telecommuting. Issues concerning the estimation of discrete models when the alternatives are non-exclusive are discussed. Two binary logit models are presented, one on the preference to telecommute from a center versus not telecommuting from a center (adjusted ρ² = 0.24), and the other on the preference to telecommute from a center over telecommuting from home (adjusted ρ² = 0.64). A nested logit model is...

  9. Additive action model for mixed irradiation

    International Nuclear Information System (INIS)

    Lam, G.K.Y.

    1984-01-01

    Recent experimental results indicate that a mixture of high and low LET radiation may have some beneficial features (such as a lower OER but with skin sparing) for clinical use, and interest has been renewed in the study of mixtures of high and low LET radiation. Several standard radiation inactivation models can readily accommodate interaction between two mixed radiations; however, this is usually handled by postulating extra free parameters, which can only be determined by fitting to experimental data. A model without any free parameters is proposed to explain the biological effect of mixed radiations, based on the following two assumptions: (a) the combined biological action due to two radiations is additive, assuming no repair has taken place during the interval between the two irradiations; and (b) the initial physical damage induced by radiation develops into the final biological effect (e.g. cell killing) over a relatively long period (hours) after irradiation. This model has been shown to provide a satisfactory fit to the experimental results of previous studies.

  10. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  11. Surface wind mixing in the Regional Ocean Modeling System (ROMS)

    Science.gov (United States)

    Robertson, Robin; Hartlipp, Paul

    2017-12-01

    Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for comparison. The vertical mixing parameterizations investigated were the Mellor-Yamada 2.5-level turbulence closure (MY), the Large-McWilliams-Doney KPP scheme (LMD), the Nakanishi-Niino scheme (NN), and the generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performance was by NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as that of NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations of the vertical mixing parameterizations.

  12. Decision-case mix model for analyzing variation in cesarean rates.

    Science.gov (United States)

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  13. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
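    For orientation, the sketch below shows the frequentist-optimization flavour of a mixing model mentioned above: source proportions constrained to the simplex are chosen to reproduce a target geochemistry. The source signatures and target sample are synthetic, and no Bayesian error structure is included.

```python
# Minimal sketch of a frequentist mixing-model solve: choose source proportions
# (non-negative, summing to 1) that best reproduce the target geochemistry.
# Source signatures and the target sample are synthetic, not the study's data.
import numpy as np
from scipy.optimize import minimize

# Tracer means for 3 sources x 4 geochemical tracers (made-up values).
sources = np.array([[12.0, 3.1, 40.0, 0.8],    # arable topsoils
                    [ 9.5, 2.2, 55.0, 1.4],    # road verges
                    [ 4.0, 1.0, 70.0, 2.0]])   # subsurface material
true_p = np.array([0.15, 0.10, 0.75])
target = true_p @ sources                       # synthetic SPM sample

def loss(p):
    return np.sum((p @ sources - target) ** 2)

res = minimize(loss, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print("estimated proportions:", np.round(res.x, 3))
```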

  14. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  15. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  16. Analysis and modeling of subgrid scalar mixing using numerical data

    Science.gov (United States)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence is used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  17. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    Science.gov (United States)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 - θ13.

  18. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    Science.gov (United States)

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from the mixed PR showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the zero-inflated Poisson mixed model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
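    The sketch below fits only the fixed-effects zero-inflated Poisson part with statsmodels; the compound Poisson random effects used in the study are beyond what this library fits directly and are omitted. All covariates and coefficients are simulated.

```python
# Minimal sketch of a zero-inflated Poisson fit (fixed effects only; the
# compound-Poisson random effects described above are not included here).
# Covariates and coefficients are synthetic.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Structural zeros with probability pi, otherwise Poisson counts with mean mu(x).
pi = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))      # inflation (logit) part
mu = np.exp(0.5 + 0.6 * x)                        # count (Poisson) part
y = np.where(rng.uniform(size=n) < pi, 0, rng.poisson(mu))

fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(maxiter=500,
                                                                    disp=False)
print(fit.summary())
```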

  19. Sensitivity of the urban airshed model to mixing height profiles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)

    1994-12-31

    The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer, or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations compared to the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to differences in the shape of the mixing height profiles and their rate of growth during the morning hours when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NOx-focussed controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focussed controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  20. Comparison of four mathematical models for the calculation of radioimmunoassay data of LH, FSH and GH

    International Nuclear Information System (INIS)

    Geier, T.; Rohde, W.

    1981-01-01

    Weighted linear logit-log regression, point-to-point logit-log interpolation, smoothing spline approximation and the four-parameter logistic function calculated by non-linear regression have been compared. The data for comparison have been obtained from two different pool-sera for each of the LH-, FSH- and GH-RIA and from the basal serum LH values of two populations of children. The Wilcoxon matched pairs signed rank test was used for comparison: For GH there is no significant difference between all methods, for FSH the weighted linear logit-log regression and spline approximation appeared to be equivalent, but for LH no unequivocal assertion can be made. There is no significant difference between the mathematical models for determination of hormone concentration within one assay run of a population as exemplified for LH. In addition, pool sera data were subjected to an analysis of variance and the comparison of the results revealed that the different models did not lead to different statements about assay performance. The point-to-point logit-log interpolation is proposed as most simple curvilinear approximation for assays which cannot be linearized by logit-log transformation. (author)

  1. Ill-posedness in modeling mixed sediment river morphodynamics

    Science.gov (United States)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.

  2. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture...... the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly...... of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...

  3. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    Science.gov (United States)

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
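    As one concrete route in widely available software (statsmodels here, rather than the SPSS add-on named above), the sketch below expresses crossed participant and item random intercepts through variance components attached to a single constant grouping variable. The data are simulated and the reaction-time set-up is hypothetical.

```python
# A sketch of crossed random effects (participants x items) using variance
# components in statsmodels MixedLM. The single constant group plus vc_formula
# is one common way to express fully crossed random intercepts in this library.
# All data are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_item = 40, 30
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
cond = rng.integers(0, 2, subj.size)              # e.g., a word-frequency condition

rt = (600 + 25 * cond
      + rng.normal(0, 30, n_subj)[subj]           # participant random intercepts
      + rng.normal(0, 20, n_item)[item]           # item random intercepts
      + rng.normal(0, 50, subj.size))             # residual noise
df = pd.DataFrame({"rt": rt, "cond": cond, "subj": subj, "item": item, "one": 1})

model = smf.mixedlm("rt ~ cond", df, groups=df["one"],
                    vc_formula={"subj": "0 + C(subj)", "item": "0 + C(item)"})
print(model.fit().summary())
```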

  4. Stochastic model of Rayleigh-Taylor turbulent mixing

    International Nuclear Information System (INIS)

    Abarzhi, S.I.; Cadjan, M.; Fedotov, S.

    2007-01-01

    We propose a stochastic model to describe the random character of the dissipation process in Rayleigh-Taylor turbulent mixing. The parameter alpha, used conventionally to characterize the mixing growth-rate, is not a universal constant and is very sensitive to the statistical properties of the dissipation. The ratio between the rates of momentum loss and momentum gain is the statistical invariant and a robust parameter to diagnose with or without turbulent diffusion accounted for

  5. Mixing height derived from the DMI-HIRLAM NWP model, and used for ETEX dispersion modelling

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, J.H.; Rasmussen, A. [Danish Meteorological Inst., Copenhagen (Denmark)

    1997-10-01

    For atmospheric dispersion modelling it is of great significance to estimate the mixing height well. Mesoscale and long-range diffusion models using output from numerical weather prediction (NWP) models may well use NWP model profiles of wind, temperature and humidity in the computation of the mixing height. This is dynamically consistent and enables calculation of the mixing height for predicted states of the atmosphere. In autumn 1994, the European Tracer Experiment (ETEX) was carried out with the objective to validate atmospheric dispersion models. The Danish Meteorological Institute (DMI) participates in the model evaluations with the Danish Emergency Response Model of the Atmosphere (DERMA) using NWP model data from the DMI version of the High Resolution Limited Area Model (HIRLAM) as well as from the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). In DERMA, calculation of mixing heights is performed based on a bulk Richardson number approach. Comparing with tracer gas measurements for the first ETEX experiment, a sensitivity study is performed for DERMA. Using DMI-HIRLAM data, the study shows that optimum values of the critical bulk Richardson number in the range 0.15-0.35 are adequate. These results are in agreement with recent mixing height verification studies against radiosonde data. The fairly large range of adequate critical values is a signature of the robustness of the method. Direct verification results against observed mixing heights from operational radiosondes released under the ETEX plume are presented. (au) 10 refs.

  6. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  7. Modeling Dynamic Effects of the Marketing Mix on Market Shares

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2003-01-01

    textabstractTo comprehend the competitive structure of a market, it is important to understand the short-run and long-run effects of the marketing mix on market shares. A useful model to link market shares with marketing-mix variables, like price and promotion, is the market share attraction model.

  8. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  9. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)

  10. Mixed-order phase transition in a one-dimensional model.

    Science.gov (United States)

    Bar, Amir; Mukamel, David

    2014-01-10

    We introduce and analyze an exactly soluble one-dimensional Ising model with long range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous as in first order transitions while the correlation length diverges as in second order transitions. Such transitions are known to appear in diverse classes of models that are seemingly unrelated. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.

  11. The Simulation of Financial Markets by Agent-Based Mix-Game Models

    OpenAIRE

    Chengling Gou

    2006-01-01

    This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...

  12. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    Science.gov (United States)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s the amplitude of the variability varies significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of mixing is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  13. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  14. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase clouds simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-senor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for liquid phase, and IWC, Dge profiles and ice concentration for ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5). A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different due to much higher ice concentrations in convective mixed phase clouds. 6) Systematic evaluations

  15. A model for quasi parity-doublet spectra with strong coriolis mixing

    International Nuclear Information System (INIS)

    Minkov, N.; Drenska, S.; Strecker, M.

    2013-01-01

    The model of coherent quadrupole and octupole motion (CQOM) is combined with the reflection-asymmetric deformed shell model (DSM) in a way allowing fully microscopic description of the Coriolis decoupling and K-mixing effects in the quasi parity-doublet spectra of odd-mass nuclei. In this approach the even-even core is considered within the CQOM model, while the odd nucleon is described within DSM with pairing interaction. The Coriolis decoupling/mixing factors are calculated through a parity-projection of the single-particle wave function. Expressions for the Coriolis mixed quasi parity-doublet levels are obtained in the second order of perturbation theory, while the K-mixed core plus particle wave function is obtained in the first order. Expressions for the B(E1), B(E2) and B(E3) reduced probabilities for transitions within and between different quasi-doublets are obtained by using the total K-mixed wave function. The model scheme is elaborated in a form capable of describing the yrast and non-yrast quasi parity-doublet spectra in odd-mass nuclei. (author)

  16. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    Science.gov (United States)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.

  17. Discrete choice models for commuting interactions

    DEFF Research Database (Denmark)

    Rouwendal, Jan; Mulalic, Ismir; Levkovich, Or

    An emerging quantitative spatial economics literature models commuting interactions by a gravity equation that is mathematically equivalent to a multinomial logit model. This model is widely viewed as restrictive because of the independence of irrelevant alternatives (IIA) property that links sub...

  18. Animal welfare and eggs - cheap talk or money on the counter?

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    2011-01-01

    We estimate revealed willingness to pay for animal welfare using a panel mixed logit model. We utilise a unique household level panel, combining real purchases with survey data on perceived public and private good attributes of different types of eggs. We estimate willingness to pay for organic...

  19. Experimental investigation of consumer price evaluations

    NARCIS (Netherlands)

    Z. Sándor (Zsolt); Ph.H.B.F. Franses (Philip Hans)

    2004-01-01

    textabstractWe develop a procedure to collect experimental choice data for estimating consumer preferences with a special focus on consumer price evaluations. For this purpose we employ a heteroskedastic mixed logit model that measures the effect of the way prices are specified on the variance of

  20. A new model for the redundancy allocation problem with component mixing and mixed redundancy strategy

    International Nuclear Information System (INIS)

    Gholinezhad, Hadi; Zeinal Hamadani, Ali

    2017-01-01

    This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.

  1. Consequences of observed Bd-anti Bd mixing in standard and nonstandard models

    International Nuclear Information System (INIS)

    Datta, A.; Paschos, E.A.; Tuerke, U.

    1987-01-01

    Implications of the Bd-anti-Bd mixing reported by the ARGUS group are investigated. We show that in order for the standard model to accommodate the result, the B → anti-B hadronic matrix element must satisfy lower bounds as a function of the top quark mass. In this case Bs-anti-Bs mixing is necessarily large (rS ≳ 0.74) irrespective of mt. This conclusion remains valid in several popular extensions of the standard model with three generations. In contrast to these models, four-generation models can accommodate simultaneously the observed Bd-anti-Bd mixing and a relatively small Bs-anti-Bs mixing. (orig.)

  2. Intercity Travel Demand Analysis Model

    Directory of Open Access Journals (Sweden)

    Ming Lu

    2014-01-01

    Full Text Available It is well known that intercity travel is an important component of travel demand and belongs to short-distance corridor travel. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis, because the time spent in urban traffic has a great impact on travelers' main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis model, improved it based on that study, and finally established a combined model of main mode choice and access mode choice. An integrated multilevel nested logit model structure system was then built. The model system includes trip generation, destination choice, and mode-route choice based on the multinomial logit model, and it achieves linkage and feedback between the parts through the logsum variable. The model was applied to the Shenzhen intercity railway passenger demand forecast in 2010 as a case study; the forecast results were consistent with observed demand, verifying the model's correctness and feasibility.
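    The logsum linkage described above can be made concrete with a small numerical sketch: the inclusive value of the lower-level access-mode choice enters the upper-level main-mode choice. The utilities and nesting parameter below are made-up numbers, not estimates from the Shenzhen case study.

```python
# Illustrative nested logit calculation showing how the logsum (inclusive value)
# of the lower-level access-mode choice feeds back into the upper-level
# main-mode choice. Utilities and nesting parameters are invented numbers.
import numpy as np

def nest_probabilities(V_by_nest, lam):
    """Two-level nested logit: V_by_nest maps nest -> leaf utilities;
    lam maps nest -> nesting (scale) parameter in (0, 1]."""
    iv = {n: np.log(np.sum(np.exp(np.asarray(V) / lam[n])))
          for n, V in V_by_nest.items()}                       # inclusive values
    denom = sum(np.exp(lam[n] * iv[n]) for n in V_by_nest)
    p_nest = {n: np.exp(lam[n] * iv[n]) / denom for n in V_by_nest}
    p_leaf = {n: p_nest[n] * np.exp(np.asarray(V) / lam[n]) / np.exp(iv[n])
              for n, V in V_by_nest.items()}                   # joint leaf probs
    return p_nest, p_leaf

# Main modes: rail (access by walk/bus/car) vs. intercity bus (single leaf).
V = {"rail": [-1.2, -0.8, -0.5], "bus": [-1.0]}
lam = {"rail": 0.6, "bus": 1.0}
print(nest_probabilities(V, lam))
```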

  3. Eliciting mixed emotions: A meta-analysis comparing models, types and measures.

    Directory of Open Access Journals (Sweden)

    Raul eBerrios

    2015-04-01

    Full Text Available The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model – dimensional or discrete – as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative. The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = .77, which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.

  4. Perturbative estimates of lepton mixing angles in unified models

    International Nuclear Information System (INIS)

    Antusch, Stefan; King, Stephen F.; Malinsky, Michal

    2009-01-01

    Many unified models predict two large neutrino mixing angles, with the charged lepton mixing angles being small and quark-like, and the neutrino masses being hierarchical. Assuming this, we present simple approximate analytic formulae giving the lepton mixing angles in terms of the underlying high energy neutrino mixing angles together with small perturbations due to both charged lepton corrections and renormalisation group (RG) effects, including also the effects of third family canonical normalization (CN). We apply the perturbative formulae to the ubiquitous case of tri-bimaximal neutrino mixing at the unification scale, in order to predict the theoretical corrections to mixing angle predictions and sum rule relations, and give a general discussion of all limiting cases. We also discuss the implications for the sum rule relations of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.

  5. Comparison on models for genetic evaluation of non-return rate and success in first insemination of the Danish Holstein cow

    DEFF Research Database (Denmark)

    Sun, C; Su, G

    2010-01-01

    The aim of this study was to compare a linear Gaussian model with a logit model and a probit model for genetic evaluation of non-return rate within 56 d after first insemination (NRR56) and success in first insemination (SFI). The whole dataset used in the analysis contained 471,742 records from...... the EBV of proven bulls, obtained from the whole dataset and from a reduced dataset which only contains the first-crop daughters of sires; 2) χ2 statistic for the expected and observed frequency in a cross validation. Heritabilities estimated using linear, probit and logit models were 0.011, 0.014 and 0.... Model validation showed that there was no difference between the probit model and the logit model, but the two models were better than the linear model in stability and predictive ability for genetic evaluation of NRR56 and SFI. However, based on the whole dataset, the correlations between EBV estimated using...

  6. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are......, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...... of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
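
    To illustrate the idea behind the Laplace approximation (not the authors' implementation), the marginal likelihood integrates a random effect out of the joint density by expanding its logarithm around the mode. A toy sketch for a single cluster with a scalar random intercept and Bernoulli-logit responses, using made-up data and parameter values:

        import numpy as np
        from scipy import optimize

        y = np.array([1, 0, 1, 1, 0, 1, 1])   # hypothetical binary responses in one cluster
        beta0, tau2 = 0.2, 0.5                # assumed fixed intercept and random-effect variance

        def neg_log_joint(b):
            eta = beta0 + b
            loglik = np.sum(y * eta - np.log1p(np.exp(eta)))              # Bernoulli-logit log-likelihood
            logprior = -0.5 * (np.log(2 * np.pi * tau2) + b ** 2 / tau2)  # N(0, tau2) random intercept
            return -(loglik + logprior)

        opt = optimize.minimize_scalar(neg_log_joint)      # mode of the random effect
        b_hat, h = opt.x, 1e-4
        hess = (neg_log_joint(b_hat + h) - 2 * neg_log_joint(b_hat) + neg_log_joint(b_hat - h)) / h ** 2
        log_marginal = -neg_log_joint(b_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hess)
        print(log_marginal)

    In the spatial setting discussed above, the same construction is applied with a vector of spatially correlated random effects, which is what makes the maximized log-likelihood and profile-likelihood intervals available at modest cost.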

  7. Development of two phase turbulent mixing model for subchannel analysis relevant to BWR

    International Nuclear Information System (INIS)

    Sharma, M.P.; Nayak, A.K.; Kannan, Umasankari

    2014-01-01

    A two-phase flow model is presented which predicts both the liquid- and gas-phase turbulent mixing rates between adjacent subchannels of reactor rod bundles. The model applies to the slug-churn flow regime, which dominates over the bubbly and annular regimes since the turbulent mixing rate is highest in slug-churn flow. In this paper, we define new dimensionless parameters, i.e. a liquid mixing number and a gas mixing number, for two-phase turbulent mixing. The liquid mixing number is a function of the mixture Reynolds number, whereas the gas-phase mixing number is a function of both the mixture Reynolds number and the volumetric fraction of gas. The effects of pressure and of subchannel geometry are also included in the model. The model has been tested against low-pressure, low-temperature air-water and high-pressure, high-temperature steam-water experimental data and shows good agreement with the available data. (author)

  8. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
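
    As a rough guide to the 1-1 mapping mentioned above, the analogous fixed-effects relation between an F statistic with q numerator and ν denominator degrees of freedom and an R² value is usually written as follows (the paper's contribution lies in choosing the mixed-model F and its denominator degrees of freedom, which this sketch does not reproduce):

        R^2_\beta = \frac{q\,F}{q\,F + \nu}
        \qquad\Longleftrightarrow\qquad
        F = \frac{\nu}{q}\,\frac{R^2_\beta}{1 - R^2_\beta}

    This makes explicit why a tiny p-value can coexist with a very small R², as in the blood-pressure example: when ν is large, even a modest F produces strong significance while R² stays near zero.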

  9. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    Science.gov (United States)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local meshing resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing result are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  10. Criticality in the configuration-mixed interacting boson model: (1) U(5)-Q(χ)Q(χ) mixing

    International Nuclear Information System (INIS)

    Hellemans, V.; Van Isacker, P.; De Baerdemacker, S.; Heyde, K.

    2007-01-01

    The case of U(5)-Q(χ)Q(χ) mixing in the configuration-mixed interacting boson model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively

  11. Comparison of measured and modelled mixing heights during the Borex'95 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, T.; Astrup, P.; Joergensen, H.E.; Ott, S. [Risoe National Lab., Roskilde (Denmark); Soerensen, J.H. [Danish Meteorological Inst., Copenhagen (Denmark); Loefstroem, P. [National Environmental Research Inst., Roskilde (Denmark)

    1997-10-01

    A real-time modelling system designed for 'on-the-fly' assessment of atmospheric dispersion during accidental releases is being established within the framework of the European Union. It integrates real-time dispersion models for both local scale and long range transport with wind, turbulence and deposition models. As meteorological input, the system uses both in-situ measurements and on-line available meteorology. The resulting real-time dispersion system is called MET-RODOS. This paper focuses on evaluation of the MET-RODOS system's built-in local scale pre-processing software for real-time determination of the mixing height, an important parameter for local scale dispersion assessments. The paper discusses the system's local scale mixing height algorithms as well as its in-line mixing height acquisition from the DMI-HIRLAM model. Comparisons of the diurnal mixing height evolution are made with measured mixing heights from in-situ radio-sonde data during the Borex'95 field trials, and recently also with remote-sensed (LIDAR) aerosol profiles measured at Risoe. (LN)

  12. Development of a Medicaid Behavioral Health Case-Mix Model

    Science.gov (United States)

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  13. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
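
    A compact sketch of the fixed-versus-mixed RSA contrast described above, using simulated model features and voxel responses; the array sizes, the ridge penalty and the correlation-distance dissimilarities are illustrative choices rather than the authors' exact pipeline:

        import numpy as np
        from sklearn.linear_model import Ridge
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_train, n_test, n_feat, n_vox = 60, 30, 100, 40        # hypothetical sizes
        W = rng.normal(size=(n_feat, n_vox))                    # hypothetical feature-to-voxel weights
        F_train = rng.normal(size=(n_train, n_feat))            # model features, training stimuli
        F_test = rng.normal(size=(n_test, n_feat))              # model features, test stimuli
        V_train = F_train @ W + rng.normal(scale=2.0, size=(n_train, n_vox))
        V_test = F_test @ W + rng.normal(scale=2.0, size=(n_test, n_vox))

        # Fixed RSA: compare the model's own representational dissimilarities with the measured ones.
        fixed_r, _ = spearmanr(pdist(F_test, "correlation"), pdist(V_test, "correlation"))

        # Mixed RSA: learn one weight per feature and response channel on the training stimuli,
        # then build the predicted RDM from the reweighted (predicted) test responses.
        V_pred = Ridge(alpha=10.0).fit(F_train, V_train).predict(F_test)
        mixed_r, _ = spearmanr(pdist(V_pred, "correlation"), pdist(V_test, "correlation"))
        print(f"fixed RSA: {fixed_r:.2f}   mixed RSA: {mixed_r:.2f}")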

  14. Incorporating vehicle mix in stimulus-response car-following models

    Directory of Open Access Journals (Sweden)

    Saidi Siuhi

    2016-06-01

    Full Text Available The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles including “automobile following automobile,” “automobile following truck,” and “truck following automobile.” The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles. Also, the magnitude of the estimated parameters depends on the type of vehicle being driven and/or followed. These results demonstrated the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters such as stopping sight distance on horizontal and vertical curves for both automobiles and trucks.
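
    The stimulus-response family referred to above is often written in the Gazis-Herman-Rothery form, in which the follower's acceleration responds to the relative speed, scaled by the spacing and the follower's own speed. A sketch with hypothetical parameter sets for each following pair (the paper's actual estimates are not reproduced here):

        def ghr_response(dv, dx, v_follower, params):
            """Stimulus-response acceleration: a(t + T) = alpha * v^m * dv / dx^l."""
            alpha, m, l = params
            return alpha * v_follower ** m * dv / dx ** l

        # Hypothetical (alpha, m, l) per (follower, leader) pair; studies like the one above
        # estimate these separately for acceleration and deceleration responses as well.
        PARAMS = {
            ("automobile", "automobile"): (0.8, 0.9, 1.6),
            ("automobile", "truck"): (0.6, 0.9, 1.8),
            ("truck", "automobile"): (0.4, 0.8, 1.5),
        }

        # Example: a car closing on a truck (relative speed -1.5 m/s) from 25 m while travelling
        # at 20 m/s, which yields a deceleration response.
        a = ghr_response(dv=-1.5, dx=25.0, v_follower=20.0, params=PARAMS[("automobile", "truck")])
        print(a)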

  15. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  16. The Simulation of Financial Markets by an Agent-Based Mix-Game Model

    OpenAIRE

    Chengling Gou

    2006-01-01

    This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...

  17. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has predictive power similar to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.
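
    Because both models build on a random parameter (mixed) logit kernel, the sketch below shows how its choice probabilities are typically simulated by averaging standard logit probabilities over draws of the random coefficient; the single normally distributed coefficient and the data shapes are simplifying assumptions for illustration:

        import numpy as np

        def simulated_loglik(beta_mean, beta_sd, X, y, n_draws=500, seed=0):
            """Simulated log-likelihood of a mixed logit with one normally distributed
            coefficient; X holds one attribute per alternative (n_obs x n_alt) and y the
            index of the chosen alternative."""
            rng = np.random.default_rng(seed)
            draws = rng.normal(beta_mean, beta_sd, size=n_draws)       # draws of the random taste coefficient
            probs = np.zeros(X.shape[0])
            for b in draws:
                util = b * X
                expu = np.exp(util - util.max(axis=1, keepdims=True))  # numerically stable logit
                p = expu / expu.sum(axis=1, keepdims=True)
                probs += p[np.arange(len(y)), y]
            return np.log(probs / n_draws).sum()

        # Toy usage: 100 choices among 3 alternatives described by a single attribute.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        y = rng.integers(0, 3, size=100)
        print(simulated_loglik(beta_mean=0.5, beta_sd=1.0, X=X, y=y))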

  18. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has predictive power similar to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.

  19. Multiple model adaptive control with mixing

    Science.gov (United States)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely-tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full-suite of linear time invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real-time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed

  20. Identification and comparison of predictive models of rectal and bladder toxicity in the case of prostatic irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Gnep, K.; Chira, C.; Le Prise, E.; Crevoisier, R. de [Centre Eugene-Marquis, Rennes (France); Zhu, J.; Simon, A.; Ospina Arango, J.D. [Inserm U642, Rennes (France); Messai, T.; Bossi, A. [Institut Gustave-Roussy, Villejuif (France); Beckendorf, V. [Centre Alexis-Vautrin, Nancy (France)

    2011-10-15

    More than 400 patients have been treated by conformal radiation therapy for a localized prostate adenocarcinoma, and some have been selected according to the availability of dose-volume histograms. Predictive models of rectal and bladder toxicity have been compared: LKB, Logit EUD and Poisson EUD for rectal toxicity; LKB, Logit EUD, Poisson EUD and Schultheiss for bladder toxicity. Results suggest that these models could be used during the inverse planning of intensity-modulated radiation therapy in order to decrease toxicity. Short communication
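
    For orientation, the gEUD-based NTCP models named above are commonly written as in the sketch below; the helper functions follow the standard Lyman-Kutcher-Burman formulation, while the DVH and the parameter values (TD50, m, n) are placeholders, since fitting those organ- and endpoint-specific values is precisely what such comparisons do:

        import numpy as np
        from scipy.stats import norm

        def geud(doses, volumes, a):
            """Generalized EUD from a differential DVH (doses in Gy, volumes as fractions)."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()
            return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

        def lkb_ntcp(doses, volumes, td50, m, n):
            """Lyman-Kutcher-Burman NTCP with the volume effect folded into gEUD (a = 1/n)."""
            eud = geud(doses, volumes, a=1.0 / n)
            return float(norm.cdf((eud - td50) / (m * td50)))

        # Hypothetical rectal DVH and parameter values, for illustration only.
        doses = [10.0, 30.0, 50.0, 70.0]
        volumes = [0.4, 0.3, 0.2, 0.1]
        print(lkb_ntcp(doses, volumes, td50=76.9, m=0.13, n=0.09))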

  1. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors

  2. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
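
    A minimal random-intercept fit of the kind labelled (a) and (c) above can be written with statsmodels; the simulated subjects, SNP dosages and blood-pressure values below are purely illustrative, and the kinship-based variance structure used by GRAMMAR is not reproduced:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n_subj, n_visits = 50, 3
        ids = np.repeat(np.arange(n_subj), n_visits)
        visit = np.tile(np.arange(n_visits), n_subj)
        snp = np.repeat(rng.integers(0, 3, n_subj), n_visits)     # hypothetical SNP dosage (0/1/2)
        u = np.repeat(rng.normal(0, 4, n_subj), n_visits)         # subject-level random intercept
        dbp = 75 + 1.5 * snp + 0.5 * visit + u + rng.normal(0, 3, ids.size)
        df = pd.DataFrame({"id": ids, "visit": visit, "snp": snp, "dbp": dbp})

        # Random-intercept linear mixed model: fixed effects for SNP dosage and visit,
        # a per-subject random intercept to absorb the within-subject correlation.
        fit = smf.mixedlm("dbp ~ snp + visit", df, groups=df["id"]).fit()
        print(fit.summary())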

  3. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    Science.gov (United States)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  4. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

    The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential...... equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme...
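
    The nlmeODE idea is to embed an ODE solution inside a non-linear mixed-effects fit. The structural model for oral theophylline is the usual one-compartment model with first-order absorption, sketched here with scipy rather than R/odesolve/NLME and with made-up parameter values:

        import numpy as np
        from scipy.integrate import solve_ivp

        ka, ke, V, dose = 1.5, 0.08, 30.0, 320.0   # absorption rate (1/h), elimination rate (1/h), volume (L), dose (mg)

        def pk_rhs(t, y):
            a_gut, a_central = y
            return [-ka * a_gut, ka * a_gut - ke * a_central]

        sol = solve_ivp(pk_rhs, (0.0, 24.0), [dose, 0.0], t_eval=np.linspace(0.0, 24.0, 49))
        conc = sol.y[1] / V                        # plasma concentration (mg/L) over 24 h
        print(conc.max())

    In the package described above, parameters such as ka, ke and V would carry subject-level random effects and be estimated jointly by non-linear mixed-effects modelling.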

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  6. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Modelling ice microphysics of mixed-phase clouds

    Science.gov (United States)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds have a significant role for the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To address this problem of modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good basis for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with a sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. Recently, the SALSA module has been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  8. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
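
    For reference, the two classical mixing rules being compared are usually stated as follows (standard thermodynamics, added for orientation rather than taken from the study):

        \text{Dalton:}\quad p(T,V) = \sum_i p_i(T,V),
        \qquad
        \text{Amagat:}\quad V(T,p) = \sum_i V_i(T,p).

    For ideal gases the two rules give identical mixture states; with real-gas equations of state for He and SF6 at shock conditions they generally do not, which is the kind of difference the sensitivity study above is probing.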

  9. Using continuation-ratio logits to analyze the variation of the age composition of fish catches

    DEFF Research Database (Denmark)

    Kvist, Trine; Gislason, Henrik; Thyregod, Poul

    2000-01-01

    Major sources of information for the estimation of the size of the fish stocks and the rate of their exploitation are samples from which the age composition of catches may be determined. However, the age composition in the catches often varies as a result of several factors. Stratification...... of the sampling is desirable, because it leads to better estimates of the age composition, and the corresponding variances and covariances. The analysis is impeded by the fact that the response is ordered categorical. This paper introduces an easily applicable method to analyze such data. The method combines...... be applied separately to each level of the logits. The method is illustrated by the analysis of age-composition data collected from the Danish sandeel fishery in the North Sea in 1993. The significance of possible sources of variation is evaluated, and formulae for estimating the proportions of each age
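
    For ordered age groups j = 1, ..., J with proportions π_j, the continuation-ratio logits referred to above take the standard form (notation added for clarity, not quoted from the paper):

        \operatorname{logit} P(Y = j \mid Y \ge j)
          = \log\!\left(\frac{\pi_j}{\pi_{j+1} + \cdots + \pi_J}\right)
          = \alpha_j + \mathbf{x}^{\top}\boldsymbol{\beta}_j,
        \qquad j = 1, \ldots, J-1,

    which is what allows each level of the logits to be analysed as a separate binary (e.g. binomial GLM) problem for the fish still "at risk" of being assigned an older age.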

  10. Applied model for the growth of the daytime mixed layer

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1991-01-01

    numerically. When the mixed layer is shallow or the atmosphere nearly neutrally stratified, the growth is controlled mainly by mechanical turbulence. When the layer is deep, its growth is controlled mainly by convective turbulence. The model is applied on a data set of the evolution of the height of the mixed...... layer in the morning hours, when both mechanical and convective turbulence contribute to the growth process. Realistic mixed-layer developments are obtained....

  11. A marketing mix model for a complex and turbulent environment

    Directory of Open Access Journals (Sweden)

    R. B. Mason

    2007-12-01

    Full Text Available Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company’s external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

  12. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
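
    A small sketch of constrained least-squares unmixing in the spirit described above; the endmember spectra, the three-channel setup and the soft sum-to-one weighting are illustrative assumptions, not values from the study:

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical endmember reflectances: rows are the three reflective channels,
        # columns are vegetation, soil and shade.
        E = np.array([[0.05, 0.12, 0.02],
                      [0.45, 0.25, 0.03],
                      [0.20, 0.30, 0.05]])
        pixel = np.array([0.10, 0.28, 0.18])    # observed reflectance of one coarse-resolution pixel

        # Constrained least squares: non-negativity via NNLS, and the sum-to-one constraint
        # enforced softly by appending a heavily weighted row of ones.
        w = 10.0
        A = np.vstack([E, w * np.ones(E.shape[1])])
        b = np.concatenate([pixel, [w]])
        fractions, _ = nnls(A, b)
        print(fractions, fractions.sum())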

  13. Mixed models approaches for joint modeling of different types of responses.

    Science.gov (United States)

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably and that the resulting models allow to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.

  14. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different- density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  15. Logit dynamics for strategic games mixing time and metastability

    OpenAIRE

    Ferraioli, Diodato

    2012-01-01

    2010 - 2011 A complex system is generally defined as a system emerging from the interaction of several and different components, each one with their properties and their goals, usually subject to external influences. Nowadays, complex systems are ubiquitous and they are found in many research areas: examples can be found in Economy (e.g., markets), Physics (e.g., ideal gases, spin systems), Biology (e.g., evolution of life) and Computer Science (e.g., Internet and social network...

  16. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  17. A mathematical model for turbulent incompressible flows through mixing grids

    International Nuclear Information System (INIS)

    Allaire, G.

    1989-01-01

    A mathematical model is proposed for the computation of turbulent incompressible flows through mixing grids. This model is obtained as follows: in a three-dimensional domain we represent a mixing grid by small identical wings of size ε² periodically distributed at the nodes of a plane regular mesh of size ε, and we consider incompressible Navier-Stokes equations with a no-slip condition on the wings. Using an appropriate homogenization process we pass to the limit when ε tends to zero and we obtain a Brinkman equation, i.e. a Navier-Stokes equation plus a zero-order term for the velocity, in a homogeneous domain without any more wings. The interest of this model is that the spatial discretization is simpler in a homogeneous domain, and, moreover, the new term, which expresses the grid's mixing effect, can be evaluated with a local computation around a single wing

  18. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  19. Effects of the ρ - ω mixing interaction in relativistic models

    International Nuclear Information System (INIS)

    Menezes, D.P.; Providencia, C.

    2003-01-01

    The effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei are investigated with the non-linear Walecka model in a Thomas-Fermi approximation. For infinite nuclear matter the influence of the mixing term on the binding energy calculated with the NL3 and TM1 parametrizations can be neglected. Its influence on the symmetry energy is only felt for the TM1 with an unrealistically large value for the mixing term strength. For finite nuclei the contribution of the isospin mixing term is very large as compared with the expected value to solve the Nolen-Schiffer anomaly

  20. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
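
    The specification sketched above is commonly written with a parsimonious weighting function such as the exponential Almon lag (notation added for clarity; the paper's own asymptotic results are not reproduced here):

        y_t = \beta_0 + \beta_1 \sum_{j=0}^{J} w_j(\theta)\, x^{(m)}_{t - j/m} + \varepsilon_t,
        \qquad
        w_j(\theta) = \frac{\exp(\theta_1 j + \theta_2 j^2)}{\sum_{k=0}^{J} \exp(\theta_1 k + \theta_2 k^2)},

    where x^{(m)} is sampled m times per low-frequency period, so a handful of parameters θ governs a potentially long high-frequency lag polynomial.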

  1. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  2. Bayes factor between Student t and Gaussian mixed models within an animal breeding context

    Directory of Open Access Journals (Sweden)

    García-Cortés Luis

    2008-07-01

    Full Text Available The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared and we do not know whether a Student t mixed model is required or if a standard, and less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that only differed in a bounded variable in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends along the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for data sets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, with there being a reduced incidence of outlier individuals in this population.

  3. Proposed model for fuel-coolant mixing during a core-melt accident

    International Nuclear Information System (INIS)

    Corradini, M.L.

    1983-01-01

    If complete failure of normal and emergency coolant flow occurs in a light water reactor, fission product decay heat would eventually cause melting of the reactor fuel and cladding. The core melt may then slump into the lower plenum and later into the reactor cavity and contact residual liquid water. A model is proposed to describe the fuel-coolant mixing process upon contact. The model is compared to intermediate scale experiments being conducted at Sandia. The modelling of this mixing process will aid in understanding three important processes: (1) fuel debris sizes upon quenching in water, (2) the hydrogen source term during fuel quench, and (3) the rate of steam production. Additional observations of Sandia data indicate that the steam explosion is affected by this mixing process

  4. "Logits and Tigers and Bears, Oh My! A Brief Look at the Simple Math of Logistic Regression and How It Can Improve Dissemination of Results"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2012-06-01

    Full Text Available Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These outcomes represent important social science lines of research: retention in, or dropout from school, using illicit drugs, underage alcohol consumption, antisocial behavior, purchasing decisions, voting patterns, risky behavior, and so on. The goal of this paper is to briefly lead the reader through the surprisingly simple mathematics that underpins logistic regression: probabilities, odds, odds ratios, and logits. Anyone with spreadsheet software or a scientific calculator can follow along, and in turn, this knowledge can be used to make much more interesting, clear, and accurate presentations of results (especially to non-technical audiences. In particular, I will share an example of an interaction in logistic regression, how it was originally graphed, and how the graph was made substantially more user-friendly by converting the original metric (logits to a more readily interpretable metric (probability through three simple steps.
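
    The conversions the paper walks through are short enough to sketch directly; the numerical example is arbitrary:

        import math

        def logit_to_probability(logit):
            """Invert the logit link: p = 1 / (1 + exp(-logit))."""
            return 1.0 / (1.0 + math.exp(-logit))

        def probability_to_odds(p):
            return p / (1.0 - p)

        # A coefficient of 0.85 on the logit scale corresponds to an odds ratio of exp(0.85), about 2.34,
        # and a predicted logit of -1.2 corresponds to a probability of about 0.23.
        print(math.exp(0.85), logit_to_probability(-1.2), probability_to_odds(0.23))

    Plotting predicted probabilities rather than logits is exactly the kind of presentation change the article advocates for non-technical audiences.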

  5. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
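
    The variance-preserving property of weights on the unit circle, which the approach above relies on, can be seen in a stripped-down sketch; independent Gaussian vectors stand in for conditional random fields, and both the spatial covariance and the conditioning on observed conductivities are omitted:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        z1 = rng.normal(size=n)    # two independent standard Gaussian "fields"
        z2 = rng.normal(size=n)

        # Any weights (cos(theta), sin(theta)) on the unit circle leave the variance at 1,
        # so each mixture is again a valid realisation of the same (trivial) covariance model.
        for theta in np.linspace(0.0, np.pi, 5):
            z_mix = np.cos(theta) * z1 + np.sin(theta) * z2
            print(round(theta, 2), round(float(z_mix.var()), 3))

    In the full method, each such mixture is run through the groundwater model and the weights are optimised; the interpolation trick described above reduces how many of those forward runs are actually needed.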

  6. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lie in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated to engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO 2 emissions reduction, water conservation, energy supply

  7. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    Science.gov (United States)

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
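
    As a small illustration of the minimum index mentioned above, with hypothetical self-report ratings (the meta-analysed studies used a variety of scales and measures):

        import numpy as np

        happiness = np.array([6, 2, 5, 4])   # hypothetical ratings per participant
        sadness = np.array([5, 1, 1, 4])

        # Minimum index: the intensity of the weaker of the two oppositely valenced feelings.
        min_index = np.minimum(happiness, sadness)
        print(min_index, float(min_index.mean()))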

  8. The salinity effect in a mixed layer ocean model

    Science.gov (United States)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  9. BDA special care case mix model.

    Science.gov (United States)

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple to use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  10. Experiments and CFD Modelling of Turbulent Mass Transfer in a Mixing Channel

    DEFF Research Database (Denmark)

    Hjertager Osenbroch, Lene Kristin; Hjertager, Bjørn H.; Solberg, Tron

    2006-01-01

    Experiments are carried out for passive mixing in order to obtain local mean and turbulent velocities and concentrations. The mixing takes place in a square channel with two inlets separated by a block. A combined PIV/PLIF technique is used to obtain instantaneous velocity and concentration fields. Three different flow cases are studied. The 2D numerical predictions of the mixing channel show that none of the k-ε turbulence models tested is suitable for the flow cases studied here. The turbulent Schmidt number is reduced to obtain a better agreement between measured and predicted mean and fluctuating concentrations. The multi-peak presumed PDF mixing model is tested.

  11. The 4s web-marketing mix model

    NARCIS (Netherlands)

    Constantinides, Efthymios

    2002-01-01

    This paper reviews the criticism of the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections to using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm,

  12. Nonlinear spectral mixing theory to model multispectral signatures

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C. [Los Alamos National Lab., NM (United States). Astrophysics and Radiation Measurements Group

    1996-02-01

    Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy conserving computational method used in thermal engineering and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape) such as leaves and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance was modeled using the PROSPECT model for various amounts of water, chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the 6 visible through short wave IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.

  13. Models of neutrino mass and mixing

    International Nuclear Information System (INIS)

    Ma, Ernest

    2000-01-01

    There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_αi ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_αi ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R parity nonconservation in supersymmetry

  14. The Mixed Quark-Gluon Condensate from the Global Color Symmetry Model

    Institute of Scientific and Technical Information of China (English)

    ZONG Hong-Shi; PING Jia-Lun; LU Xiao-Fu; WANG Fan; ZHAO En-Guang

    2002-01-01

    The mixed quark-gluon condensate from the global color symmetry model is derived. It is shown that the mixed quark-gluon condensate depends explicitly on the gluon propagator. This interesting feature may be regarded as an additional constraint on the model of the gluon propagator. The values of the mixed quark-gluon condensate from some ansatz for the gluon propagator are compared with those determined from QCD sum rules.

  15. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the source soils, and fingerprint properties were selected using different procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the initially known mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
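
    As a rough illustration of the un-mixing step described above, the sketch below recovers source proportions from a synthetic tracer mixture with a constrained least-squares fit. All tracer values, the soft sum-to-one constraint and the use of scipy's lsq_linear are assumptions made for illustration, not the procedure of Gaspar et al.

```python
"""Toy sediment un-mixing: estimate source proportions from tracer values."""
import numpy as np
from scipy.optimize import lsq_linear

# rows = tracer properties, columns = five hypothetical source soils (values invented)
S = np.array([
    [120.0,  95.0, 140.0,  80.0, 110.0],   # Sr (mg/kg)
    [ 60.0,  75.0,  55.0,  90.0,  70.0],   # Rb (mg/kg)
    [  3.1,   2.4,   4.0,   1.8,   2.9],   # Fe (%)
    [  0.45,  0.62,  0.38,  0.70,  0.55],  # Ti (%)
])
true_p = np.array([0.4, 0.1, 0.2, 0.2, 0.1])
mixture = S @ true_p                         # tracer signature of a laboratory mixture

# enforce sum(p) = 1 softly through a heavily weighted extra row
w = 1e3
A = np.vstack([S, w * np.ones(S.shape[1])])
b = np.append(mixture, w * 1.0)

res = lsq_linear(A, b, bounds=(0.0, 1.0))    # proportions constrained to [0, 1]
p_hat = res.x / res.x.sum()                  # renormalise to an exact sum of one
print("estimated proportions:", np.round(p_hat, 3))
print("residual norm:", np.linalg.norm(S @ p_hat - mixture))
```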

  16. Mixing of the Glauber dynamics for the ferromagnetic Potts model

    OpenAIRE

    Bordewich, Magnus; Greenhill, Catherine; Patel, Viresh

    2013-01-01

    We present several results on the mixing time of the Glauber dynamics for sampling from the Gibbs distribution in the ferromagnetic Potts model. At a fixed temperature and interaction strength, we study the interplay between the maximum degree ($\Delta$) of the underlying graph and the number of colours or spins ($q$) in determining whether the dynamics mixes rapidly or not. We find a lower bound $L$ on the number of colours such that Glauber dynamics is rapidly mixing if at least $L$ colours...

  17. Computational model for turbulent flow around a grid spacer with mixing vane

    International Nuclear Information System (INIS)

    Tsutomu Ikeno; Takeo Kajishima

    2005-01-01

    Turbulent mixing coefficient and pressure drop are important factors in subchannel analysis to predict the onset of DNB. However, universal correlations are difficult to obtain since these factors are significantly affected by the geometry of the subchannel and of the grid spacer with mixing vane. Therefore, we propose a computational model to estimate these factors. Computational model: To represent the effect of the grid spacer geometry in the computational model, we applied a large eddy simulation (LES) technique coupled with an improved immersed-boundary method. In our previous work (Ikeno, et al., NURETH-10), detailed properties of turbulence in the subchannel were successfully investigated by developing the immersed-boundary method in LES. In this study, additional improvements are made: a new one-equation dynamic sub-grid scale (SGS) model is introduced to account for the complex geometry without any artificial modification; higher order accuracy is maintained by consistent treatment of the boundary conditions for velocity and pressure. Numerical test and discussion: Turbulent mixing coefficient and pressure drop are affected strongly by the arrangement and inclination of the mixing vane. Therefore, computations are carried out for each of the convolute and periodic arrangements, and for each of the 30 degree and 20 degree inclinations. The difference in turbulent mixing coefficient due to these factors is reasonably predicted by our method. (An example of this numerical test is shown in Fig. 1.) The turbulent flow in this problem includes unsteady separation behind the mixing vane and vortex shedding downstream. An anisotropic distribution of turbulent stress also appears in the rod gap. Therefore, our computational model has advantages for assessing the influence of the arrangement and inclination of the mixing vane. With a coarser computational mesh, one can screen several candidates for spacer design; then, with a finer mesh, more quantitative analysis is possible. By such a scheme, we believe this method is useful

  18. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  19. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  20. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  1. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    Science.gov (United States)

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.

  2. Mixing Modeling Analysis For SRS Salt Waste Disposition

    International Nuclear Information System (INIS)

    Lee, S.

    2011-01-01

    Nuclear waste in Savannah River Site (SRS) waste tanks consists of three different waste forms: the lighter salt solutions referred to as supernate, the precipitated salts as salt cake, and heavier fine solids as sludge. The sludge is settled on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which is only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from a blend tank such as Tank 50H to the SWPF feed tank. The work has two principal objectives, investigating two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for the SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion for solids entrainment during transfer operation. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The modeling results will provide quantitative design and operation information during the mixing/blending process and the transfer operation of the blended

  3. A Modified Cellular Automaton Approach for Mixed Bicycle Traffic Flow Modeling

    Directory of Open Access Journals (Sweden)

    Xiaonian Shan

    2015-01-01

    Several previous studies have used the Cellular Automaton (CA) approach for the modeling of bicycle traffic flow. However, previous CA models have several limitations, resulting in differences between the simulated and the observed traffic flow features. The primary objective of this study is to propose a modified CA model for simulating the characteristics of mixed bicycle traffic flow. Field data were collected on a physically separated bicycle path in Shanghai, China, and were used to calibrate the CA model using a genetic algorithm. Traffic flow features from simulations of several CA models and from field observations were compared. The results showed that our modified CA model produced more accurate simulations of the fundamental diagram and the passing events in mixed bicycle traffic flow. Based on our model, the bicycle traffic flow features, including the fundamental diagram, the number of passing events, and the number of lane changes, were analyzed. We also analyzed the traffic flow features under different traffic densities and traffic compositions on different travel lanes. Results of the study can provide important information for understanding and simulating the operations of mixed bicycle traffic flow.
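
    For readers unfamiliar with cellular automaton traffic models, the following minimal single-lane sketch uses Nagel-Schreckenberg-style update rules (acceleration, braking, random slowdown, movement) on a periodic road. The parameters and the single-lane simplification are assumptions; the modified multi-lane bicycle CA with passing events calibrated in this study is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(1)

L, N, V_MAX, P_SLOW, STEPS = 200, 60, 3, 0.2, 300   # cells, bicycles, max speed, slowdown prob., steps

pos = np.sort(rng.choice(L, size=N, replace=False))  # distinct starting cells, sorted
vel = rng.integers(0, V_MAX + 1, size=N)

flow = 0
for _ in range(STEPS):
    gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells to the rider ahead (periodic road)
    vel = np.minimum(vel + 1, V_MAX)                 # rule 1: acceleration
    vel = np.minimum(vel, gaps)                      # rule 2: braking to avoid collision
    slow = rng.random(N) < P_SLOW
    vel[slow] = np.maximum(vel[slow] - 1, 0)         # rule 3: random slowdown
    pos = (pos + vel) % L                            # rule 4: movement
    order = np.argsort(pos)                          # keep arrays sorted after wrapping around
    pos, vel = pos[order], vel[order]
    flow += vel.sum()

print(f"density {N / L:.2f}, mean flow {flow / (STEPS * L):.3f} bicycles per cell per step")
```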

  4. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    Science.gov (United States)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  5. A mixed model framework for teratology studies.

    Science.gov (United States)

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  6. Propensity for Voluntary Travel Behavior Changes: An Experimental Analysis

    DEFF Research Database (Denmark)

    Meloni, Italo; Sanjust, Benedetta; Sottile, Eleonora

    2013-01-01

    In this paper we analyze individual propensity to voluntary travel behavior change, combining concepts from the theory of change with methodologies derived from behavioral models. In particular, following the theory of voluntary changes, we set up a two-week panel survey including a soft measure implementation, which consisted of providing car users with a personalized travel plan after the first week of observation (before) and using the second week to monitor the post-behavior (after). These data have then been used to estimate a Mixed Logit for the choice to use a personal vehicle or a light metro, and a Multinomial Logit for the decision to change behavior. Results from both models show the relevance of providing information about available alternatives to individuals while promoting voluntary travel behavioral change.
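
    The mixed logit idea referred to here can be illustrated with a small simulated-maximum-likelihood sketch for a binary car-versus-light-metro choice with a random coefficient on travel time. The data, specification and number of draws are invented for illustration and do not reproduce the panel models estimated in the study.

```python
"""Binary mixed logit (random coefficient on travel time) via simulated maximum likelihood."""
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(42)
N, R = 2000, 200                          # individuals, simulation draws per individual

time_diff = rng.normal(size=N)            # (metro time - car time), standardised
cost_diff = rng.normal(size=N)

# true parameters: ASC, mean and s.d. of the time coefficient, fixed cost coefficient
asc, bt_mu, bt_sd, bc = 0.3, -1.0, 0.8, -0.5
bt_i = bt_mu + bt_sd * rng.normal(size=N)
metro = (rng.random(N) < expit(asc + bt_i * time_diff + bc * cost_diff)).astype(float)

draws = rng.normal(size=(N, R))           # draws for the random time coefficient

def neg_sim_loglik(theta):
    a, mu, log_sd, c = theta
    b = mu + np.exp(log_sd) * draws                       # (N, R) individual coefficients
    p = expit(a + b * time_diff[:, None] + c * cost_diff[:, None])
    p_obs = np.where(metro[:, None] == 1.0, p, 1.0 - p)
    return -np.log(p_obs.mean(axis=1) + 1e-300).sum()     # average over draws, then log

res = minimize(neg_sim_loglik, x0=np.zeros(4), method="BFGS")
a_hat, mu_hat, log_sd_hat, c_hat = res.x
print(f"ASC={a_hat:.2f}  time mean={mu_hat:.2f}  "
      f"time s.d.={np.exp(log_sd_hat):.2f}  cost={c_hat:.2f}")
```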

  7. Evaluation of Aerosol Mixing State Classes in the GISS Modele-matrix Climate Model Using Single-particle Mass Spectrometry Measurements

    Science.gov (United States)

    Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.

    2013-01-01

    Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine), and Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.

  8. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher order moments of the risk neutral distribution. Parameter inference using Gibbs sampling is explained and we detail how to compute risk neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared...

  9. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  10. Best practices for use of stable isotope mixing models in food-web studies

    Science.gov (United States)

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  11. Mixing parametrizations for ocean climate modelling

    Science.gov (United States)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    The algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. The experiments were performed with different mixing parameterizations for modelling the decadal variability of the Arctic and Atlantic climate with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models as far as the physical formulation is concerned, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model lead to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Usage of the PP parametrization in the circulation model leads to a realistic simulation of density and circulation but with violation of the T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model

  12. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    A mixed traffic flow feature is present on urban arterials in China due to a large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow point of view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
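
    The EM step mentioned in this record can be sketched for a plain (untruncated) two-component Gaussian mixture of speeds; the truncation correction used in the MPFDM is omitted here, and the speed data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic speed sample (m/s): slower buses mixed with faster cars (values invented)
speeds = np.concatenate([rng.normal(6.0, 1.0, 400), rng.normal(11.0, 1.5, 600)])

# initial guesses for a two-component Gaussian mixture
w = np.array([0.5, 0.5])
mu = np.array([5.0, 12.0])
sd = np.array([2.0, 2.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(200):
    # E-step: responsibilities of each component for each observation
    dens = w * normal_pdf(speeds[:, None], mu, sd)        # shape (n, 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations
    nk = r.sum(axis=0)
    w = nk / len(speeds)
    mu = (r * speeds[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (speeds[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights", np.round(w, 2), "means", np.round(mu, 2), "s.d.", np.round(sd, 2))
```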

  13. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  14. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, has been investigated during the period 22-28 June 1993 from both an experimental and a modelling point of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes to estimate the mixing layer depth have been compared, i.e. Holzworth (1967), Carson (1973) and the Gryning-Batchvarova model (1990), which use standard meteorological observations. To estimate their degree of accuracy, the model outputs were analyzed considering radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the predicted mixed layer depths were compared with estimates obtained from a simple box model, whose input requires hourly measurements of air concentrations and the ground flux of 222Rn. (LN)

  15. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    Aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation more accurate; thus the precision of the mixed-distribution reliability model is greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
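
    A two-component mixed Weibull fit of the kind discussed here can be sketched by direct maximum likelihood on synthetic failure times; the dynamic weight coefficient and the three-parameter correlation-coefficient optimization of the record above are not reproduced, and all shapes and scales are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# synthetic failure times from two hypothetical failure modes (shapes/scales invented)
t = np.concatenate([
    weibull_min.rvs(1.5, scale=300.0, size=300, random_state=1),
    weibull_min.rvs(4.0, scale=900.0, size=200, random_state=2),
])

def neg_loglik(theta):
    # parameters transformed so the optimizer works on an unconstrained space
    logit_w, log_k1, log_s1, log_k2, log_s2 = theta
    w = 1.0 / (1.0 + np.exp(-logit_w))                    # mixing weight in (0, 1)
    pdf = (w * weibull_min.pdf(t, np.exp(log_k1), scale=np.exp(log_s1))
           + (1.0 - w) * weibull_min.pdf(t, np.exp(log_k2), scale=np.exp(log_s2)))
    return -np.log(pdf + 1e-300).sum()

x0 = np.array([0.0, np.log(1.0), np.log(200.0), np.log(3.0), np.log(800.0)])
res = minimize(neg_loglik, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "maxfev": 10000})
logit_w, log_k1, log_s1, log_k2, log_s2 = res.x
print(f"weight = {1.0 / (1.0 + np.exp(-logit_w)):.2f}")
print(f"mode 1: shape = {np.exp(log_k1):.2f}, scale = {np.exp(log_s1):.0f}")
print(f"mode 2: shape = {np.exp(log_k2):.2f}, scale = {np.exp(log_s2):.0f}")
```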

  16. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope, and residual autocorrelation was modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
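
    A minimal version of such a model can be sketched with statsmodels: a cubic B-spline basis for age plus child-specific random intercepts and slopes. The column names, knot count and simulated data below are assumptions, and the continuous-time autoregressive residual structure used in the study is not included.

```python
"""Linear mixed-effects growth model with a cubic B-spline basis for age (illustrative)."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# simulate longitudinal heights for 50 children measured 10 times each
n_child, n_obs = 50, 10
child = np.repeat(np.arange(n_child), n_obs)
age = np.tile(np.linspace(0.1, 4.0, n_obs), n_child)
u0 = rng.normal(0, 2.0, n_child)[child]            # child-specific intercept deviations
u1 = rng.normal(0, 1.0, n_child)[child]            # child-specific slope deviations
height = 50 + 15 * np.sqrt(age) + u0 + u1 * age + rng.normal(0, 1.0, len(age))
data = pd.DataFrame({"height": height, "age": age, "child": child})

# population curve via cubic regression splines, random intercept + slope per child
model = smf.mixedlm("height ~ bs(age, df=5, degree=3)", data,
                    groups=data["child"], re_formula="~age")
fit = model.fit()
print(fit.summary())
```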

  17. Configuration mixing in the sdg interacting boson model

    International Nuclear Information System (INIS)

    Bouldjedri, A; Van Isacker, P; Zerguine, S

    2005-01-01

    A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit

  18. Configuration mixing in the sdg interacting boson model

    Energy Technology Data Exchange (ETDEWEB)

    Bouldjedri, A [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria); Van Isacker, P [GANIL, BP 55027, F-14076 Caen cedex 5 (France); Zerguine, S [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria)

    2005-11-01

    A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit.

  19. Economic Analysis of Job-Related Attributes in Undergraduate Students' Initial Job Selection

    Science.gov (United States)

    Jin, Yanhong H.; Mjelde, James W.; Litzenberg, Kerry K.

    2014-01-01

    Economic tradeoffs students place on location, salary, distances to natural resource amenities, size of the city where the job is located, and commuting times for their first college graduate job are estimated using a mixed logit model for a sample of Texas A&M University students. The Midwest is the least preferred area having a mean salary…

  20. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulations are performed to reproduce the large scale impulsion phenomenon for a tight-lattice bundle. • A mixing model for the large scale impulsion phenomenon is proposed based on fitting of the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large scale impulsion of cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This research work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experimental data and simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results obtained, a mixing model to simulate the large scale impulsion phenomenon is proposed and adopted in the current subchannel code. When the new mixing model is applied to fuel assembly analysis by subchannel calculation, it can be noticed that the newly developed mixing model reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.

  1. Discrete Symmetries and Models of Flavour Mixing

    International Nuclear Information System (INIS)

    King, Stephen F

    2015-01-01

    In this talk we shall give an overview of the role of discrete symmetries, including both CP and family symmetry, in constructing unified models of quark and lepton (including especially neutrino) masses and mixing. Various different approaches to model building will be described, denoted as direct, semi-direct and indirect, and the pros and cons of each approach discussed. Particular examples based on Δ(6n²) will be discussed and an A to Z of Flavour with Pati-Salam will be presented. (paper)

  2. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    Science.gov (United States)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
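
    The restart mechanism described above can be sketched generically: a restarted simulated annealing loop over permutations, here applied to a toy permutation flow-shop makespan as a stand-in objective rather than the RMALB/S model of the article.

```python
import math
import random

random.seed(0)

# toy permutation flow shop: processing times of 12 jobs on 3 machines (invented data)
N_JOBS, N_MACH = 12, 3
P = [[random.randint(1, 9) for _ in range(N_MACH)] for _ in range(N_JOBS)]

def makespan(seq):
    # standard completion-time recursion for a permutation flow shop
    comp = [0.0] * N_MACH
    for j in seq:
        for m in range(N_MACH):
            prev = comp[m - 1] if m > 0 else 0.0
            comp[m] = max(comp[m], prev) + P[j][m]
    return comp[-1]

def neighbour(seq):
    a, b = random.sample(range(len(seq)), 2)     # swap two positions
    s = seq[:]
    s[a], s[b] = s[b], s[a]
    return s

def restarted_sa(restarts=5, iters=2000, t0=50.0, alpha=0.995):
    best, best_cost = None, math.inf
    for _ in range(restarts):
        cur = random.sample(range(N_JOBS), N_JOBS)   # each restart begins from a new permutation
        cur_cost, t = makespan(cur), t0              # ... and a fresh (high) temperature
        for _ in range(iters):
            cand = neighbour(cur)
            cand_cost = makespan(cand)
            if cand_cost <= cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
                cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
            t *= alpha                               # geometric cooling
    return best, best_cost

seq, cost = restarted_sa()
print("best sequence:", seq, "makespan:", cost)
```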

  3. The Impact of Employer Attitude to Green Commuting Plans on Reducing Car Driving: A Mixed Method Analysis

    Directory of Open Access Journals (Sweden)

    Chuan Ding

    2014-04-01

    Reducing car trips and promoting green commuting modes are generally considered important solutions for curbing the growth of energy consumption and transportation CO2 emissions. One potential solution for alleviating transportation CO2 emissions has been to identify a role for the employer through green commuter programs. This paper offers an approach to assess the effects of employer attitudes towards green commuting plans on commuter mode choice and the intermediary role that car ownership plays in the mode choice decision process. A mixed method which extends the traditional discrete choice model by incorporating latent variables and mediating variables within a structural equation model was used to better understand commuter mode choice behaviour. The empirical data were selected from the Washington-Baltimore Regional Household Travel Survey in 2007-2008, including all trips from home to workplace during the morning hours. The model parameters were estimated using the simultaneous estimation approach, and the integrated model turns out to be superior to the traditional multinomial logit (MNL) model in accounting for the impact of employer attitudes towards green commuting. The direct and indirect effects of socio-demographic attributes and employer attitudes towards green commuting were estimated. Through structural equation modelling with a mediating variable, this approach confirmed the intermediary nature of car ownership in the choice process. The results found in this paper provide helpful information for transportation and planning policymakers to test the effects of transportation and planning policies and to encourage green commuting that reduces transportation CO2 emissions.

  4. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S A; McCoy, R B; Morrison, H; Ackerman, A; Avramov, A; deBoer, G; Chen, M; Cole, J; DelGenio, A; Golaz, J; Hashino, T; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; Luo, Y; McFarquhar, G; Menon, S; Neggers, R; Park, S; Poellot, M; von Salzen, K; Schmidt, J; Sednev, I; Shipway, B; Shupe, M; Spangenberg, D; Sud, Y; Turner, D; Veron, D; Falk, M; Foster, M; Fridlind, A; Walker, G; Wang, Z; Wolf, A; Xie, S; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 C. The observed liquid water path of around 160 g m{sup -2} was about two-thirds of the adiabatic value and much greater than the mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m{sup -2}. The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is some evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.

  5. Classification rates: non‐parametric verses parametric models using ...

    African Journals Online (AJOL)

    This research sought to establish whether non-parametric modeling achieves a higher correct classification ratio than a parametric model. The local likelihood technique was used to fit models to the data sets. The same data sets were modeled using the parametric logit, and the abilities of the two models to correctly predict the binary ...
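
    The comparison described here can be sketched on simulated data, with a k-nearest-neighbours classifier standing in for the local-likelihood fit (an explicit substitution) against a parametric logit; correct classification ratios are computed on a held-out set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# simulated binary outcome with a mildly nonlinear decision boundary
n = 2000
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 2.0 * X[:, 0] ** 2 + X[:, 1])))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression().fit(X_tr, y_tr)          # parametric benchmark
knn = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)  # nonparametric stand-in

print(f"parametric logit correct classification ratio:   {logit.score(X_te, y_te):.3f}")
print(f"nonparametric (kNN) correct classification ratio: {knn.score(X_te, y_te):.3f}")
```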

  6. Modeling the Joint Choice Decisions on Urban Shopping Destination and Travel-to-Shop Mode: A Comparative Study of Different Structures

    Directory of Open Access Journals (Sweden)

    Chuan Ding

    2014-01-01

    The joint choice of shopping destination and travel-to-shop mode in a downtown area is described by making use of the cross-nested logit (CNL) model structure that allows for potential inter-alternative correlation along both choice dimensions. Meanwhile, the traditional multinomial logit (MNL) model and nested logit (NL) model are also formulated. This study uses data collected in the downtown areas of the Maryland-Washington, D.C. region for shopping trips, considering household, individual, land use, and travel related characteristics. The results of the model reveal the factors that significantly influence the joint choice of shopping destination and travel mode. A comparison of the different models shows that the proposed CNL model structure offers significant improvements in capturing unobserved correlations between alternatives over the MNL model and the NL model. Moreover, a Monte Carlo simulation for a group of scenarios assuming an increase in parking fees in the downtown area is undertaken to examine the impact of a change in car travel cost on the joint choice of shopping destination and travel mode switching. The results are expected to give a better understanding of shopping travel behavior.

  7. In-core LOCA-s: analytical solution for the delayed mixing model for moderator poison concentration

    International Nuclear Information System (INIS)

    Firla, A.P.

    1995-01-01

    Solutions to a dynamic moderator poison concentration model with delayed mixing under a single pressure tube / calandria tube rupture scenario are discussed. Such a model is described by a delay differential equation, and for such equations the standard methods of solution are not directly applicable. In the paper an exact, direct time-domain analytical solution to the delayed mixing model is presented and discussed. The obtained solution has a 'marching' form and is easy to calculate numerically. Results of the numerical calculations based on the analytical solution indicate that for the expected range of mixing times the existing uniform mixing model is a good representation of the moderator poison mixing process for single PT/CT breaks. However, for postulated multi-pipe breaks (which are very unlikely to occur) the uniform mixing model is no longer adequate; at the same time an 'approximate' solution based on the Laplace transform significantly overpredicts the rate of poison concentration decrease, resulting in an excessive increase in the moderator dilution factor. In this situation the true, analytical solution must be used. The analytical solution presented in the paper may also serve as a benchmark test for the accuracy of the existing poison mixing models. Moreover, because of the oscillatory tendency of the solution, special care must be taken in using delay differential models in other applications. (author). 3 refs., 3 tabs., 8 figs
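
    The 'marching' character of such solutions can be illustrated with the method of steps on a generic linear delay differential equation; the equation, coefficients and pre-history below are invented for illustration and are not the moderator poison model itself.

```python
import numpy as np

# illustrative delayed-mixing equation: dC/dt = -a*C(t) + a*f*C(t - tau)
# (a generic linear delay differential equation, not the actual PT/CT break model)
a, f, tau = 0.5, 0.8, 2.0
dt, t_end = 0.01, 30.0
n = int(t_end / dt)
lag = int(tau / dt)

C = np.empty(n + 1)
C[0] = 1.0
for i in range(n):
    delayed = C[i - lag] if i >= lag else 1.0     # constant pre-history for t < 0
    C[i + 1] = C[i] + dt * (-a * C[i] + a * f * delayed)   # explicit Euler march

print(f"C(t={t_end:g}) = {C[-1]:.4f}")
```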

  8. Mathematical, physical and numerical principles essential for models of turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David Howland [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Yu, Yan [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV

    2009-01-01

    We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.

  9. A Mixing Based Model for DME Combustion in Diesel Engines

    DEFF Research Database (Denmark)

    Bek, Bjarne H.; Sorenson, Spencer C.

    1998-01-01

    A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made...

  10. A mixing based model for DME combustion in diesel engines

    DEFF Research Database (Denmark)

    Bek, Bjarne Hjort; Sorenson, Spencer C

    2001-01-01

    A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made...

  11. Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits

    DEFF Research Database (Denmark)

    Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo

    2014-01-01

    A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows models based on continuous time to be combined with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function will be assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.

  12. Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits

    DEFF Research Database (Denmark)

    Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo

    2013-01-01

    A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows models based on continuous time to be combined with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function will be assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.

  13. Production, decay, and mixing models of the iota meson

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.; Bender, C.

    1984-01-01

    We solve a five-channel mixing problem involving eta, eta', zeta(1275), iota(1440), and a new hypothetical high-mass pseudoscalar state between 1600 and 1900 MeV. We obtain the quark and glue content of iota(1440). We compare two solutions to the mixing problem with iota(1440) production and decay data, and with quark-model predictions for bare masses. In one solution the iota(1440) is primarily a glueball. This solution is preferred by the production and decay data. In the other solution the iota(1440) is a radially excited (ss-bar) state. This solution is preferred by the quark-model picture for the bare masses. We judge the weight of the combined evidence to favor the glueball interpretation

  14. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding to it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change considerably when some agents who play the majority game are added to the MG, and that the change of local volatilities greatly depends on the different combinations of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. An application of the mix-game model is also given as an example.
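
    A minimal mix-game simulation can be sketched as follows: most agents play a standard minority game while a second group is rewarded for joining the majority, and the two groups use different memory lengths. Group sizes, memories and the scoring rule are assumptions made for illustration; the paper's distinct performance-counting horizons are not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)

N1, N2 = 81, 20        # group 1 plays the minority game, group 2 the majority game (sizes assumed)
m1, m2 = 3, 1          # memory lengths of the two groups (assumed)
S, T = 2, 2000         # strategies per agent, time steps

def make_strategies(n_agents, m):
    # one +1/-1 action prescribed for each of the 2**m possible histories
    return rng.choice([-1, 1], size=(n_agents, S, 2 ** m))

strat1, strat2 = make_strategies(N1, m1), make_strategies(N2, m2)
score1, score2 = np.zeros((N1, S)), np.zeros((N2, S))
history = rng.integers(0, 2, size=max(m1, m2))   # global binary history of winning sides

attendance = []
for _ in range(T):
    h1 = int("".join(map(str, history[-m1:])), 2)
    h2 = int("".join(map(str, history[-m2:])), 2)
    # each agent follows its currently best-scoring strategy
    a1 = strat1[np.arange(N1), score1.argmax(axis=1), h1]
    a2 = strat2[np.arange(N2), score2.argmax(axis=1), h2]
    A = a1.sum() + a2.sum()                      # aggregate action
    attendance.append(A)
    minority = -np.sign(A) if A != 0 else rng.choice([-1, 1])
    majority = -minority
    # strategies of group 1 are rewarded for predicting the minority side, group 2 the majority side
    score1 += (strat1[:, :, h1] == minority).astype(float)
    score2 += (strat2[:, :, h2] == majority).astype(float)
    history = np.append(history, 1 if minority == 1 else 0)[-max(m1, m2):]

print(f"normalized volatility: {np.var(attendance) / (N1 + N2):.3f}")
```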

  15. A dynamic analysis of motorcycle ownership and usage: a panel data modeling approach.

    Science.gov (United States)

    Wen, Chieh-Hua; Chiou, Yu-Chiun; Huang, Wan-Ling

    2012-11-01

    This study aims to develop motorcycle ownership and usage models with consideration of state dependence and heterogeneity effects, based on a large-scale questionnaire panel survey of vehicle owners. To account for the interdependence among alternatives and the heterogeneity among individuals, the modeling structure for motorcycle ownership adopts disaggregate choice models considering the multinomial, nested, and mixed logit formulations. Three types of panel data regression models (ordinary, fixed, and random effects) are developed and compared for motorcycle usage. The estimation results show that motorcycle ownership in the previous year exercises a significantly positive effect on the number of motorcycles owned by households in the current year, suggesting that a state dependence effect does exist in motorcycle ownership decisions. In addition, the fixed effects model is the preferred specification for modeling motorcycle usage, indicating strong evidence for the existence of heterogeneity. Among the various management strategies evaluated under different scenarios, increasing gas prices and parking fees will lead to larger reductions in total kilometers traveled. Copyright © 2011 Elsevier Ltd. All rights reserved.
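
    The fixed effects (within) estimator favoured by this study can be sketched on simulated panel data by demeaning each owner's observations before least squares; variable names, scales and coefficients below are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# simulated panel: 500 motorcycle owners observed for 4 years
n_id, n_t = 500, 4
ids = np.repeat(np.arange(n_id), n_t)
alpha_i = rng.normal(0, 2.0, n_id)[ids]                 # unobserved individual effects
gas_price = rng.normal(30, 3, n_id * n_t)               # invented scales
parking_fee = rng.normal(20, 5, n_id * n_t)
km = 120 - 1.5 * gas_price - 0.8 * parking_fee + alpha_i + rng.normal(0, 3, n_id * n_t)

df = pd.DataFrame({"id": ids, "km": km, "gas": gas_price, "park": parking_fee})

# within transformation: subtract individual means to sweep out the fixed effects
cols = ["km", "gas", "park"]
demeaned = df[cols] - df.groupby("id")[cols].transform("mean")
X = demeaned[["gas", "park"]].to_numpy()
y = demeaned["km"].to_numpy()
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"fixed-effects estimates: gas {beta[0]:.2f}, parking {beta[1]:.2f} (true: -1.50, -0.80)")
```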

  16. Analyzing the Mixing Dynamics of an Industrial Batch Bin Blender via Discrete Element Modeling Method

    Directory of Open Access Journals (Sweden)

    Maitraye Sen

    2017-04-01

    Full Text Available A discrete element model (DEM has been developed for an industrial batch bin blender in which three different types of materials are mixed. The mixing dynamics have been evaluated from a model-based study with respect to the blend critical quality attributes (CQAs which are relative standard deviation (RSD and segregation intensity. In the actual industrial setup, a sensor mounted on the blender lid is used to determine the blend composition in this region. A model-based analysis has been used to understand the mixing efficiency in the other zones inside the blender and to determine if the data obtained near the blender-lid region are able to provide a good representation of the overall blend quality. Sub-optimal mixing zones have been identified and other potential sampling locations have been investigated in order to obtain a good approximation of the blend variability. The model has been used to study how the mixing efficiency can be improved by varying the key processing parameters, i.e., blender RPM/speed, fill level/volume and loading order. Both segregation intensity and RSD reduce at a lower fill level and higher blender RPM and are a function of the mixing time. This work demonstrates the use of a model-based approach to improve process knowledge regarding a pharmaceutical mixing process. The model can be used to acquire qualitative information about the influence of different critical process parameters and equipment geometry on the mixing dynamics.
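
    The blend critical quality attributes mentioned above are straightforward to compute once component concentrations have been sampled; the numpy sketch below uses made-up sample values, and the intensity-of-segregation style normalisation shown is one common convention rather than necessarily the exact definition used in the paper.

        import numpy as np

        # Hypothetical API mass fractions sampled at several locations in the blender.
        samples = np.array([0.108, 0.095, 0.102, 0.110, 0.097, 0.101, 0.092, 0.105])

        mean_c = samples.mean()
        rsd = samples.std(ddof=1) / mean_c            # relative standard deviation
        # Variance relative to the fully segregated limit c*(1-c) (Danckwerts-style).
        intensity = samples.var(ddof=1) / (mean_c * (1.0 - mean_c))
        print(f"RSD = {rsd:.3f}, segregation intensity ~ {intensity:.5f}")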

  17. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    Science.gov (United States)

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

    An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulation of vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of the nonlocal convective mixing scheme with varying upward mixing rates in the atmospheric boundary layer and its impact on the concentrations of pollutants calculated with chemical and air-quality models. The scheme was additionally compared against a local eddy-diffusivity scheme (KSC). Simulated concentrations of NO2 and nitrate wet deposition obtained with the CON scheme are in general higher and closer to the observations than those obtained with the KSC scheme (of the order of 15-20% for the concentrations). To examine the performance of the scheme, simulated and measured NO2 concentrations and nitrate wet deposition were compared for the year 2002. The comparison was made for the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), in which both schemes were incorporated.

  18. The transition model test for serial dependence in mixed-effects models for binary data

    DEFF Research Database (Denmark)

    Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders

    2017-01-01

    Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...

  19. Longitudinal mixed-effects models for latent cognitive function

    NARCIS (Netherlands)

    van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela

    2015-01-01

    A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response

  20. Mixed layer modeling in the East Pacific warm pool during 2002

    Science.gov (United States)

    Van Roekel, Luke P.; Maloney, Eric D.

    2012-06-01

    Two vertical mixing models (the modified dynamic instability model of Price et al.; PWP, and K-Profile Parameterizaton; KPP) are used to analyze intraseasonal sea surface temperature (SST) variability in the northeast tropical Pacific near the Costa Rica Dome during boreal summer of 2002. Anomalies in surface latent heat flux and shortwave radiation are the root cause of the three intraseasonal SST oscillations of order 1°C amplitude that occur during this time, although surface stress variations have a significant impact on the third event. A slab ocean model that uses observed monthly varying mixed layer depths and accounts for penetrating shortwave radiation appears to well-simulate the first two SST oscillations, but not the third. The third oscillation is associated with small mixed layer depths (impact these intraseasonal oscillations. These results suggest that a slab ocean coupled to an atmospheric general circulation model, as used in previous studies of east Pacific intraseasonal variability, may not be entirely adequate to realistically simulate SST variations. Further, while most of the results from the PWP and KPP models are similar, some important differences that emerge are discussed.

  1. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.
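
    The graded response (mixed cumulative logit) model referred to above has a simple closed form for the category probabilities: with discrimination a and increasing thresholds b_1 < ... < b_{K-1}, the cumulative probabilities are logistic in a(theta - b_k) and the category probabilities are differences of adjacent cumulative terms. A small numpy sketch with invented parameter values:

        import numpy as np

        def grm_probs(theta, a, b):
            """Category probabilities of the graded response model.
            theta: latent trait; a: discrimination; b: increasing thresholds."""
            b = np.asarray(b, dtype=float)
            cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(Y >= k), k = 1..K-1
            cum = np.concatenate(([1.0], cum, [0.0]))      # pad with P(Y >= 0) = 1, P(Y >= K) = 0
            return cum[:-1] - cum[1:]                      # P(Y = k), k = 0..K-1

        # Example: a five-category quality-of-life item (parameters are made up).
        print(grm_probs(theta=0.3, a=1.4, b=[-1.5, -0.4, 0.6, 1.8]))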

  2. Criticality in the configuration-mixed interacting boson model (1) $U(5)-\\hat{Q}(\\chi)\\cdot\\hat{Q}(\\chi)$ mixing

    CERN Document Server

    Hellemans, V; De Baerdemacker, S; Heyde, K

    2008-01-01

    The case of U(5)--$\\hat{Q}(\\chi)\\cdot\\hat{Q}(\\chi)$ mixing in the configuration-mixed Interacting Boson Model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively.

  3. Laminar/transition sweeping flow-mixing model for wire-wrapped LMFBR assemblies

    International Nuclear Information System (INIS)

    Burns, K.F.; Rohsenow, W.M.; Todreas, N.E.

    1980-07-01

    Recent interest in analyzing the thermal hydraulic characteristics of LMFBR assemblies operating in the mixed convection regime motivates the extension of the aforementioned turbulent sweeping flow model to low Reynolds number flows. The accuracy to which knowledge of the mixing parameters is required has not been well determined, due to the increased influence of conduction and buoyancy effects with respect to energy transport at low Reynolds numbers. This study represents a best-estimate attempt to correlate the existing low Reynolds number sweeping flow data. The laminar/transition model which is presented is expected to be useful in analyzing mixed convection conditions. However, the justification for making additional improvements is contingent upon two factors. First, the ability of the proposed laminar/transition model to predict additional low Reynolds number sweeping flow data for other geometries needs to be investigated. Second, the sensitivity of temperature predictions to uncertainties in the values of the sweeping flow parameters should be quantified

  4. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    International Nuclear Information System (INIS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system and the retrieval of component temperatures from directional remote sensing measurements by inverting this model are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE is 0.2 K between the observed and modelled brightness temperatures, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of retrieved component temperatures to fractional cover, the linear mixing model gives more accurate retrievals for both soil and vegetation temperatures under intermediate fractional cover conditions
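
    The inversion evaluated above can be written as a small least-squares problem: each directional brightness temperature (taken to the fourth power, under a grey-body simplification) is a fractional-cover-weighted sum of the soil and vegetation contributions. All numbers in the sketch below are illustrative and not the paper's data.

        import numpy as np

        # Fractional vegetation cover seen at each view angle (hypothetical values).
        f_veg = np.array([0.45, 0.55, 0.65, 0.75])
        # Observed directional brightness temperatures (K) at the same angles.
        t_obs = np.array([303.1, 302.0, 300.8, 299.7])

        # Linear mixing in T^4 space: T_obs^4 = f_veg*T_veg^4 + (1 - f_veg)*T_soil^4
        A = np.column_stack([f_veg, 1.0 - f_veg])
        x, *_ = np.linalg.lstsq(A, t_obs**4, rcond=None)
        t_veg, t_soil = x**0.25
        print(f"retrieved T_veg = {t_veg:.1f} K, T_soil = {t_soil:.1f} K")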

  5. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    Science.gov (United States)

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  6. Multifractal Modeling of Turbulent Mixing

    Science.gov (United States)

    Samiee, Mehdi; Zayernouri, Mohsen; Meerschaert, Mark M.

    2017-11-01

    Stochastic processes in random media are emerging as interesting tools for modeling anomalous transport phenomena. Applications include intermittent passive scalar transport with background noise in turbulent flows, which are observed in atmospheric boundary layers, turbulent mixing in reactive flows, and long-range dependent flow fields in disordered/fractal environments. In this work, we propose a nonlocal scalar transport equation involving the fractional Laplacian, where the corresponding fractional index is linked to the multifractal structure of the nonlinear passive scalar power spectrum. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).
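
    A schematic form of the kind of nonlocal scalar transport equation described above (not necessarily the authors' exact formulation) replaces ordinary Fickian diffusion with a fractional Laplacian whose order alpha is linked to the scaling of the passive scalar spectrum:

        \frac{\partial \theta}{\partial t} + \mathbf{u}\cdot\nabla\theta
            = -\kappa_\alpha\,(-\Delta)^{\alpha/2}\,\theta + f,
        \qquad 0 < \alpha \le 2,

    where alpha = 2 recovers the classical Laplacian diffusion term.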

  7. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

    Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted here, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.

  8. PREDICTION OF THE MIXING ENTHALPIES OF BINARY LIQUID ALLOYS BY MOLECULAR INTERACTION VOLUME MODEL

    Institute of Scientific and Technical Information of China (English)

    H.W.Yang; D.P.Tao; Z.H.Zhou

    2008-01-01

    The mixing enthalpies of 23 binary liquid alloys are calculated by molecular interaction volume model (MIVM), which is a two-parameter model with the partial molar infinite dilute mixing enthalpies. The predicted values are in agreement with the experimental data and then indicate that the model is reliable and convenient.

  9. Computer modeling of ORNL storage tank sludge mobilization and mixing

    International Nuclear Information System (INIS)

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks

  10. Data on copula modeling of mixed discrete and continuous neural time series.

    Science.gov (United States)

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.

  11. Wax Precipitation Modeled with Many Mixed Solid Phases

    DEFF Research Database (Denmark)

    Heidemann, Robert A.; Madsen, Jesper; Stenby, Erling Halfdan

    2005-01-01

    The behavior of the Coutinho UNIQUAC model for solid wax phases has been examined. The model can produce as many mixed solid phases as the number of waxy components. In binary mixtures, the solid rich in the lighter component contains little of the heavier component but the second phase shows sub......-temperature and low-temperature forms, are pure. Model calculations compare well with the data of Pauly et al. for C18 to C30 waxes precipitating from n-decane solutions. (C) 2004 American Institute of Chemical Engineers....

  12. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))

  13. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for introducing a comprehensive method for detecting fraud calls in future work.
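
    The order-selection step described above (choosing the number of Gaussian components from likelihood-based criteria) can be sketched with scikit-learn; this generic illustration uses simulated two-dimensional call features and the default initialisation rather than the authors' kernel-based scheme.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(42)
        # Simulated call features (e.g. log duration, log cost) from two populations.
        X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(300, 2)),
                       rng.normal([3.0, 2.0], 0.8, size=(200, 2))])

        # Fit mixtures of increasing order and pick the order minimising AIC (or BIC).
        fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
                for k in range(1, 6)}
        best_k = min(fits, key=lambda k: fits[k].aic(X))
        print("AIC-selected number of components:", best_k)
        print("BIC values:", {k: round(m.bic(X), 1) for k, m in fits.items()})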

  14. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Stephen A.; McCoy, Renata B.; Morrison, Hugh; Ackerman, Andrew S.; Avramov, Alexander; de Boer, Gijs; Chen, Mingxuan; Cole, Jason N.S.; Del Genio, Anthony D.; Falk, Michael; Foster, Michael J.; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat F.; Larson, Vincent E.; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg M.; Menon, Surabi; Neggers, Roel A. J.; Park, Sungsu; Poellot, Michael R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben J.; Shupe, Matthew D.; Spangenberg, Douglas A.; Sud, Yogesh C.; Turner, David D.; Veron, Dana E.; von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey B.; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, Gong

    2009-02-02

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 °C. The observed average liquid water path of around 160 g m⁻² was about two-thirds of the adiabatic value and much greater than the average mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m⁻². The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics suggest that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics.

  15. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    Science.gov (United States)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
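
    Reading the diagnostics above literally, aging by mixing is the difference between age of air and the residual circulation transit time, and the mixing efficiency is the relative increase of AoA over RCTT; a two-line illustration with made-up numbers (the published definition may differ in detail):

        aoa, rctt = 4.6, 2.9                       # years, hypothetical values
        aging_by_mixing = aoa - rctt               # aging by resolved + subgrid mixing
        mixing_efficiency = aging_by_mixing / rctt
        print(round(aging_by_mixing, 2), round(mixing_efficiency, 2))   # 1.7 0.59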

  16. CFD modeling of thermal mixing in a T-junction geometry using LES model

    Energy Technology Data Exchange (ETDEWEB)

    Ayhan, Hueseyin, E-mail: huseyinayhan@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey); Soekmen, Cemal Niyazi, E-mail: cemalniyazi.sokmen@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey)

    2012-12-15

    Highlights: • CFD simulations of temperature and velocity fluctuations for thermal mixing cases in a T-junction are performed. • It is found that the frequency range of 2-5 Hz contains most of the energy and may therefore cause thermal fatigue. • This study shows that RANS-based calculations fail to predict a realistic mixing between the fluids. • The LES model can predict instantaneous turbulence behavior. - Abstract: Turbulent mixing of fluids at different temperatures can lead to temperature fluctuations in the pipe material. These fluctuations, or thermal striping, induce cyclical thermal stresses, and the resulting thermal fatigue may cause unexpected failure of the pipe material. Therefore, an accurate characterization of temperature fluctuations is important in order to estimate the lifetime of the pipe material. Thermal fatigue of the coolant circuits of nuclear power plants is one of the major issues in nuclear safety. To investigate thermal fatigue damage, the OECD/NEA has recently organized a blind benchmark study, including some results of the present work, for the prediction of temperature and velocity fluctuations in a thermal mixing experiment in a T-junction. This paper aims to estimate the frequency of velocity and temperature fluctuations in the mixing region using Computational Fluid Dynamics (CFD). Reynolds Averaged Navier-Stokes and Large Eddy Simulation (LES) models were used to simulate turbulence. CFD results were compared with the available experimental results. Predicted LES results, even on a coarse mesh, were found to be in good agreement with the experimental results in terms of amplitude and frequency of temperature and velocity fluctuations. Analysis of the temperature fluctuations and the power spectral densities (PSD) at the locations having the strongest temperature fluctuations in the tee junction shows that the frequency range of 2-5 Hz
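
    The spectral analysis referred to above (power spectral densities of the temperature signal in the mixing region) is commonly carried out with Welch's method; the scipy sketch below uses a synthetic fluctuation signal and an assumed sampling rate purely for illustration.

        import numpy as np
        from scipy.signal import welch

        fs = 100.0                                  # sampling frequency in Hz (assumed)
        t = np.arange(0.0, 60.0, 1.0 / fs)
        # Synthetic temperature fluctuation: a ~3 Hz component buried in noise.
        temp = 2.0 * np.sin(2 * np.pi * 3.0 * t) \
               + np.random.default_rng(1).normal(0.0, 1.0, t.size)

        f, pxx = welch(temp, fs=fs, nperseg=1024)
        print("dominant frequency ~", f[np.argmax(pxx)], "Hz")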

  17. Two-level mixed modeling of longitudinal pedigree data for genetic association analysis

    DEFF Research Database (Denmark)

    Tan, Q.

    2013-01-01

    Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects on both the mean level of the phenotype and its rate of change over the time of follow-up. Approaches have been proposed to integrate kinship correlation into the mixed effect models to explicitly model the genetic relationship, which have been proven as an efficient way for dealing with sample clustering in pedigree data. Although useful for adjusting relatedness in the mixed ... assess the genetic associations with the mean level and the rate of change in a phenotype, both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees...

  18. Modeling Photodetachment from HO2- Using the pd Case of the Generalized Mixed Character Molecular Orbital Model

    Science.gov (United States)

    Blackstone, Christopher C.; Sanov, Andrei

    2016-06-01

    Using the generalized model for photodetachment of electrons from mixed-character molecular orbitals, we gain insight into the nature of the HOMO of HO2- by treating it as a coherent superposition of one p- and one d-type atomic orbital. Fitting the pd model function to the ab initio calculated HOMO of HO2- yields a fractional d-character, γp, of 0.979. The modeled curve of the anisotropy parameter, β, as a function of electron kinetic energy for a pd-type mixed-character orbital is matched to the experimental data.

  19. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  20. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Brett H. [PITT PACC, Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A., E-mail: andrewsb@pitt.edu [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)

    2017-02-01

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H] unlike the observed bimodality (separate high- α and low- α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α -elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  1. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    International Nuclear Information System (INIS)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.

    2017-01-01

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H] unlike the observed bimodality (separate high- α and low- α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α -elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  2. Chlorophyll modulation of mixed layer thermodynamics in a mixed-layer isopycnal general circulation model - An example from Arabian Sea and Equatorial Pacific

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.

    and supported by quasi-steady upwelling. Remotely sensed chlorophyll pigment concentrations from the Coastal Zone Color Scanner (CZCS) are used to investigate the chlorophyll modulation of ocean mixed layer thermodynamics in a bulk mixed-layer model, embedded...

  3. Mixed models in cerebral ischemia study

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Dal Molin Ribeiro

    2016-06-01

    Full Text Available Data modeling for longitudinal studies stands out in the current scientific scenario, especially in the areas of health and biological sciences; such designs induce correlation between measurements on the same observed unit. Thus, modeling of the intra-individual dependency is required through the choice of a covariance structure that is able to accommodate the sample variability. However, the lack of an appropriate methodology for correlated data analysis may result in an increased occurrence of type I or type II errors and in under- or overestimation of the standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable latency in an experiment investigating memory deficits in animals subjected to cerebral ischemia when treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, since the basic assumptions were met and the results obtained corroborate the biological evidence; that is, the FO treatment was found to be effective in alleviating the cognitive effects caused by cerebral ischemia.

  4. Study on system dynamics of evolutionary mix-game models

    Science.gov (United States)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The mix-game model is an extension of the agent-based minority game (MG) model and is used to simulate real financial markets. Different from the MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. These two groups of agents have different bounded abilities to deal with historical information and to count their own performance. In this paper, we modify the mix-game model by assigning evolution abilities to agents: if the winning rate of an agent is smaller than a threshold, it copies the best strategies another agent has, and agents repeat such evolution at certain time intervals. Through simulations this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the threshold of Group 1 increases; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the threshold of Group 2 increases; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of the system dynamics under different time intervals of strategy change are qualitatively similar to each other, but they differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system becomes more and more stable and the performance of agents in both groups also improves.

  5. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

    SAM

    2014-05-07

    May 7, 2014 ... cused mainly on the yield of cottonseed and fiber, with the CA 324 and ..... Gaps and opportunities for agricultural sector development in ... Adaptability and stability of maize varieties using mixed models. Crop. Breeding and ...

  6. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  7. Production, decay, and mixing models of the iota meson. II

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.

    1987-01-01

    A five-channel mixing model for the ground and radially excited isoscalar pseudoscalar states and a glueball is presented. The model extends previous work by including two-body unitary corrections, following the technique of Toernqvist. The unitary corrections include contributions from three classes of two-body intermediate states: pseudoscalar-vector, pseudoscalar-scalar, and vector-vector states. All necessary three-body couplings are extracted from decay data. The solution of the mixing model provides information about the bare mass of the glueball and the fundamental quark-glue coupling. The solution also gives the composition of the wave function of the physical states in terms of the bare quark and glue states. Finally, it is shown how the coupling constants extracted from decay data can be used to calculate the decay rates of the five physical states to all two-body channels

  8. Disability and multi-state labour force choices with state dependence

    OpenAIRE

    Oguzoglu, Umut

    2010-01-01

    I use a dynamic mixed multinomial logit model with unobserved heterogeneity to study the impact of work-limiting disabilities on disaggregated labour choices. The first seven waves of the Household Income and Labour Dynamics in Australia survey are used to investigate this relationship. Findings point to strong state dependence in employment choices. Further, the impact of disability on employment outcomes is highly significant. Model simulations suggest that high cross and own state depe...

  9. Negative binomial mixed models for analyzing microbiome count data.

    Science.gov (United States)

    Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun

    2017-01-03

    Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting the association between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM), providing a useful tool for analyzing microbiome data.
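
    As a rough, generic stand-in for the negative binomial mixed models described above (not the authors' NBMM/IWLS implementation, which lives in the R package BhGLM), correlated over-dispersed counts with an offset for varying total reads can be handled in Python with a negative binomial GEE; the data frame and column names below are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # One row per sample: an OTU count, a clinical covariate, the total
        # sequence reads of the sample, and the subject it was collected from.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "count": rng.poisson(20, 60),
            "treatment": np.tile([0, 1], 30),
            "total_reads": rng.integers(8000, 20000, 60),
            "subject": np.repeat(np.arange(12), 5),
        })

        model = smf.gee(
            "count ~ treatment",
            groups="subject",
            data=df,
            family=sm.families.NegativeBinomial(alpha=1.0),
            cov_struct=sm.cov_struct.Exchangeable(),
            offset=np.log(df["total_reads"]),       # adjusts for library size
        )
        print(model.fit().summary())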

  10. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  11. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in

  12. Modeling patterns in count data using loglinear and related models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-12-01

    This report explains the use of loglinear and logit models for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods
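
    A loglinear analysis of the kind the report describes can also be sketched in Python (the report itself uses SAS): a Poisson generalized linear model relates event counts to a categorical explanatory variable, with exposure handled through a log offset. Data and names below are invented for illustration.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Invented component-failure counts with operating hours as exposure.
        df = pd.DataFrame({
            "failures": [3, 5, 1, 8, 2, 4],
            "plant": ["A", "A", "B", "B", "C", "C"],
            "hours": [1.2e4, 1.5e4, 0.9e4, 2.0e4, 1.1e4, 1.3e4],
        })

        fit = smf.glm("failures ~ C(plant)", data=df,
                      family=sm.families.Poisson(),
                      exposure=df["hours"]).fit()
        print(fit.summary())
        # Binomial counts can be handled analogously with family=sm.families.Binomial(),
        # which gives the logit models discussed in the report.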

  13. Attribution of horizontal and vertical contributions to spurious mixing in an Arbitrary Lagrangian-Eulerian ocean model

    Science.gov (United States)

    Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair

    2017-11-01

    We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
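
    The reference potential energy (RPE) used above is the potential energy of the adiabatically re-sorted density field, so it changes only when mixing moves mass across density classes. A one-dimensional-column sketch of the sorting step (a model diagnostic would sum over all cells and weight by cell volume):

        import numpy as np

        g = 9.81
        dz = np.full(10, 10.0)                       # cell thicknesses (m), hypothetical column
        rho = np.array([1025.2, 1025.0, 1025.5, 1026.0, 1025.8,
                        1026.3, 1026.1, 1026.8, 1026.6, 1027.0])  # kg m^-3, top to bottom

        def rpe(rho, dz):
            """Potential energy per unit area of the adiabatically sorted state."""
            order = np.argsort(rho)                  # lightest water first -> shallowest
            rho_s, dz_s = rho[order], dz[order]
            z_top = np.concatenate(([0.0], -np.cumsum(dz_s)[:-1]))
            z_mid = z_top - 0.5 * dz_s               # cell-centre height (z upward, surface at 0)
            return np.sum(g * rho_s * z_mid * dz_s)

        print(f"RPE = {rpe(rho, dz):.1f} J m^-2")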

  14. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    Energy Technology Data Exchange (ETDEWEB)

    Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  15. A model of radiative neutrino masses. Mixing and a possible fourth generation

    International Nuclear Information System (INIS)

    Babu, K.S.; Ma, E.; Pantaleone, J.

    1989-01-01

    We consider the phenomenological consequences of a recently proposed model with four lepton generations such that the three known neutrinos have radiatively induced Majorana masses. Mixing among generations in the presence of a heavy fourth neutrino necessitates a reevaluation of the usual experimental tests of the standard model. One interesting possibility is to have a τ lifetime longer than predicted by the standard three-generation model. Another is to have neutrino masses and mixing angles in the range needed for a natural explanation of the solar-neutrino puzzle in terms of the Mikheyev-Smirnov-Wolfenstein effect. (orig.)

  16. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
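
    A crude way to get at the quantity described above, the share of conditional variance attributable to random effects, is to compare the estimated random-effect variance with the residual variance of a fitted mixed model. The statsmodels sketch below uses a simulated random-intercept data set and is only an approximation in the spirit of the coefficient, not the authors' exact formula.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated clustered data: 30 clusters of 10, random intercepts, one covariate.
        rng = np.random.default_rng(3)
        cluster = np.repeat(np.arange(30), 10)
        u = rng.normal(0.0, 1.0, 30)[cluster]               # cluster random intercepts
        x = rng.normal(size=cluster.size)
        y = 2.0 + 0.5 * x + u + rng.normal(0.0, 1.0, cluster.size)
        df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

        fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
        var_re = float(fit.cov_re.iloc[0, 0])               # random-intercept variance
        var_resid = fit.scale                               # residual variance
        print(f"random-effect variance share ~ {var_re / (var_re + var_resid):.2f}")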

  17. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    Science.gov (United States)

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  18. Neutrino bilarge mixing and flavor physics in the flipped SU(5) model

    Energy Technology Data Exchange (ETDEWEB)

    Huang Chaoshang; Li Tianjun; Liao Wei E-mail: liaow@ictp.trieste.it

    2003-11-24

    We have constructed a specific supersymmetric flipped SU(5) GUT model in which bilarge neutrino mixing is incorporated. Because the up-type and down-type quarks in the model are flipped in the representations ten and five with respect to the usual SU(5), the radiatively generated flavor mixing in squark mass matrices due to the large neutrino mixing has a pattern different from those in the conventional SU(5) and SO(10) supersymmetric GUTs. This leads to phenomenological consequences quite different from SU(5) or SO(10) supersymmetric GUT models. That is, it has almost no impact on B physics. On the contrary, the model has effects in top and charm physics as well as lepton physics. In particular, it gives a promising prediction for the mass difference, ΔM_D, of D-D̄ mixing, which for some ranges of the parameter space with large tan β can be of the order of 10⁹ ℏ s⁻¹, one order of magnitude smaller than the experimental upper bound. In some regions of the parameter space ΔM_D can saturate the present bound. For these ranges of parameter space, t → u, c + h⁰ can reach 10⁻⁵-10⁻⁶, which would be observed at the LHC and future γ-γ colliders.

  19. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    Science.gov (United States)

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
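
    Of the candidate equations above, the Wood curve is the classic incomplete-gamma form y(t) = a t^b exp(-c t). As a minimal illustration (a fixed-effects fit to one animal's records, not the paper's non-linear mixed model), it can be fitted to monthly test-day values with scipy; the data points below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def wood(t, a, b, c):
            """Wood (1967) lactation-curve form."""
            return a * t**b * np.exp(-c * t)

        # Hypothetical monthly fat-to-protein ratios, months in milk 1..10.
        t = np.arange(1, 11, dtype=float)
        fpr = np.array([1.18, 1.10, 1.05, 1.02, 1.01, 1.00, 1.00, 1.01, 1.03, 1.05])

        params, _ = curve_fit(wood, t, fpr, p0=[1.2, -0.05, -0.01])
        print(dict(zip("abc", params.round(4))))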

  1. Modeling and analysis of ORNL horizontal storage tank mobilization and mixing

    International Nuclear Information System (INIS)

    Mahoney, L.A.; Terrones, G.; Eyler, L.L.

    1994-06-01

    The retrieval and treatment of radioactive sludges that are stored in tanks constitute a prevalent problem at several US Department of Energy sites. The tanks typically contain a settled sludge layer with non-Newtonian rheological characteristics covered by a layer of supernatant. The first step in retrieval is the mobilization and mixing of the supernatant and sludge in the storage tanks. Submerged jets have been proposed to achieve sludge mobilization in tanks, including the 189 m³ (50,000 gallon) Melton Valley Storage tanks (MVST) at Oak Ridge National Laboratory (ORNL) and the planned 378 m³ (100,000 gallon) tanks being designed as part of the MVST Capacity Increase Project (MVST-CIP). This report focuses on the modeling of mixing and mobilization in horizontal cylindrical tanks like those of the MVST design using submerged, recirculating liquid jets. The computer modeling of the mobilization and mixing processes uses the TEMPEST computational fluid dynamics program (Trend and Eyler 1992). The goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents.

  2. UPTRANS: an incremental transport model with feedback for quick-response strategy evaluation

    CSIR Research Space (South Africa)

    Venter, C

    2009-07-01

    Full Text Available The paper describes the development of a prototype transport model to be used for high-level evaluation of a potentially large number of alternative land use-transport scenarios. It uses advanced logit modelling to capture travel behaviour change...

  3. Introduction to models of neutrino masses and mixings

    International Nuclear Information System (INIS)

    Joshipura, Anjan S.

    2004-01-01

    This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are: (i) different types of neutrino masses; (ii) the structure of neutrino masses and mixing needed to understand neutrino oscillation results; (iii) mechanisms to generate neutrino masses in gauge theories; and (iv) a discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)

  4. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
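
    A minimal sketch of the mass balance behind a stable isotope mixing model, assuming two tracers and three sources so that the source fractions solve an exactly determined linear system; the end-member signatures and the mixture value are invented for illustration.

```python
import numpy as np

# Hypothetical end-member signatures: rows = sources, columns = two tracers (e.g. d13C, d15N)
sources = np.array([
    [-26.0, 4.0],   # source A
    [-20.0, 9.0],   # source B
    [-12.0, 6.0],   # source C
])
mixture = np.array([-19.5, 6.8])

# Mass balance: each tracer in the mixture is a weighted average of the sources,
# and the source fractions sum to one.
A = np.vstack([sources.T, np.ones(3)])   # 3 equations, 3 unknown fractions
b = np.append(mixture, 1.0)
fractions = np.linalg.solve(A, b)
print(np.round(fractions, 3), "sum =", round(fractions.sum(), 3))
```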

  5. Two-equation and multi-fluid turbulence models for Rayleigh–Taylor mixing

    International Nuclear Information System (INIS)

    Kokkinakis, I.W.; Drikakis, D.; Youngs, D.L.; Williams, R.J.R.

    2015-01-01

    Highlights: • We present a new improved version of the K–L model. • The improved K–L is found in good agreement with the multi-fluid model and ILES. • The study concerns Rayleigh–Taylor flows at initial density ratios 3:1 and 20:1. - Abstract: This paper presents a new, improved version of the K–L model, as well as a detailed investigation of K–L and multi-fluid models with reference to high-resolution implicit large eddy simulations of compressible Rayleigh–Taylor mixing. The accuracy of the models is examined for different interface pressures and specific heat ratios for Rayleigh–Taylor flows at initial density ratios 3:1 and 20:1. It is shown that the original version of the K–L model requires modifications in order to provide comparable results to the multi-fluid model. The modifications concern the addition of an enthalpy diffusion term to the energy equation; the formulation of the turbulent kinetic energy (source) term in the K equation; and the calculation of the local Atwood number. The proposed modifications significantly improve the results of the K–L model, which are found in good agreement with the multi-fluid model and implicit large eddy simulations with respect to the self-similar mixing width; peak turbulent kinetic energy growth rate, as well as volume fraction and turbulent kinetic energy profiles. However, a key advantage of the two-fluid model is that it can represent the degree of molecular mixing in a direct way, by transferring mass between the two phases. The limitations of the single-fluid K–L model as well as the merits of more advanced Reynolds-averaged Navier–Stokes models are also discussed throughout the paper.

  6. Business models in commercial media markets: Bargaining, advertising, and mixing

    OpenAIRE

    Thöne, Miriam; Rasch, Alexander; Wenzel, Tobias

    2016-01-01

    We consider a product and a media market and show how a change in the business model employed by the media platforms affects consumers, producers (or advertisers), and price negotiations for advertisements. On both markets, two firms differentiated à la Hotelling compete for consumers. On the media market, consumers can mix between the two outlets whereas on the product market, consumers have to decide for one supplier. With pay-TV, as opposed to free-to-air, mixing by consumers disappears, p...

  7. Numerical modeling of two-phase binary fluid mixing using mixed finite elements

    KAUST Repository

    Sun, Shuyu

    2012-07-27

    Diffusion coefficients of dense gases in liquids can be measured by considering two-phase binary nonequilibrium fluid mixing in a closed cell with a fixed volume. This process is based on convection and diffusion in each phase. Numerical simulation of the mixing often requires accurate algorithms. In this paper, we design two efficient numerical methods for simulating the mixing of two-phase binary fluids in one-dimensional, highly permeable media. A mathematical model for isothermal compositional two-phase flow in porous media is established based on Darcy's law, material balance, local thermodynamic equilibrium for the phases, and diffusion across the phases. The time-lag and operator-splitting techniques are used to decompose each convection-diffusion equation into two steps: a diffusion step and a convection step. The mixed finite element (MFE) method is used for the diffusion equation because it achieves a high-order, stable approximation of both the scalar variable and the diffusive fluxes across grid-cell interfaces. We employ the characteristic finite element method with a moving mesh to track the liquid-gas interface. Based on the above schemes, we propose two methods: a single-domain and a two-domain method. The main difference between the two methods is that the two-domain method assumes a sharp interface between the two fluid phases, while the single-domain method allows fractional saturation levels. The two-domain method treats the gas domain and the liquid domain separately; because the liquid-gas interface moves with time, it must work with a moving mesh. The single-domain method, on the other hand, allows the use of a fixed mesh. We derive the formulas to compute the diffusive flux for the MFE in both methods. The single-domain method is extended to multiple dimensions. Numerical results indicate that both methods accurately describe the evolution of the pressure and liquid level. © 2012 Springer Science+Business Media B.V.

  8. Modelling the development of mixing height in near equatorial region

    Energy Technology Data Exchange (ETDEWEB)

    Samah, A.A. [Univ. of Malaya, Air Pollution Research Unit, Kuala Lumpur (Malaysia)

    1997-10-01

    Most current air pollution models were developed for mid-latitude conditions, and many of the empirical parameters used are therefore based on observations taken in the mid-latitude boundary layer, which is physically different from the equatorial boundary layer. In the equatorial boundary layer the Coriolis parameter f is small or zero, and moisture plays a more important role in controlling stability and the surface energy balance. Air pollution models such as OMLMULTI or ADMS, which were basically developed for mid-latitude conditions, must therefore be applied with some caution and need some adaptation to properly simulate the properties of the equatorial boundary layer. This work elucidates some of the problems of modelling the evolution of mixing height in the equatorial region. The mixing height estimates were compared with routine observations taken during a severe air pollution episode in Malaysia. (au)

  9. Stochastic Mixed-Effects Parameters Bertalanffy Process, with Applications to Tree Crown Width Modeling

    Directory of Open Access Journals (Sweden)

    Petras Rupšys

    2015-01-01

    Full Text Available A stochastic modeling approach based on the Bertalanffy law has gained interest due to its ability to produce more accurate results than deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using a maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and an analysis of residuals. We use data on Scots pine trees provided by the Lithuanian National Forest Inventory to illustrate our modeling technique. Comparison of the crown width values predicted by the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.

  10. Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah

    Science.gov (United States)

    Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome are modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that the assumptions are valid, including independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers; and a test statistic is calculated to determine the aptness of the model. This study used a binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, as compared to a Malay student.
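
    The sketch below shows the kind of binary logit fit described above, using statsmodels on simulated data. The predictors (a family-history indicator and an ethnicity-by-routine-meals interaction) are stand-ins for the study's variables, and all coefficients are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500

# Simulated student-level data (all values hypothetical)
df = pd.DataFrame({
    "family_obesity": rng.integers(0, 2, n),   # 1 = family history of obesity
    "non_malay": rng.integers(0, 2, n),        # 1 = non-Malay student
    "routine_meals": rng.integers(0, 2, n),    # 1 = frequently takes routine meals
})
true_logit = -1.5 + 1.0 * df.family_obesity + 0.8 * df.non_malay * df.routine_meals
df["overweight"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Binary logit with a main effect and an interaction term
fit = smf.logit("overweight ~ family_obesity + non_malay:routine_meals", data=df).fit(disp=0)
print(fit.params)
print("odds ratios:", np.round(np.exp(fit.params), 2).to_dict())
```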

  11. Joint Residence-Workplace Location Choice Model Based on Household Decision Behavior

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-01-01

    Full Text Available Residence location and workplace are the two most important urban land-use types, and there exist strong interdependencies between them. Existing research often assumes that one choice dimension is correlated with the other. Using the mixed logit framework, three groups of choice models are developed to illustrate such choice dependencies. First, for all households, this paper presents a basic model of residence location and workplace choice without a decision sequence, based on the assumption that the two choice behaviors are independent of each other. Second, the paper clusters all households into two groups, choosing residence or workplace first, and formulates residence location and workplace choice models under the constraint of the decision sequence. Third, this paper combines residence location and workplace together as the choice alternative and puts forward a joint choice model. A questionnaire survey was implemented in Beijing to collect data from 1994 households. Estimation results indicate that the joint choice model fits the data significantly better, and the elasticity analyses show that the joint choice model reflects the influence of the relevant factors on choice probability well and leads to job-housing balance.
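
    As a hedged sketch of the mixed logit machinery invoked above (not the paper's estimated model), the code below computes simulated choice probabilities for a three-alternative choice with a normally distributed time coefficient, which is the core ingredient of simulated maximum likelihood estimation; the attribute values and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Attributes of three alternatives (e.g. commute time in minutes and cost), invented values
X = np.array([
    [30.0, 5.0],
    [45.0, 3.0],
    [20.0, 8.0],
])

beta_time_mean, beta_time_sd = -0.05, 0.02   # random (normally distributed) time coefficient
beta_cost = -0.30                            # fixed cost coefficient
n_draws = 2000                               # simulation draws over the mixing distribution

probs = np.zeros(X.shape[0])
for b_time in rng.normal(beta_time_mean, beta_time_sd, size=n_draws):
    v = b_time * X[:, 0] + beta_cost * X[:, 1]   # systematic utilities for this draw
    expv = np.exp(v - v.max())                   # numerically stable logit kernel
    probs += expv / expv.sum()
probs /= n_draws                                 # average over draws = mixed logit probabilities

print(np.round(probs, 3), "sum =", round(probs.sum(), 3))
```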

  12. MATRIX (Multiconfiguration Aerosol TRacker of mIXing state): an aerosol microphysical module for global atmospheric models

    OpenAIRE

    Bauer , S. E.; Wright , D.; Koch , D.; Lewis , E. R.; Mcgraw , R.; Chang , L.-S.; Schwartz , S. E.; Ruedy , R.

    2008-01-01

    A new aerosol microphysical module MATRIX, the Multiconfiguration Aerosol TRacker of mIXing state, and its application in the Goddard Institute for Space Studies (GISS) climate model (ModelE) are described. This module, which is based on the quadrature method of moments (QMOM), represents nucleation, condensation, coagulation, internal and external mixing, and cloud-drop activation and provides aerosol particle mass and number concentration and particle size information for up to 16 mixed-mod...

  13. A multilevel nonlinear mixed-effects approach to model growth in pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Danfær, Allan Christian; Sørensen, H.

    2010-01-01

    Growth functions have been used to predict market weight of pigs and maximize return over feed costs. This study was undertaken to compare 4 growth functions and methods of analyzing data, particularly one that considers nonlinear repeated measures. Data were collected from an experiment with 40...... pigs maintained from birth to maturity and their BW measured weekly or every 2 wk up to 1,007 d. Gompertz, logistic, Bridges, and Lopez functions were fitted to the data and compared using information criteria. For each function, a multilevel nonlinear mixed effects model was employed because....... Furthermore, studies should consider adding continuous autoregressive process when analyzing nonlinear mixed models with repeated measures....

  14. Error characterization of CO2 vertical mixing in the atmospheric transport model WRF-VPRM

    Directory of Open Access Journals (Sweden)

    U. Karstens

    2012-03-01

    Full Text Available One of the dominant uncertainties in inverse estimates of regional CO2 surface-atmosphere fluxes is related to model errors in vertical transport within the planetary boundary layer (PBL). In this study we present the results from a synthetic experiment using the atmospheric model WRF-VPRM to realistically simulate transport of CO2 for large parts of the European continent at 10 km spatial resolution. To elucidate the impact of vertical mixing error on modeled CO2 mixing ratios we simulated a month during the growing season (August 2006) with different commonly used parameterizations of the PBL (the Mellor-Yamada-Janjić (MYJ) and Yonsei University (YSU) schemes). To isolate the effect of transport errors we prescribed the same CO2 surface fluxes for both simulations. Differences in simulated CO2 mixing ratios (model bias) were on the order of 3 ppm during daytime, with larger values at night. We present a simple method to reduce this bias by 70-80% when the true height of the mixed layer is known.

  15. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

    Application of mixed models for the assessment of genotype and environment interactions in cotton (Gossypium hirsutum) cultivars in Mozambique. ... The cultivars ISA 205, STAM 42 and REMU 40 showed superior productivity when they were selected by the Harmonic Mean of Genotypic Values (HMGV) criterion in relation ...

  16. SU-D-204-05: Fitting Four NTCP Models to Treatment Outcome Data of Salivary Glands Recorded Six Months After Radiation Therapy for Head and Neck Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Mavroidis, P; Price, A; Kostich, M; Green, R; Das, S; Marks, L; Chera, B [University North Carolina, Chapel Hill, NC (United States); Amdur, R; Mendenhall, W [University of Florida, Gainesville, FL (United States); Sheets, N [University of North Carolina, Raleigh, NC (United States)

    2016-06-15

    Purpose: To estimate the radiobiological parameters of four popular NTCP models that describe the dose-response relations of the salivary glands to the severity of patient-reported dry mouth 6 months post chemo-radiotherapy; to identify the glands which best correlate with the manifestation of those clinical endpoints; and to evaluate the goodness-of-fit of the NTCP models. Methods: Forty-three patients were treated on a prospective multi-institutional phase II study for oropharyngeal squamous cell carcinoma. All patients received 60 Gy IMRT and reported symptoms using the novel patient-reported outcome version of the CTCAE. We derived the individual patient dosimetric data of the parotid and submandibular glands (SMG) as separate structures as well as in combinations. The Lyman-Kutcher-Burman (LKB), Relative Seriality (RS), Logit and Relative Logit (RL) NTCP models were used to fit the patient data. The fitting of the different models was assessed through the area under the receiver operating characteristic curve (AUC) and the Odds Ratio methods. Results: The AUC values were highest for the contralateral parotid for Grade ≥ 2 (0.762 for the LKB, RS and Logit models and 0.753 for the RL model). For the salivary glands the AUC values were 0.725 for the LKB, RS and Logit models and 0.721 for the RL model. For the contralateral SMG the AUC values were 0.721 for LKB, 0.714 for Logit and 0.712 for RS and RL. The Odds Ratio for the contralateral parotid was 5.8 (1.3-25.5) for all four NTCP models at the radiobiological dose threshold of 21 Gy. Conclusion: All the examined NTCP models could fit the clinical data with very similar accuracy. The contralateral parotid gland appears to correlate best with the clinical endpoints of severe/very severe dry mouth. An EQD2Gy dose of 21 Gy appears to be a safe threshold to be used as a constraint in treatment planning.
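
    For orientation, the sketch below evaluates two of the NTCP forms named in the abstract, the Lyman-Kutcher-Burman (probit) model and a logit model, as functions of a summary dose. The parameter values (TD50, m, slope) are placeholders, not the fitted values from this study.

```python
import numpy as np
from scipy.stats import norm

def ntcp_lkb(dose, td50, m):
    """Lyman-Kutcher-Burman (probit) NTCP evaluated at a summary dose such as the gEUD."""
    return norm.cdf((dose - td50) / (m * td50))

def ntcp_logit(dose, d50, k):
    """A common logit NTCP form: 1 / (1 + (d50 / dose)**k)."""
    dose = np.asarray(dose, dtype=float)
    return 1.0 / (1.0 + (d50 / dose) ** k)

# Placeholder parameters: 50% complication probability at 30 Gy, comparable slopes
doses = np.linspace(5.0, 60.0, 12)
print("LKB  :", np.round(ntcp_lkb(doses, td50=30.0, m=0.45), 3))
print("Logit:", np.round(ntcp_logit(doses, d50=30.0, k=4.0), 3))
```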

  17. On the TAP Free Energy in the Mixed p-Spin Models

    Science.gov (United States)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2018-05-01

    Thouless et al. (Phys Mag 35(3):593-601, 1977) derived a representation for the free energy of the Sherrington-Kirkpatrick model, called the TAP free energy, written as the difference of the energy and the entropy on the extended configuration space of local magnetizations, with an Onsager correction term. In the setting of mixed p-spin models with Ising spins, we prove that the free energy can indeed be written as the supremum of the TAP free energy over the space of local magnetizations whose Edwards-Anderson order parameter (self-overlap) is to the right of the support of the Parisi measure. Furthermore, for generic mixed p-spin models, we prove that the free energy is equal to the TAP free energy evaluated on the local magnetization of any pure state.

  18. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  19. Fluctuations in a mixed IS-LM business cycle model

    Directory of Open Access Journals (Sweden)

    Hamad Talibi Alaoui

    2008-09-01

    Full Text Available In the present paper, we extend a delayed IS-LM business cycle model by introducing an additional advance (anticipated capital stock) in the investment function. The resulting model is represented in terms of mixed differential equations. Taking the deviating argument τ (advance and delay) as a bifurcation parameter, we investigate the local stability and the local Hopf bifurcation. Some numerical simulations are also given to support the theoretical analysis.

  20. Access to and Competition Between Airports: A Case Study for the San Francisco Bay Area

    NARCIS (Netherlands)

    Pels, E.; Nijkamp, P.; Rietveld, P.

    2003-01-01

    In this paper, (nested) logit models that describe the combined access-mode and airport choice are estimated. A three-level nested logit model is rejected. A two-level nested logit model with the airport choice at the top level and the access mode choice at the lower level is preferred. From the
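
    To make the two-level structure described above concrete, the following sketch computes choice probabilities for a hand-built nested logit with airports as nests and access modes within each nest. The utilities and the nesting (log-sum) parameter are illustrative values, not estimates from the paper.

```python
import numpy as np

lam = 0.6                 # nesting (log-sum) parameter, 0 < lam <= 1

# Systematic utilities: rows = airports (nests), columns = access modes (invented values)
V = np.array([
    [-1.2, -0.8, -1.5],   # airport A: car, transit, taxi
    [-1.0, -1.4, -1.1],   # airport B: car, transit, taxi
])
W = np.array([0.3, 0.0])  # airport-specific utility components

iv = np.log(np.exp(V / lam).sum(axis=1))             # inclusive value of each nest
p_airport = np.exp(W + lam * iv)
p_airport /= p_airport.sum()                         # upper level: airport choice

p_mode = np.exp(V / lam)
p_mode /= p_mode.sum(axis=1, keepdims=True)          # lower level: mode choice within airport

p_joint = p_airport[:, None] * p_mode                # joint airport-and-mode probabilities
print(np.round(p_joint, 3), "total =", round(p_joint.sum(), 3))
```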

  1. The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bouchard, Christopher Michael [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2011-01-01

    Standard Model contributions to neutral B meson mixing begin at the one-loop level, where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes B meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating the underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with 2+1 dynamical sea quarks.

  2. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed to both true state dependence and to unobserved heterogeneity...... (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated...

  3. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. There are many results on the speed characteristics of motorized vehicles. In this article, we study the speed distribution of mixed bicycle traffic, which has been largely ignored in the past. Field speed data were collected in Hangzhou, China, at different survey sites, under different traffic conditions, and with different percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used for testing the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data very well. The relationships between mean speed and electric bicycle proportions were then proposed using linear regression models, from which the mean speed for purely electric bicycles or purely regular bicycles can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
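
    As a small illustration of the distribution fitting described above, the sketch below fits log-normal and Weibull distributions to simulated bicycle speeds with SciPy and compares them by log-likelihood. The speed sample is synthetic, not the Hangzhou field data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic mixed bicycle speeds (km/h): regular bicycles slower, electric bicycles faster
speeds = np.concatenate([
    rng.normal(13.5, 2.5, 300),   # regular bicycles
    rng.normal(17.0, 3.0, 200),   # electric bicycles
])
speeds = speeds[speeds > 3.0]     # drop implausibly low values

for name, dist in [("log-normal", stats.lognorm), ("Weibull", stats.weibull_min)]:
    params = dist.fit(speeds, floc=0)                 # keep the location fixed at zero
    loglik = np.sum(dist.logpdf(speeds, *params))
    print(f"{name:10s} params = {np.round(params, 3)}  log-likelihood = {loglik:.1f}")
```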

  4. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.

  5. The 4s web-marketing mix model

    OpenAIRE

    Constantinides, Efthymios

    2002-01-01

    This paper reviews the criticism of the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections to using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any st...

  6. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  7. Mixed-effects height–diameter models for ten conifers in the inland ...

    African Journals Online (AJOL)

    To demonstrate the utility of mixed-effects height–diameter models when conducting forest inventories, mixed-effects height–diameter models are presented for several commercially and ecologically important conifers in the inland Northwest of the USA. After obtaining height–diameter measurements from a plot/stand of ...

  8. A Nonlinear Mixed Effects Model for the Prediction of Natural Gas Consumption by Individual Customers

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek

    2008-01-01

    Roč. 24, č. 4 (2008), s. 659-678 ISSN 0169-2070 R&D Projects: GA AV ČR 1ET400300513 Institutional research plan: CEZ:AV0Z10300504 Keywords : individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling Subject RIV: JE - Non-nuclear Energetics, Energy Consumption ; Use Impact factor: 1.685, year: 2008

  9. Mildly mixed coupled models vs. WMAP7 data

    International Nuclear Information System (INIS)

    La Vacca, Giuseppe; Bonometto, Silvio A.

    2011-01-01

    Mildly mixed coupled models include massive ν's and CDM-DE coupling. We present new tests of their likelihood against recent data including WMAP7, confirming that it exceeds that of ΛCDM, although only at the ∼2σ level. We then show the impact on the physics of the dark components of ν-mass detection in ³H β-decay or 0νββ-decay experiments.

  10. Investigation of coolant mixing in WWER-440/213 RPV with improved turbulence model

    International Nuclear Information System (INIS)

    Kiss, B.; Aszodi, A.

    2011-01-01

    A detailed and complex RPV model of the WWER-440/213 type reactor was developed at the Budapest University of Technology and Economics, Institute of Nuclear Techniques, in previous years. This model contains the main structural elements, such as the inlet and outlet nozzles, the guide baffles of the hydro-accumulator coolant, alignment drifts, perforated plates, the brake- and guide-tube chamber and a simplified core. With the new vessel model a series of parameter studies was performed in ANSYS CFX, considering turbulence models, discretization schemes, and modeling methods. In the course of the parameter studies the coolant mixing in the RPV was investigated. The coolant flow was 'traced' with different scalar concentrations at the inlet nozzles and its distribution was calculated at the core bottom. The simulation results were compared with measured mixing factor data from the PAKS NPP (available from the FLOMIX project). Based on the comparison, the SST turbulence model, which unifies the advantages of the two-equation k-ω and k-ε models, was chosen for the further simulations. The most widely used turbulence models are Reynolds-averaged Navier-Stokes models, which are based on time-averaging of the equations. Time-averaging filters out all turbulent scales from the simulation, and the effect of turbulence on the mean flow is then re-introduced through appropriate modeling assumptions. Because of this characteristic of the SST turbulence model, a decision was made in 2011 to investigate the coolant mixing with an improved turbulence model as well. The hybrid SAS-SST turbulence model was chosen, which is capable of resolving large-scale turbulent structures without the time and grid-scale resolution restrictions of LES, often allowing the use of existing grids created for Reynolds-averaged Navier-Stokes simulations. As a first step, the coolant mixing was investigated in the downcomer only. Eddies occur after the loop connections because of the steep change in flow direction. This turbulent, vertiginous flow was

  11. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
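
    A minimal sketch of the idea, assuming a single knot and only a random intercept per subject: build a truncated-power (piecewise linear) spline basis by hand and fit it as fixed effects in statsmodels' MixedLM. This is far simpler than the varying-order, smoothness-constrained formulation proposed in the paper, and the data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated longitudinal data: 30 subjects, 8 visits each, a change in slope at t = 4
n_subj, n_obs, knot = 30, 8, 4.0
t = np.tile(np.arange(n_obs, dtype=float), n_subj)
subj = np.repeat(np.arange(n_subj), n_obs)
u = rng.normal(0.0, 1.0, n_subj)[subj]                    # subject-specific random intercepts
y = 2.0 + 0.5 * t - 0.8 * np.maximum(t - knot, 0) + u + rng.normal(0.0, 0.5, t.size)

df = pd.DataFrame({"y": y, "t": t, "t_plus": np.maximum(t - knot, 0), "subj": subj})

# Piecewise-linear spline in the fixed effects, random intercept for each subject
fit = smf.mixedlm("y ~ t + t_plus", data=df, groups=df["subj"]).fit()
print(fit.params)
```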

  12. Fermion masses and flavor mixings in a model with S4 flavor symmetry

    International Nuclear Information System (INIS)

    Ding Guijun

    2010-01-01

    We present a supersymmetric model of quarks and leptons based on an S4 × Z3 × Z4 flavor symmetry. The S4 symmetry is broken down to the Klein four and Z3 subgroups in the neutrino and the charged lepton sectors, respectively. Tri-bimaximal mixing and the charged lepton mass hierarchies are reproduced simultaneously at leading order. Moreover, a realistic pattern of quark masses and mixing angles is generated, with the exception of the mixing angle between the first two generations, which requires a small accidental enhancement. It is remarkable that the mass hierarchies are controlled by the spontaneous breaking of the flavor symmetry in our model. The next-to-leading-order contributions are studied; all the fermion masses and mixing angles receive corrections of relative order λ_c² with respect to the leading order results. The phenomenological consequences of the model are analyzed: the neutrino mass spectrum can have normal or inverted hierarchy, and the combined measurement of the 0ν2β decay effective mass m_ββ and the lightest neutrino mass can distinguish the normal hierarchy from the inverted hierarchy.

  13. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
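
    The sketch below illustrates the general Bayesian Monte Carlo mixing idea rather than the authors' released code: candidate source fractions are drawn from a flat Dirichlet prior, end-member compositions are perturbed within an assumed uncertainty, and draws are kept when the predicted mixture matches the measured tracers within measurement error. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical end-members (rows = 3 sources, columns = 2 tracers) with 1-sigma uncertainties
src_mean = np.array([[-22.0, -165.0],
                     [-18.0, -140.0],
                     [-12.0, -100.0]])
src_sd = np.full_like(src_mean, 0.8)
obs = np.array([-17.5, -136.0])     # measured mixture
obs_sd = np.array([0.5, 3.0])       # measurement uncertainty

kept = []
for _ in range(50_000):
    f = rng.dirichlet(np.ones(3))                  # flat prior on the source fractions
    src = rng.normal(src_mean, src_sd)             # propagate end-member uncertainty
    pred = f @ src
    if np.all(np.abs(pred - obs) < 2.0 * obs_sd):  # simple acceptance criterion
        kept.append(f)

kept = np.array(kept)
print("accepted draws:", len(kept))
print("posterior mean fractions:", np.round(kept.mean(axis=0), 3))
```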

  14. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    Science.gov (United States)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process, in which ice crystals grow by vapor deposition at the expense of super-cooled droplets, is expected to be inhomogeneous in nature (some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]) and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low-probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as

  15. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational fluid dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  16. Modelling of Wheat-Flour Dough Mixing as an Open-Loop Hysteretic Process

    Czech Academy of Sciences Publication Activity Database

    Anderssen, R.; Kružík, Martin

    2013-01-01

    Roč. 18, č. 2 (2013), s. 283-293 ISSN 1531-3492 R&D Projects: GA AV ČR IAA100750802 Keywords : Dissipation * Dough mixing * Rate-independent systems Subject RIV: BA - General Mathematics Impact factor: 0.628, year: 2013 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-modelling of wheat-flour dough mixing as an open-loop hysteretic process.pdf

  17. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia

    2009-01-01

    , hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version, mainly based on available primary data from the extended data freeze L2.3 at Laxemar (November 30 2007). The data interpretation was carried out during November 2007 to September 2008. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling including transport modelling. During regular ChemNet meetings the results have been presented and discussed. The original works by the ChemNet modellers are presented in four level III reports containing complementary information for the bedrock hydrogeochemistry Laxemar Site Descriptive Model (SDM-Site Laxemar, R-08-93) level II report. There is also a fifth level III report: Fracture mineralogy of the Laxemar area (R-08-99). This report presents the modelling work performed by the UZ (Univ. of Zaragoza) group as part of the work plan for Laxemar-Simpevarp 2.2 and 2.3. The main processes determining the global geochemical evolution of the Laxemar-Simpevarp groundwaters system are mixing and reaction processes. Mixing has taken place between different types of waters (end members) over time, making the discrimination of the main influences not always straightforward. Several lines of evidence suggest the input of dilute waters (cold or warm), at different stages, into a bedrock with pre-existing very saline groundwaters. Subsequently, marine water entered the system over the Littorina period (when the topography and the distance to the coast allowed it) and mixed with pre-existent groundwaters of variable salinity. In the Laxemar subarea mainland, the Littorina input occurred only locally and it has mostly been flushed out by the subsequent input of warm meteoric waters with a distinctive modern isotopic signature. In addition to mixing processes and superimposed to their effects, different

  18. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia (Univ. of Zaragoza, Zaragoza (Spain))

    2009-01-15

    , hydrochemistry, hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version, mainly based on available primary data from the extended data freeze L2.3 at Laxemar (November 30 2007). The data interpretation was carried out during November 2007 to September 2008. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling including transport modelling. During regular ChemNet meetings the results have been presented and discussed. The original works by the ChemNet modellers are presented in four level III reports containing complementary information for the bedrock hydrogeochemistry Laxemar Site Descriptive Model (SDM-Site Laxemar, R-08-93) level II report. There is also a fifth level III report: Fracture mineralogy of the Laxemar area (R-08-99). This report presents the modelling work performed by the UZ (Univ. of Zaragoza) group as part of the work plan for Laxemar-Simpevarp 2.2 and 2.3. The main processes determining the global geochemical evolution of the Laxemar-Simpevarp groundwaters system are mixing and reaction processes. Mixing has taken place between different types of waters (end members) over time, making the discrimination of the main influences not always straightforward. Several lines of evidence suggest the input of dilute waters (cold or warm), at different stages, into a bedrock with pre-existing very saline groundwaters. Subsequently, marine water entered the system over the Littorina period (when the topography and the distance to the coast allowed it) and mixed with pre-existent groundwaters of variable salinity. In the Laxemar subarea mainland, the Littorina input occurred only locally and it has mostly been flushed out by the subsequent input of warm meteoric waters with a distinctive modern isotopic signature. In addition to mixing processes and superimposed to their

  19. A dynamic analysis of interfuel substitution for Swedish heating plants

    International Nuclear Information System (INIS)

    Braennlund, R.; Lundgren, T.

    2000-01-01

    This paper estimates a dynamic model of interfuel substitution for Swedish heating plants. We use the cost-share linear logit model developed by Considine and Mount. All estimated own-price elasticities are negative and all cross-price elasticities are positive. The estimated dynamic adjustment rate parameter is small, but increases with the size of the plant and over time, indicating fast adjustments in the fuel mix when relative fuel prices change. The estimated model is used to illustrate the effects of two different policy changes.

  20. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value C_φ = 12. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice of C_φ, more sophisticated models for the scalar time scale have been used in simulations with the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven to be very promising as a scalar time scale model that can be applied from flamelet to distributed combustion. (author)
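
    For orientation, the sketch below implements the simplest of the three mixing models named above, the IEM model, for an ensemble of notional particles: each particle's scalar relaxes toward the ensemble mean at a rate set by C_φ and a mixing frequency. Reaction, transport and the CD/EMST alternatives are deliberately left out, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n_particles, n_steps, dt = 5000, 200, 1.0e-3
c_phi, omega = 12.0, 50.0            # mixing-model constant and mixing frequency (1/s)

# Bimodal initial scalar field, e.g. a reaction progress variable of 0 or 1
phi = rng.choice([0.0, 1.0], size=n_particles)

for _ in range(n_steps):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt   # IEM relaxation toward the mean

print("mean =", round(float(phi.mean()), 3), "variance =", round(float(phi.var()), 6))
```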

  1. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  2. Sensitivity of surface temperature to radiative forcing by contrail cirrus in a radiative-mixing model

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2017-11-01

    Full Text Available Earth's surface temperature sensitivity to radiative forcing (RF) by contrail cirrus and the related RF efficacy relative to CO2 are investigated in a one-dimensional idealized model of the atmosphere. The model includes energy transport by shortwave (SW) and longwave (LW) radiation and by mixing in an otherwise fixed reference atmosphere (no other feedbacks). Mixing includes convective adjustment and turbulent diffusion, where the latter is related to the vertical component of mixing by large-scale eddies. The conceptual study shows that the surface temperature sensitivity to given contrail RF depends strongly on the timescales of energy transport by mixing and radiation. The timescales are derived for steady layered heating (ghost forcing) and for a transient contrail cirrus case. The radiative timescales are shortest at the surface and shorter in the troposphere than in the mid-stratosphere. Without mixing, a large part of the energy induced into the upper troposphere by radiation due to contrails or similar disturbances gets lost to space before it can contribute to surface warming. Because of the different radiative forcing at the surface and at the top of the atmosphere (TOA) and the different radiative heating rate profiles in the troposphere, the local surface temperature sensitivity to stratosphere-adjusted RF is larger for SW than for LW contrail forcing. Without mixing, the surface energy budget is more important for surface warming than the TOA budget. Hence, surface warming by contrails is smaller than suggested by the net RF at TOA. For zero mixing, cooling by contrails cannot be excluded. This may in part explain low efficacy values for contrails found in previous global circulation model studies. Possible implications of this study are discussed. Since the results of this study are model dependent, they should be tested with a comprehensive climate model in the future.

  3. CP violation and flavour mixing in the standard model

    International Nuclear Information System (INIS)

    Ali, A.; London, D.

    1995-08-01

    We review and update the constraints on the parameters of the quark flavour mixing matrix V_CKM in the standard model and estimate the resulting CP asymmetries in B decays, taking into account recent experimental and theoretical developments. In performing our fits, we use inputs from the measurements of the following quantities: (i) |ε|, the CP-violating parameter in K decays; (ii) ΔM_d, the mass difference due to B⁰_d-anti-B⁰_d mixing; (iii) the matrix elements |V_cb| and |V_ub|; (iv) B-hadron lifetimes; and (v) the top quark mass. The experimental input in points (ii)-(v) has improved compared to our previous fits. With the updated CKM matrix we present the currently allowed range of the ratios |V_td/V_ts| and |V_td/V_ub|, as well as the standard model predictions for the B⁰_s-anti-B⁰_s mixing parameter x_s (or, equivalently, ΔM_s) and the quantities sin 2α, sin 2β and sin 2γ, which characterize the CP asymmetries in B decays. Various theoretical issues related to the so-called "penguin pollution", which are of importance for the determination of the phases α and γ from the CP asymmetries in B decays, are also discussed. (orig.)

  4. Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models

    Science.gov (United States)

    Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.

    2016-02-01

    Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focusses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient A_Redi which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m²/s. Observations, however, suggest values that are much larger. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying A_Redi on the amplitude of El Nino. We examine the effect of varying a spatially constant A_Redi over a range of values similar to that seen in the IPCC AR5 models, as well as looking at two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of A_Redi should damp anomalies is borne out in the model, it is more than compensated by a weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the warm pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.

  5. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere–ocean–wave model

    Directory of Open Access Journals (Sweden)

    K. R. Prakash

    2018-04-01

    Full Text Available A coupled atmosphere–ocean–wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB during 10–14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere–ocean–wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere–ocean–wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave–current interaction and nonlinear wave–wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  6. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere-ocean-wave model

    Science.gov (United States)

    Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh

    2018-04-01

    A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  7. Ocean bio-geophysical modeling using mixed layer-isopycnal general circulation model coupled with photosynthesis process

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.

    -chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....

  8. An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames

    Science.gov (United States)

    Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.

    2016-11-01

    The transported probability density function (TPDF) method offers generality across all combustion regimes, which is attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while at the same time aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparison with both direct numerical simulation (DNS) results and a conventional constant mechanical-to-scalar mixing rate model. This work is granted by NSFC 51476087 and 91441202.

  9. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    Science.gov (United States)

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data, featuring an excessive proportion of zeros and right-skewed continuous positive values, arise frequently in practice. One example is substance abuse/dependence symptoms data, for which a substantial proportion of the subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
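
    The two-part structure described above is easy to illustrate by simulation. The sketch below is not the authors' Bayesian model; it assumes a logistic part for occurrence and a log-normal part for intensity (a simple stand-in for the skew-t/skew-normal errors), with a shared subject-level random effect linking the two parts.

```python
import numpy as np

# Minimal simulation of two-part semicontinuous data (sketch, not the paper's model).
# Part I: logistic model for P(y > 0); Part II: log-scale linear model for the
# intensity of positive values. A shared random intercept b links the two parts.
rng = np.random.default_rng(0)
n_subjects, n_visits = 100, 4
b = rng.normal(0.0, 1.0, n_subjects)               # subject-level random effects
time = np.tile(np.arange(n_visits), n_subjects)
subj = np.repeat(np.arange(n_subjects), n_visits)

eta_occ = -0.5 + 0.3 * time + 0.8 * b[subj]        # Part I linear predictor
occurs = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_occ)))

eta_int = 1.0 + 0.2 * time + 0.5 * b[subj]         # Part II linear predictor (log scale)
intensity = np.exp(eta_int + rng.normal(0.0, 0.6, size=eta_int.shape))

y = occurs * intensity                             # semicontinuous outcome
print(f"proportion of zeros: {np.mean(y == 0):.2f}")
```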

  10. A New Model for Inclusive Sports? An Evaluation of Participants’ Experiences of Mixed Ability Rugby

    Directory of Open Access Journals (Sweden)

    Martino Corazza

    2017-06-01

    Full Text Available Sport has been recognised as a potential catalyst for social inclusion. The Mixed Ability Model represents an innovative approach to inclusive sport by encouraging disabled and non-disabled players to interact in a mainstream club environment. However, research around the impacts of the Model is currently lacking. This paper aims to contribute empirical data to this gap by evaluating participants' experiences of Mixed Ability Rugby and highlighting implications for future initiatives. Primary qualitative data were collected within two Mixed Ability Rugby teams in the UK and Italy through online questionnaires and focus groups. Data were analysed using Simplican et al.'s (2015) model of social inclusion. Data show that Mixed Ability Rugby has significant potential for achieving inclusionary outcomes. Positive social impacts, reported by all participants, regardless of (dis)ability, include enhanced social networks, an increase in social capital, personal development and fundamental perception shifts. Factors relevant to the Mixed Ability Model are identified that enhance these impacts and inclusionary outcomes. The mainstream setting was reportedly the most important, with further aspects including a supportive club environment and promotion of self-advocacy. A ‘Wheel of Inclusion’ is developed that provides a useful basis for evaluating current inclusive sport initiatives and for designing new ones.

  11. Validation of mixing heights derived from the operational NWP models at the German weather service

    Energy Technology Data Exchange (ETDEWEB)

    Fay, B.; Schrodin, R.; Jacobsen, I. [Deutscher Wetterdienst, Offenbach (Germany); Engelbart, D. [Deutscher Wetterdienst, Meteorol. Observ. Lindenberg (Germany)

    1997-10-01

    NWP models incorporate an ever-increasing number of observations via four-dimensional data assimilation and are capable of providing comprehensive information about the atmosphere both in space and time. They describe not only near-surface parameters but also the vertical structure of the atmosphere. They operate daily, are well verified and successfully used as meteorological pre-processors in large-scale dispersion modelling. Applications like ozone forecasts, emission or power plant control calculations require highly resolved, reliable, and routine values of the temporal evolution of the mixing height (MH) which is a critical parameter in determining the mixing and transformation of substances and the resulting pollution levels near the ground. The purpose of development at the German Weather Service is a straightforward mixing height scheme that uses only parameters derived from NWP model variables and thus automatically provides spatial and temporal fields of mixing heights on an operational basis. A universal parameter to describe stability is the Richardson number Ri. Compared to the usual diagnostic or rate equations, the Ri number concept of determining mixing heights has the advantage of using not only surface-layer parameters but also the vertical structure of the boundary layer resolved in the NWP models. (au)
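
    The Richardson number concept can be sketched in a few lines: the mixing height is taken as the lowest level at which a bulk Richardson number, computed from NWP-type profiles, exceeds a critical value. The profile, the 0.25 threshold and the exact bulk formula below are illustrative assumptions; the operational scheme at the German Weather Service is not reproduced here.

```python
import numpy as np

def bulk_richardson(z, theta_v, u, v, g=9.81):
    """Bulk Richardson number of each level relative to the lowest level."""
    dtheta = theta_v - theta_v[0]
    du2 = np.maximum((u - u[0]) ** 2 + (v - v[0]) ** 2, 1e-6)  # avoid division by zero
    return g * z * dtheta / (theta_v[0] * du2)

def mixing_height(z, theta_v, u, v, ri_crit=0.25):
    """Lowest level at which the bulk Richardson number exceeds the critical value."""
    ri = bulk_richardson(z, theta_v, u, v)
    above = np.where(ri > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

# Toy profiles: weak stability below an inversion centred near 800 m.
z = np.arange(10.0, 2000.0, 50.0)                  # heights above ground (m)
theta_v = 290.0 + 0.001 * z + 3.0 / (1.0 + np.exp(-(z - 800.0) / 50.0))
u = 2.0 + 3.0 * np.log(z / 10.0 + 1.0)             # wind speed profile (m/s)
v = np.zeros_like(z)
print(f"estimated mixing height: {mixing_height(z, theta_v, u, v):.0f} m")
```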

  12. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.

    2017-04-11

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
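
    The abstract does not specify which settling and viscosity closures were fitted; the sketch below uses two commonly assumed forms (a Richardson-Zaki hindered-settling law and a Krieger-Dougherty viscosity law) purely to illustrate how such volume-fraction-dependent closures look in code.

```python
import numpy as np

# Illustrative closures only; the paper fits its own settling and viscosity
# models to experimental data, and those fits are not given in the abstract.
def hindered_settling_velocity(phi, v_stokes=1.0e-4, n=4.65):
    """Richardson-Zaki-type hindered settling velocity (m/s) vs. solids fraction."""
    return v_stokes * (1.0 - phi) ** n

def suspension_viscosity(phi, mu_fluid=1.0e-3, phi_max=0.6, intrinsic=2.5):
    """Krieger-Dougherty effective viscosity (Pa s) vs. solids fraction."""
    return mu_fluid * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

for phi in np.linspace(0.0, 0.5, 6):
    print(f"phi={phi:.1f}  v_settle={hindered_settling_velocity(phi):.2e} m/s  "
          f"mu={suspension_viscosity(phi):.2e} Pa.s")
```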

  13. Metabolic modeling of mixed substrate uptake for polyhydroxyalkanoate (PHA) production

    NARCIS (Netherlands)

    Jiang, Y.; Hebly, M.; Kleerebezem, R.; Muyzer, G.; van Loosdrecht, M.C.M.

    2011-01-01

    Polyhydroxyalkanoate (PHA) production by mixed microbial communities can be established in a two-stage process, consisting of a microbial enrichment step and a PHA accumulation step. In this study, a mathematical model was constructed for evaluating the influence of the carbon substrate composition

  14. Willingness to pay for travel time reduction in Tunja (Colombia): a comparison between students and workers using a mixed logit model

    OpenAIRE

    Luis Gabriel Márquez Díaz

    2013-01-01

    Abstract: The study analyses the difference in willingness to pay for travel time reductions between students and workers, in a transport mode choice context for the city of Tunja (Colombia). A mixed logit model was used, calibrated with data from a stated preference survey. The model specification allowed random variation in the coefficients for access time, waiting time, and travel time. It was found that the willingness to p...
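
    A mixed logit with a random travel-time coefficient is usually evaluated by simulation, and willingness to pay follows as the ratio of the time and cost coefficients. The sketch below is not the study's model; the coefficient values, their distribution and the two alternatives are invented for illustration.

```python
import numpy as np

# Simulated-probability sketch of a mixed logit with a random travel-time
# coefficient. All numbers are illustrative, not the study's estimates.
rng = np.random.default_rng(1)
R = 2000                                         # random draws
beta_cost = -0.8                                 # fixed cost coefficient (per 1000 cost units)
beta_time = rng.normal(-0.05, 0.02, size=R)      # random travel-time coefficient (per min)

# Two alternatives: bus (40 min, cost 1.5) versus taxi (15 min, cost 4.0).
v_bus = beta_time * 40 + beta_cost * 1.5
v_taxi = beta_time * 15 + beta_cost * 4.0
p_taxi = np.mean(np.exp(v_taxi) / (np.exp(v_bus) + np.exp(v_taxi)))

# Willingness to pay for a one-minute travel-time saving is beta_time / beta_cost.
wtp = beta_time / beta_cost
print(f"simulated P(taxi) = {p_taxi:.3f}, mean WTP = {wtp.mean():.3f} (x1000) per minute")
```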

  15. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  16. Particle-Resolved Modeling of Aerosol Mixing State in an Evolving Ship Plume

    Science.gov (United States)

    Riemer, N. S.; Tian, J.; Pfaffenberger, L.; Schlager, H.; Petzold, A.

    2011-12-01

    The aerosol mixing state is important since it impacts the particles' optical and CCN properties and thereby their climate impact. It evolves continuously during the particles' residence time in the atmosphere as a result of coagulation with other particles and condensation of secondary aerosol species. This evolution is challenging to represent in traditional aerosol models since they require the representation of a multi-dimensional particle distribution. While modal or sectional aerosol representations cannot practically resolve the aerosol mixing state for more than a few species, particle-resolved models store the composition of many individual aerosol particles directly. They thus sample the high-dimensional composition state space very efficiently and so can deal with tens of species, fully resolving the mixing state. Here we use the capabilities of the particle-resolved model PartMC-MOSAIC to simulate the evolution of particulate matter emitted from marine diesel engines and compare the results to aircraft measurements made in the English Channel in 2007 as part of the European campaign QUANTIFY. The model was initialized with values of gas concentrations and particle size distributions and compositions representing fresh ship emissions. These values were obtained from a test rig study in the European project HERCULES in 2006 using a serial four-stroke marine diesel engine operating on high-sulfur heavy fuel oil. The freshly emitted particles consisted of sulfate, black carbon, organic carbon and ash. We then tracked the particle population for several hours as it evolved undergoing coagulation, dilution with the background air, and chemical transformations in the aerosol and gas phase. This simulation was used to compute the evolution of CCN properties and optical properties of the plume on a per-particle basis. We compared our results to size-resolved data of aged ship plumes from the QUANTIFY Study in 2007 and showed that the model was able to reproduce

  17. Individual taper models for natural cedar and Taurus fir mixed stands of Bucak Region, Turkey

    Directory of Open Access Journals (Sweden)

    Ramazan Özçelik

    2017-11-01

    Full Text Available In this study, we assessed the performance of different types of taper equations for predicting tree diameters at specific heights and total stem volumes for mixed stands of Taurus cedar (Cedrus libani A. Rich.) and Taurus fir (Abies cilicica Carr.). We used data from mixed stands containing a total of 131 cedar and 124 Taurus fir trees. We evaluated six commonly used and well-known forestry taper functions developed by a variety of researchers (Biging (1984), Zakrzewski (1999), Muhairwe (1999), Fang et al. (2000), Kozak (2004), and Sharma and Zhang (2004)). To address problems related to autocorrelation and multicollinearity in the hierarchical data associated with the construction of taper models, we used appropriate statistical procedures for the model fitting. We compared model performances based on the analysis of three goodness-of-fit statistics and found the compatible segmented model of Fang et al. (2000) to be superior in describing the stem profile and stem volume of both tree species in mixed stands. The equation used by Zakrzewski (1999) exhibited the poorest fitting results of the three taper equations. In general, we found segmented taper equations to provide more accurate predictions than variable-form models for both tree species. Results from the non-linear extra sum of squares method indicate that stem tapers differ among tree species in mixed stands. Therefore, a different taper function should be used for each tree species in mixed stands in the Bucak district. Using individual-specific taper equations yields more robust estimations and, therefore, will enhance the prediction accuracy of diameters at different heights and volumes in mixed stands.

  18. Normal and Special Models of Neutrino Masses and Mixings

    CERN Document Server

    Altarelli, Guido

    2005-01-01

    One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.

  19. A Linear Mixed-Effects Model of Wireless Spectrum Occupancy

    Directory of Open Access Journals (Sweden)

    Pagadarai Srikanth

    2010-01-01

    Full Text Available We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.
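
    A model of this kind can be fitted with standard mixed-model software. The sketch below uses statsmodels with simulated data and hypothetical column names (occupancy, hour, band, site); it is not the study's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept model in the spirit of the study: fixed effects for frequency
# band and hour of day, random intercepts for measurement site. Data are simulated
# and all column names are hypothetical.
rng = np.random.default_rng(2)
n_sites, n_obs = 8, 200
site = np.repeat(np.arange(n_sites), n_obs)
hour = rng.integers(0, 24, size=site.size)
band = rng.integers(0, 4, size=site.size)
occupancy = (20 + 2.0 * band + 0.3 * hour
             + rng.normal(0, 5, n_sites)[site] + rng.normal(0, 3, site.size))

df = pd.DataFrame({"occupancy": occupancy, "hour": hour, "band": band, "site": site})
result = smf.mixedlm("occupancy ~ hour + C(band)", df, groups=df["site"]).fit()
print(result.summary())
```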

  20. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    Science.gov (United States)

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
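
    The transformation step itself is straightforward to illustrate; the sketch below only shows the Box-Cox fit and the back-transformation of a model-scale median with scipy, on simulated data, and does not reproduce the paper's marginal-model inference or robust variance estimator.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Box-Cox step only; the paper's inference procedure is not reproduced.
rng = np.random.default_rng(3)
y = rng.lognormal(mean=3.0, sigma=0.7, size=500)   # right-skewed positive response

y_bc, lam = stats.boxcox(y)                        # ML estimate of the transform parameter
print(f"estimated Box-Cox lambda: {lam:.3f}")

# A median on the transformed scale maps back to a median on the original scale,
# which is what makes model-median treatment comparisons interpretable.
print(f"back-transformed median: {inv_boxcox(np.median(y_bc), lam):.2f}")
```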

  1. An integrated logit model for contamination event detection in water distribution systems.

    Science.gov (United States)

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches, including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated by maximum likelihood, to integrate the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework on a training data set using genetic algorithms. The fusing of the individual indicator probabilities, which is left out of focus in many existing event detection system models, is confirmed to be a crucial part of the system and can be modelled with a discrete choice model to improve performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in the ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
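
    The fusion idea can be sketched as a single logistic (discrete choice) layer over the per-indicator alarm probabilities. The weights and bias below are placeholders; in the paper the fusion is calibrated jointly with the rest of the event detection system by a genetic algorithm on training data.

```python
import numpy as np

def fuse_alarms(indicator_probs, weights, bias):
    """Combine single-indicator event probabilities through one logistic layer."""
    logits = np.log(indicator_probs / (1.0 - indicator_probs))
    z = bias + np.dot(weights, logits)
    return 1.0 / (1.0 + np.exp(-z))

# Per-indicator event probabilities (e.g. chlorine, turbidity, pH detectors) for
# one time step; weights and bias are placeholders for calibrated values.
p_single = np.clip(np.array([0.62, 0.55, 0.20]), 1e-6, 1 - 1e-6)
weights = np.array([1.2, 0.8, 0.5])
bias = -0.4

print(f"fused event probability: {fuse_alarms(p_single, weights, bias):.3f}")
```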

  2. A day in the city : using conjoint choice experiments to model urban tourists' choice of activity packages

    NARCIS (Netherlands)

    Dellaert, B.G.C.; Borgers, A.W.J.; Timmermans, H.J.P.

    1995-01-01

    This paper introduces and tests a conjoint choice experiment approach to modeling urban tourists' choice of activity packages. The joint logit model is introduced as a tool to model choices between combinations of activities and an experimental design approach is proposed that includes attributes

  3. Mixing Phenomena in a Bottom Blown Copper Smelter: A Water Model Study

    Science.gov (United States)

    Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Akbar Rhamdhani, M.; Nguyen, Anh; Zhao, Baojun

    2015-03-01

    The first commercial bottom blown oxygen copper smelting furnace has been installed and operated at Dongying Fangyuan Nonferrous Metals since 2008. Significant advantages have been demonstrated in this technology mainly due to its bottom blown oxygen-enriched gas. In this study, a scaled-down 1:12 model was set up to simulate the flow behavior for understanding the mixing phenomena in the furnace. A single lance was used in the present study for gas blowing to establish a reliable research technique and quantitative characterisation of the mixing behavior. Operating parameters such as horizontal distance from the blowing lance, detector depth, bath height, and gas flow rate were adjusted to investigate the mixing time under different conditions. It was found that when the horizontal distance between the lance and detector is within an effective stirring range, the mixing time decreases slightly with increasing the horizontal distance. Outside this range, the mixing time was found to increase with increasing the horizontal distance and it is more significant on the surface. The mixing time always decreases with increasing gas flow rate and bath height. An empirical relationship of mixing time as functions of gas flow rate and bath height has been established first time for the horizontal bottom blowing furnace.

  4. Modeling the oxygen uptake kinetics during exercise testing of patients with chronic obstructive pulmonary diseases using nonlinear mixed models

    DEFF Research Database (Denmark)

    Baty, Florent; Ritz, Christian; van Gestel, Arnoldus

    2016-01-01

    describe functionality of the R package medrc that extends the framework of the commonly used packages drc and nlme and allows fitting nonlinear mixed effects models for automated nonlinear regression modeling. The methodology was applied to a data set including 6MWT V̇O2 kinetics from 61...... patients with chronic obstructive pulmonary disease (disease severity stage II to IV). The mixed effects approach was compared to a traditional curve-by-curve approach. RESULTS: A six-parameter nonlinear regression model was jointly fitted to the set of V̇O2 kinetics. Significant...

  5. CP violation for electroweak baryogenesis from mixing of standard model and heavy vector quarks

    International Nuclear Information System (INIS)

    McDonald, J.

    1996-01-01

    It is known that the CP violation in the minimal standard model is insufficient to explain the observed baryon asymmetry of the Universe in the context of electroweak baryogenesis. In this paper we consider the possibility that the additional CP violation required could originate in the mixing of the standard model quarks and heavy vector quark pairs. We consider the baryon asymmetry in the context of the spontaneous baryogenesis scenario. It is shown that, in general, the CP-violating phase entering the mass matrix of the standard model and heavy vector quarks must be space dependent in order to produce a baryon asymmetry, suggesting that the additional CP violation must be spontaneous in nature. This is true for the case of the simplest models which mix the standard model and heavy vector quarks. We derive a charge potential term for the model by diagonalizing the quark mass matrix in the presence of the electroweak bubble wall, which turns out to be quite different from the fermionic hypercharge potentials usually considered in spontaneous baryogenesis models, and obtain the rate of baryon number generation within the wall. We find, for the particular example where the standard model quarks mix with weak-isodoublet heavy vector quarks via the expectation value of a gauge singlet scalar, that we can account for the observed baryon asymmetry with conservative estimates for the uncertain parameters of electroweak baryogenesis, provided that the heavy vector quarks are not heavier than a few hundred GeV and that the coupling of the standard model quarks to the heavy vector quarks and gauge singlet scalars is not much smaller than order of 1, corresponding to a mixing angle of the heavy vector quarks and standard model quarks not much smaller than order of 10⁻¹. copyright 1996 The American Physical Society

  6. Analysis of the type II robotic mixed-model assembly line balancing problem

    Science.gov (United States)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.

  7. A novel modeling approach to the mixing process in twin-screw extruders

    Science.gov (United States)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model for the mixing process in a self-wiping co-rotating twin screw extruder by combination of statistical techniques and mechanistic modelling has been proposed. The approach was to examine the mixing process in the local zones via residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increase in feed rate at constant screw speed was found to narrow the shape of the residence time distribution curve, reduction in the mean residence time and time delay and increase in the degree of fill. Increase in screw speed at constant feed rate was found to narrow the shape of the residence time distribution curve, decrease in the degree of fill in the extruder and thus an increase in the time delay. Experimental investigation was also done to validate the modeling approach.

  8. Linear mixed models in sensometrics

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra

    quality of decision making in Danish as well as international food companies and other companies using the same methods. The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer...... an open-source software tool ConsumerCheck was developed in this project and now is available for everyone. will represent a major step forward when concerns this important problem in modern consumer driven product development. Standard statistical software packages can be used for some of the purposes......Today’s companies and researchers gather large amounts of data of different kind. In consumer studies the objective is the collection of the data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products...

  9. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
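
    The solver at the core of the approach is a preconditioned conjugate gradient iteration. The sketch below is a generic Jacobi-preconditioned CG on an explicit matrix, not the authors' iteration-on-data implementation, in which the coefficient matrix is never stored.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.

    In breeding-value applications the product A @ p would be formed matrix-free
    by iterating on the data; here A is kept explicit for brevity.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small symmetric positive-definite test system standing in for mixed model equations.
rng = np.random.default_rng(4)
G = rng.normal(size=(50, 50))
A = G @ G.T + 50.0 * np.eye(50)
b = rng.normal(size=50)
x, n_iter = pcg(A, b, 1.0 / np.diag(A))
print(f"converged in {n_iter} iterations, residual {np.linalg.norm(A @ x - b):.2e}")
```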

  10. Testing the family-replication model through B⁰-B̄⁰ mixing

    International Nuclear Information System (INIS)

    Datta, A.; Pati, J.C.

    1985-07-01

    It is observed that the family-replication idea, proposed in the context of a minimal preon-model, necessarily implies a maximal mixing (i.e., ΔM >> Γ) either in the B_s^0-B̄_s^0 or the B_d^0-B̄_d^0 system, in contrast to the standard model. (author)

  11. BWR MARK I pressure suppression pool mixing and stratification analysis using GOTHIC lumped parameter modeling methodology

    International Nuclear Information System (INIS)

    Ozdemir, Ozkan Emre; George, Thomas L.

    2015-01-01

    As a part of the GOTHIC (GOTHIC incorporates technology developed for the electric power industry under the sponsorship of EPRI.) Fukushima Technical Evaluation project (EPRI, 2014a, b, 2015), GOTHIC (EPRI, 2014c) has been benchmarked against test data for pool stratification (EPRI, 2014a, b, Ozdemir and George, 2013). These tests confirmed GOTHIC’s ability to simulate pool mixing and stratification under a variety of anticipated suppression pool operating conditions. The multidimensional modeling requires long simulation times for events that may occur over a period of hours or days. For these scenarios a lumped model of the pressure suppression chamber is desirable to maintain reasonable simulation times. However, a lumped model for the pool is not able to predict the effects of pool stratification that can influence the overall containment response. The main objective of this work is on the development of a correlation that can be used to estimate pool mixing and stratification effects in a lumped modeling approach. A simplified lumped GOTHIC model that includes a two zone model for the suppression pool with controlled circulation between the upper and lower zones was constructed. A pump and associated flow connections are included to provide mixing between the upper and lower pool volumes. Using numerically generated data from a multidimensional GOTHIC model for the suppression pool, a correlation was developed for the mixing rate between the upper and lower pool volumes in a two-zone, lumped model. The mixing rate depends on the pool subcooling, the steam injection rate and the injection depth

  12. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, commonly assumed symmetric distributions for model errors are substituted by asymmetric distribution to account for skewness. Further, informative missing data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  13. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we call trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection asymmetric tail dependence, and computational feasibility despite their three dimensionality.

  14. Analysis of a PDF model in a mixing layer case

    International Nuclear Information System (INIS)

    Minier, J.P.; Pozorski, J.

    1996-04-01

    A recent turbulence model put forward by Pope (1991) in the context of PDF modeling has been applied to a mixing layer case. This model solves the one-point joint velocity-dissipation pdf equation by simulating the instantaneous behaviour of a large number of Lagrangian fluid particles. Closure of the evolution equations of these Lagrangian particles is based on diffusion stochastic processes. The paper reports numerical results and tries to analyse the physical meaning of some variables, in particular the dissipation-weighted kinetic energy and its relation with external intermittency. (authors). 14 refs., 7 figs

  15. A size-composition resolved aerosol model for simulating the dynamics of externally mixed particles: SCRAM (v 1.0)

    Science.gov (United States)

    Zhu, S.; Sartelet, K. N.; Seigneur, C.

    2015-06-01

    The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.

  16. Large mixing of light and heavy neutrinos in seesaw models and the LHC

    International Nuclear Information System (INIS)

    He Xiaogang; Oh, Sechul; Tandean, Jusak; Wen, C.-C.

    2009-01-01

    In the type-I seesaw model the size of mixing between light and heavy neutrinos, ν and N, respectively, is of order the square root of their mass ratio, (m_ν/m_N)^{1/2}, with only one generation of the neutrinos. Since the light-neutrino mass must be less than an eV or so, the mixing would be very small, even for a heavy-neutrino mass of order a few hundred GeV. This would make it unlikely to test the model directly at the LHC, as the amplitude for producing the heavy neutrino is proportional to the mixing size. However, it has been realized for some time that, with more than one generation of light and heavy neutrinos, the mixing can be significantly larger in certain situations. In this paper we explore this possibility further and consider specific examples in detail in the context of type-I seesaw. We study its implications for the single production of the heavy neutrinos at the LHC via the main channel qq̄' → W* → lN involving an ordinary charged lepton l. We then extend the discussion to the type-III seesaw model, which has richer phenomenology due to the presence of the charged partners of the heavy neutrinos, and examine the implications for the single production of these heavy leptons at the LHC. In the latter model the new kinds of solutions that we find also make it possible to have sizable flavor-changing neutral-current effects in processes involving ordinary charged leptons.

  17. Estimating preferences for local public services using migration data.

    Science.gov (United States)

    Dahlberg, Matz; Eklöf, Matias; Fredriksson, Peter; Jofre-Monseny, Jordi

    2012-01-01

    Using Swedish micro data, the paper examines the impact of local public services on community choice. The choice of community is modelled as a choice between a discrete set of alternatives. It is found that, given taxes, high spending on child care attracts migrants. Less conclusive results are obtained with respect to the role of spending on education and elderly care. High local taxes deter migrants. Relaxing the independence of the irrelevant alternatives assumption, by estimating a mixed logit model, has a significant impact on the results.

  18. Deviations from tribimaximal mixing due to the vacuum expectation value misalignment in A4 models

    International Nuclear Information System (INIS)

    Barry, James; Rodejohann, Werner

    2010-01-01

    The addition of an A4 family symmetry and extended Higgs sector to the standard model can generate the tribimaximal mixing pattern for leptons, assuming the correct vacuum expectation value alignment of the Higgs scalars. Deviating from this alignment affects the predictions for the neutrino oscillation and neutrino mass observables. An attempt is made to classify the plethora of models in the literature, with respect to the chosen A4 particle assignments. Of these models, two particularly popular examples have been analyzed for deviations from tribimaximal mixing by perturbing the vacuum expectation value alignments. The effect of perturbations on the mixing angle observables is studied. However, it is only investigation of the mass-related observables (the effective mass for neutrinoless double beta decay and the sum of masses from cosmology) that can lead to the exclusion of particular models by constraints from future data, which indicates the importance of neutrino mass in disentangling models. The models have also been tested for fine-tuning of the parameters. Furthermore, a well-known seesaw model is generalized to include additional scalars, which transform as representations of A4 not included in the original model.

  19. TAFV Alternative Fuels and Vehicles Choice Model Documentation; TOPICAL

    International Nuclear Information System (INIS)

    Greene, D.L.

    2001-01-01

    A model for predicting choice among alternative fuels and alternative vehicle technologies for light-duty motor vehicles is derived. The nested multinomial logit (NML) mathematical framework is used. Calibration of the model is based on information in the existing literature and on deduction from assumed values of a small number of key parameters, such as the value of time and discount rates. A spreadsheet model has been developed for calibration and preliminary testing of the model.
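
    A nested multinomial logit assigns choice probabilities in two stages: across nests via their logsums, and within a nest via scaled utilities. The sketch below uses invented utilities, nests and scale parameters for fuel/vehicle alternatives; it is not the calibrated TAFV model.

```python
import numpy as np

def nested_logit_probs(utilities, nests, mu):
    """Choice probabilities for a two-level nested logit.

    utilities: dict alternative -> systematic utility
    nests: dict nest name -> list of alternatives
    mu: dict nest name -> within-nest scale (1.0 collapses to multinomial logit)
    """
    probs = {}
    logsums = {m: np.log(sum(np.exp(utilities[j] / mu[m]) for j in alts))
               for m, alts in nests.items()}
    denom = sum(np.exp(mu[m] * logsums[m]) for m in nests)
    for m, alts in nests.items():
        p_nest = np.exp(mu[m] * logsums[m]) / denom
        for j in alts:
            probs[j] = p_nest * np.exp(utilities[j] / mu[m]) / np.exp(logsums[m])
    return probs

# Hypothetical utilities for fuel/vehicle alternatives grouped by fuel type.
V = {"gasoline_car": -1.0, "hybrid": -1.2, "ethanol_ffv": -1.6, "electric": -1.8}
nests = {"conventional": ["gasoline_car", "hybrid"],
         "alternative": ["ethanol_ffv", "electric"]}
mu = {"conventional": 0.7, "alternative": 0.7}     # assumed within-nest scale parameters
print(nested_logit_probs(V, nests, mu))
```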

  20. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  1. Modeling of RFID-Enabled Real-Time Manufacturing Execution System in Mixed-Model Assembly Lines

    Directory of Open Access Journals (Sweden)

    Zhixin Yang

    2015-01-01

    Full Text Available To quickly respond to diverse product demands, mixed-model assembly lines are widely adopted in discrete manufacturing industries. Besides the complexity in material distribution, mixed-model assembly involves a variety of components, different process plans and fast production changes, which greatly increase the difficulty of agile production management. Aiming at breaking through the bottlenecks in existing production management, a novel RFID-enabled manufacturing execution system (MES), which features real-time and wireless information interaction capability, is proposed to identify various manufacturing objects, including WIPs, tools, and operators, and to trace their movements throughout the production processes. However, being subject to constraints in terms of safety stock, machine assignment, setup, and scheduling requirements, the optimization of the RFID-enabled MES model for production planning and scheduling issues is an NP-hard problem. A new heuristic generalized Lagrangian decomposition approach has been proposed for model optimization, which decomposes the model into three subproblems: computation of the optimal configuration of RFID sensor networks, optimization of production planning subject to machine setup cost and safety stock constraints, and optimization of scheduling for minimized overtime. RFID signal processing methods that can resolve unreliable, redundant, and missing tag events are also described in detail. The model validity is discussed through algorithm analysis and verified through numerical simulation. The proposed design scheme has important reference value for the applications of RFID in multiple manufacturing fields, and also lays a vital research foundation for leveraging digital and networked manufacturing systems towards intelligence.

  2. Choice experiments versus revealed choice models : a before-after study of consumer spatial shopping behavior

    NARCIS (Netherlands)

    Timmermans, H.J.P.; Borgers, A.W.J.; Waerden, van der P.J.H.J.

    1992-01-01

    The purpose of this article is to compare a set of multinomial logit models derived from revealed choice data and a decompositional choice model derived from experimental data in terms of predictive success in the context of consumer spatial shopping behavior. Data on consumer shopping choice

  3. Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting

    Science.gov (United States)

    Nanzad, Bolorchimeg

    This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus, need to be used with caution. Although the logit/probit transform generates high quality-of-fit correspondence with historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling tentatively termed "cycle-jumping" wherein overlap of multiple cycles is limited. The
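
    The logit transform described in Part I can be sketched as follows: for trial values of the ultimately recoverable resource Q_inf, the logit of the produced fraction Q/Q_inf is regressed linearly on time and the best-fitting Q_inf is retained. The production series and the search grid below are synthetic; this illustrates the linearization, not Rutledge's exact procedure.

```python
import numpy as np

# Logit-transform linearization of cumulative production (synthetic data): for
# trial values of the ultimate recovery Q_inf, logit(Q / Q_inf) is regressed on
# time and the Q_inf giving the most linear fit is kept.
rng = np.random.default_rng(5)
t = np.arange(1950, 2015)
true_Qinf, k, t0 = 1000.0, 0.08, 1995.0
Q = true_Qinf / (1.0 + np.exp(-k * (t - t0)))          # logistic cumulative production
Q *= np.exp(rng.normal(0.0, 0.01, t.size))             # small multiplicative noise

best = None
for Qinf in np.linspace(Q[-1] * 1.05, Q[-1] * 3.0, 200):
    y = np.log(Q / (Qinf - Q))                         # logit of the produced fraction
    slope, intercept = np.polyfit(t, y, 1)
    r2 = 1.0 - np.var(y - (slope * t + intercept)) / np.var(y)
    if best is None or r2 > best[0]:
        best = (r2, Qinf)

r2, Qinf_hat = best
print(f"estimated ultimate recovery ~ {Qinf_hat:.0f} (true {true_Qinf:.0f}), R^2 = {r2:.4f}")
```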

  4. Euler-Lagrange CFD modelling of unconfined gas mixing in anaerobic digestion.

    Science.gov (United States)

    Dapelo, Davide; Alberini, Federico; Bridgeman, John

    2015-11-15

    A novel Euler-Lagrangian (EL) computational fluid dynamics (CFD) finite volume-based model to simulate the gas mixing of sludge for anaerobic digestion is developed and described. Fluid motion is driven by momentum transfer from bubbles to liquid. Model validation is undertaken by assessing the flow field in a lab-scale model with particle image velocimetry (PIV). Conclusions are drawn about the upscaling and applicability of the model to full-scale problems, and recommendations are given for optimum application. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Full Text Available Model criticism is an important stage of model building, and goodness-of-fit tests provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
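
    The mixed ACF/PACF statistic proposed in the paper is not implemented here; the sketch below only shows the standard autocorrelation-based portmanteau (Ljung-Box) check that such tests build on, applied to residuals of an ARMA fit with statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Residual portmanteau check after fitting a correctly specified AR(1) model;
# large p-values are consistent with no remaining autocorrelation.
rng = np.random.default_rng(6)
n = 500
y = np.zeros(n)
e = rng.normal(size=n)
for i in range(1, n):                      # simulate an AR(1) series
    y[i] = 0.6 * y[i - 1] + e[i]

fit = ARIMA(y, order=(1, 0, 0)).fit()
print(acorr_ljungbox(fit.resid, lags=[10, 20]))
```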

  6. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic needs to resolve some important issues regarding limited fossil resources, greater efficiency in the production of electrical energy, and reduced emission levels of pollutants. These problems can be addressed only by formulating and implementing an energy mix that is rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects is applied. (Authors)
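
    The article's optimization model is not given in the abstract; as a stand-in, the sketch below solves a toy linear energy-mix problem with scipy, minimizing generation cost subject to a demand requirement and an emissions cap. All coefficients are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy energy-mix linear program: choose generation x_i (TWh) from coal, gas,
# nuclear and renewables to minimize cost subject to demand and a CO2 cap.
cost = np.array([40.0, 60.0, 55.0, 70.0])        # cost per TWh (arbitrary units)
emis = np.array([0.90, 0.40, 0.01, 0.02])        # Mt CO2 per TWh (illustrative)
capacity = np.array([45.0, 30.0, 35.0, 20.0])    # maximum output per source (TWh)
demand, co2_cap = 80.0, 30.0

res = linprog(
    c=cost,
    A_ub=np.vstack([emis, -np.ones(4)]),         # emissions cap; demand as -sum(x) <= -demand
    b_ub=np.array([co2_cap, -demand]),
    bounds=[(0.0, cap) for cap in capacity],
    method="highs",
)
print("optimal mix (TWh):", np.round(res.x, 1), " total cost:", round(res.fun, 1))
```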

  7. Models to understand the population-level impact of mixed strain M. tuberculosis infections.

    Science.gov (United States)

    Sergeev, Rinat; Colijn, Caroline; Cohen, Ted

    2011-07-07

    Over the past decade, numerous studies have identified tuberculosis patients in whom more than one distinct strain of Mycobacterium tuberculosis is present. While it has been shown that these mixed strain infections can reduce the probability of treatment success for individuals simultaneously harboring both drug-sensitive and drug-resistant strains, it is not yet known if and how this phenomenon impacts the long-term dynamics for tuberculosis within communities. Strain-specific differences in immunogenicity and associations with drug resistance suggest that a better understanding of how strains compete within hosts will be necessary to project the effects of mixed strain infections on the future burden of drug-sensitive and drug-resistant tuberculosis. In this paper, we develop a modeling framework that allows us to investigate mechanisms of strain competition within hosts and to assess the long-term effects of such competition on the ecology of strains in a population. These models permit us to systematically evaluate the importance of unknown parameters and to suggest priority areas for future experimental research. Despite the current scarcity of data to inform the values of several model parameters, we are able to draw important qualitative conclusions from this work. We find that mixed strain infections may promote the coexistence of drug-sensitive and drug-resistant strains in two ways. First, mixed strain infections allow a strain with a lower basic reproductive number to persist in a population where it would otherwise be outcompeted, provided it has competitive advantages within a co-infected host. Second, some individuals progressing to phenotypically drug-sensitive tuberculosis from a state of mixed drug-sensitive and drug-resistant infection may retain small subpopulations of drug-resistant bacteria that can flourish once the host is treated with antibiotics. We propose that these types of mixed infections, by increasing the ability of low fitness drug

  8. Sneutrino mixing

    International Nuclear Information System (INIS)

    Grossman, Y.

    1997-10-01

    In supersymmetric models with nonvanishing Majorana neutrino masses, the sneutrino and antisneutrino mix. The conditions under which this mixing is experimentally observable are studied, and mass-splitting of the sneutrino mass eigenstates and sneutrino oscillation phenomena are analyzed

  9. Modeling Magma Mixing: Evidence from U-series age dating and Numerical Simulations

    Science.gov (United States)

    Philipp, R.; Cooper, K. M.; Bergantz, G. W.

    2007-12-01

    Magma mixing and recharge is an ubiquitous process in the shallow crust, which can trigger eruption and cause magma hybridization. Phenocrysts in mixed magmas are recorders for magma mixing and can be studied by in- situ techniques and analyses of bulk mineral separates. To better understand if micro-textural and compositional information reflects local or reservoir-scale events, a physical model for gathering and dispersal of crystals is necessary. We present the results of a combined geochemical and fluid dynamical study of magma mixing processes at Volcan Quizapu, Chile; two large (1846/47 AD and 1932 AD) dacitic eruptions from the same vent area were triggered by andesitic recharge magma and show various degrees of magma mixing. Employing a multiphase numerical fluid dynamic model, we simulated a simple mixing process of vesiculated mafic magma intruded into a crystal-bearing silicic reservoir. This unstable condition leads to overturn and mixing. In a second step we use the velocity field obtained to calculate the flow path of 5000 crystals randomly distributed over the entire system. Those particles mimic the phenocryst response to the convective motion. There is little local relative motion between silicate liquid and crystals due to the high viscosity of the melts and the rapid overturn rate of the system. Of special interest is the crystal dispersal and gathering, which is quantified by comparing the distance at the beginning and end of the simulation for all particle pairs that are initially closer than a length scale chosen between 1 and 10 m. At the start of the simulation, both the resident and new intruding (mafic) magmas have a unique particle population. Depending on the Reynolds number (Re) and the chosen characteristic length scale of different phenocryst-pairs, we statistically describe the heterogeneity of crystal populations on the thin section scale. For large Re (approx. 25) and a short characteristic length scale of particle

  10. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent-in the corresponding growth phase-both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  11. Stochastic scalar mixing models accounting for turbulent frequency multiscale fluctuations

    International Nuclear Information System (INIS)

    Soulard, Olivier; Sabel'nikov, Vladimir; Gorokhovski, Michael

    2004-01-01

    Two new scalar micromixing models accounting for a turbulent frequency scale distribution are investigated. These models were derived by Sabel'nikov and Gorokhovski [Second International Symposium on Turbulence and Shear Flow Phenomena, Royal Institute of Technology (KTH), Stockholm, Sweden, June 27-29, 2001] using a multiscale extension of the classical interaction by exchange with the mean (IEM) and Langevin models. They are, respectively, called Extended IEM (EIEM) and Extended Langevin (ELM) models. The EIEM and ELM models are tested against DNS results in the case of the decay of a homogeneous scalar field in homogeneous turbulence. This comparison leads to a reformulation of the law governing the mixing frequency distribution. Finally, the asymptotic behaviour of the modeled PDF is discussed.
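
    The EIEM and ELM formulations are not reproduced here; the sketch below implements the classical IEM relaxation that they extend, for notional particles in a homogeneous scalar-decay case, so the role of the mixing frequency is visible.

```python
import numpy as np

# Classical IEM micromixing: each notional particle's scalar relaxes toward the
# ensemble mean at a rate set by a single mixing frequency. The EIEM/ELM models
# discussed above add a distribution of frequencies, which is omitted here.
rng = np.random.default_rng(7)
phi = rng.choice([0.0, 1.0], size=5000)           # initially segregated scalar
c_phi, omega = 2.0, 1.0                           # mixing constant and turbulent frequency
dt, n_steps = 0.01, 400

for _ in range(n_steps):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt   # IEM relaxation

t_end = n_steps * dt
print(f"scalar variance at t={t_end:.0f}: {phi.var():.2e} "
      f"(analytic decay for a balanced initial field: {0.25 * np.exp(-c_phi * omega * t_end):.2e})")
```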

  12. ρ-ω mixing and the Nolen-Schiffer anomaly in the Walecka model

    International Nuclear Information System (INIS)

    Barreiro, L.A.; Galeao, A.P.; Krein, G.

    1995-01-01

    The Nolen-Schiffer anomaly is the long-standing discrepancy between theory and experiment for binding energy differences of mirror nuclei. It appears that the anomaly is largely explained by the charge symmetry breaking force generated by ρ⁰-ω mixing. In the present contribution we present the results of a calculation of the effect of ρ⁰-ω mixing on the binding energy differences for nuclei with A = 15, 17, 39, 41 using the Walecka model for the nuclear structure. (author)

  13. Mixed integer linear programming model for dynamic supplier selection problem considering discounts

    Directory of Open Access Journals (Sweden)

    Adi Wicaksono Purnawan

    2018-01-01

    Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, the supplier identified for one period is not necessarily the same one that supplies the product in the next period. Hence, mixed integer linear programming (MILP) was adopted to address the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved with Lingo 16.
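
    A much-reduced version of such a model can be written down quickly with an open-source MILP toolkit; the sketch below uses PuLP for a single-product, two-supplier, three-period instance with fixed ordering costs, and omits the multi-product and quantity-discount structure of the paper. All data are invented.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

# Toy dynamic supplier selection MILP: one product, two suppliers, three periods,
# unit prices plus fixed ordering costs; quantity discounts are omitted here.
suppliers, periods = ["S1", "S2"], [1, 2, 3]
demand = {1: 100, 2: 150, 3: 120}
price = {"S1": 10.0, "S2": 9.0}                  # unit purchase price
order_cost = {"S1": 200.0, "S2": 350.0}          # fixed cost per order placed
capacity = {"S1": 140, "S2": 160}                # per-period supply capacity

prob = LpProblem("dynamic_supplier_selection", LpMinimize)
q = LpVariable.dicts("qty", (suppliers, periods), lowBound=0)
y = LpVariable.dicts("order", (suppliers, periods), cat=LpBinary)

prob += lpSum(price[s] * q[s][t] + order_cost[s] * y[s][t]
              for s in suppliers for t in periods)
for t in periods:
    prob += lpSum(q[s][t] for s in suppliers) == demand[t]      # meet demand each period
    for s in suppliers:
        prob += q[s][t] <= capacity[s] * y[s][t]                # buy only from selected suppliers

prob.solve()
for s in suppliers:
    for t in periods:
        if value(q[s][t]) and value(q[s][t]) > 0:
            print(f"period {t}: order {value(q[s][t]):.0f} units from {s}")
```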

  14. Modeling policy mix to improve the competitiveness of Indonesian palm oil industry

    Energy Technology Data Exchange (ETDEWEB)

    Silitonga, R. Y.H.; Siswanto, J.; Simatupang, T.; Bahagia, S.N.

    2016-07-01

    The purpose of this research is to develop a model that explains the impact of government policies on the competitiveness of the palm oil industry. The model involves two commodities in this industry, namely crude palm oil (CPO) and refined palm oil (RPO), each with a different added value. The model defines the behavior of government in controlling the palm oil industry, and its interactions with the macro-environment, in order to improve the competitiveness of the industry. Therefore the first step was to map the main activities in this industry using value chain analysis. After that a conceptual model was built, in which the output of the model is the competitiveness of the industry measured by market share. The third step was model formulation. The model is then used to simulate the policy mix applied by government to improve the competitiveness of the palm oil industry. The model was developed using only those policies that have a direct impact on the competitiveness of the industry. For the macro-environment input, only prices are considered. The model can simulate the output of the industry for various government policy mixes applied to the industry. This research develops a model that can represent the structure and relationships between industry, government and the macro environment, using value chain analysis and a hierarchical multilevel system approach. (Author)

  15. Modeling policy mix to improve the competitiveness of Indonesian palm oil industry

    Directory of Open Access Journals (Sweden)

    Roland Y H Silitonga

    2016-04-01

    Full Text Available Purpose: The purpose of this research is to develop a model that explains the impact of government policies on the competitiveness of the palm oil industry. The model involves two commodities in this industry, namely crude palm oil (CPO) and refined palm oil (RPO), each with a different added value. Design/methodology/approach: The model defines the behavior of government in controlling the palm oil industry, and its interactions with the macro-environment, in order to improve the competitiveness of the industry. Therefore the first step was to map the main activities in this industry using value chain analysis. After that a conceptual model was built, in which the output of the model is the competitiveness of the industry measured by market share. The third step was model formulation. The model is then used to simulate the policy mix applied by government to improve the competitiveness of the palm oil industry. Research limitations/implications: The model was developed using only those policies that have a direct impact on the competitiveness of the industry. For the macro-environment input, only prices are considered. Practical implications: The model can simulate the output of the industry for various government policy mixes applied to the industry. Originality/value: This research develops a model that can represent the structure and relationships between industry, government and the macro environment, using value chain analysis and a hierarchical multilevel system approach.

  16. Canards and mixed-mode oscillations in a forest pest model

    DEFF Research Database (Denmark)

    Brøns, Morten; Kaasen, Rune

    2010-01-01

    of high pest concentration. For small values of the timescale of the young trees, the model can be reduced to a two-dimensional model. By a geometrical analysis we identify a canard explosion in the reduced model, that is, a change over a narrow parameter interval from outbreak dynamics to small oscillations around an endemic state. For larger values of the timescale of the young trees the two-dimensional approximation breaks down, and a broader parameter interval with mixed-mode oscillations appears, replacing the simple canard explosion. The analysis only relies on simple and generic properties...

  17. B_s^0-B̄_s^0 mixing within minimal flavor-violating two-Higgs-doublet models

    International Nuclear Information System (INIS)

    Chang, Qin; Li, Pei-Fu; Li, Xin-Qiang

    2015-01-01

    In the “Higgs basis” for a generic 2HDM, only one scalar doublet gets a nonzero vacuum expectation value and, under the criterion of minimal flavor violation, the other one is fixed to be either color-singlet or color-octet, which are named the type-III and type-C models, respectively. In this paper, the charged-Higgs effects of these two models on B_s^0-B̄_s^0 mixing are studied. First of all, we perform a complete one-loop computation of the electroweak corrections to the amplitudes of B_s^0-B̄_s^0 mixing. Together with the up-to-date experimental measurements, a detailed phenomenological analysis is then performed in the cases of both real and complex Yukawa couplings of charged scalars to quarks. The regions of model parameter space allowed by the current experimental data on B_s^0-B̄_s^0 mixing are obtained and the differences between the type-III and type-C models are investigated, which is helpful to distinguish between these two models.

  18. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    Science.gov (United States)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
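
    A hedged sketch of the likelihood idea behind a zero-inflated Beta model follows: with some probability an observation is exactly zero, otherwise it follows a Beta distribution on (0, 1). This is a plain maximum-likelihood illustration on simulated data, not the paper's Bayesian mixed model with event-level random effects.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Zero-inflated Beta: P(y = 0) = pi, and y | y > 0 ~ Beta(a, b).
rng = np.random.default_rng(1)
pi_true, a_true, b_true = 0.3, 2.0, 5.0
n = 2000
y = np.where(rng.random(n) < pi_true, 0.0, rng.beta(a_true, b_true, n))

def neg_loglik(params):
    logit_pi, log_a, log_b = params           # unconstrained parameterization
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    a, b = np.exp(log_a), np.exp(log_b)
    ll_zero = np.log(pi) * (y == 0).sum()
    ll_pos = (np.log1p(-pi) + stats.beta.logpdf(y[y > 0], a, b)).sum()
    return -(ll_zero + ll_pos)

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
logit_pi, log_a, log_b = fit.x
print("pi, a, b =", 1 / (1 + np.exp(-logit_pi)), np.exp(log_a), np.exp(log_b))
```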

  19. An applied model for the height of the daytime mixed layer and the entrainment zone

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1994-01-01

    A model is presented for the height of the mixed layer and the depth of the entrainment zone under near-neutral and unstable atmospheric conditions. It is based on the zero-order mixed layer height model of Batchvarova and Gryning (1991) and a parameterization of the entrainment zone depth. The required inputs are the standard parameters that control the mixed-layer height: friction velocity, kinematic heat flux near the ground and the potential temperature gradient in the free atmosphere above the entrainment zone. When information is available on the horizontal divergence of the large-scale flow field, the model also takes into account the effect of subsidence...
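
    For orientation only, here is a much-simplified, zero-order slab sketch of convective mixed-layer growth driven by the inputs named above; it is not the Batchvarova-Gryning formulation (no mechanical production, entrainment-zone depth or subsidence), and the entrainment ratio and forcing values are assumptions.

```python
import numpy as np

# Zero-order convective growth: h dh/dt = (1 + 2A) (w'theta')_s / gamma,
# i.e. encroachment plus a constant entrainment ratio A.
A = 0.2               # entrainment ratio (commonly assumed value)
gamma = 0.005         # free-atmosphere potential temperature gradient, K/m
heat_flux = 0.1       # surface kinematic heat flux (w'theta')_s, K m/s
dt, hours = 60.0, 10

h = 100.0             # initial mixed-layer height, m
for step in range(int(hours * 3600 / dt)):
    dhdt = (1.0 + 2.0 * A) * heat_flux / (gamma * h)
    h += dhdt * dt
    if step % int(3600 / dt) == 0:
        print(f"t = {step * dt / 3600:4.1f} h   h = {h:7.1f} m")
```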

  20. Subgrid models for mass and thermal diffusion in turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David H [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Li, Xiao-Lin [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without...

  1. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    Science.gov (United States)

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  2. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Science.gov (United States)

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...
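
    A hedged sketch of the kind of three-parameter logistic curve used for MFA against ring number, with a tree-level random effect on the asymptote, is shown below; the data, parameter values and the naive two-stage fitting shortcut are illustrative assumptions, not the authors' nonlinear mixed-effects analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(ring, asym, xmid, scal):
    """MFA declining from a juvenile value toward the asymptote with cambial age."""
    return asym + 30.0 / (1.0 + np.exp((ring - xmid) / scal))

rng = np.random.default_rng(2)
trees, rings = 18, np.arange(1, 31)
asym_i = 10.0 + rng.normal(0.0, 2.0, trees)   # random tree effect on the asymptote

records = []
for i in range(trees):
    mfa = logistic3(rings, asym_i[i], 8.0, 3.0) + rng.normal(0.0, 1.0, rings.size)
    records.append(mfa)

# naive two-stage approach: fit each tree separately, then summarize
params = np.array([curve_fit(logistic3, rings, mfa, p0=[10, 8, 3])[0] for mfa in records])
print("fixed effects (mean asym, xmid, scal):", params.mean(axis=0))
print("between-tree SD of the asymptote:", params[:, 0].std(ddof=1))
```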

  3. A general mixed boundary model reduction method for component mode synthesis

    International Nuclear Information System (INIS)

    Voormeeren, S N; Van der Valk, P L C; Rixen, D J

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
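
    For readers unfamiliar with the starting point, here is a hedged numpy sketch of the classic fixed-interface Craig-Bampton reduction that the proposed method generalizes; the mixed free/fixed boundary selection and residual flexibility modes of the paper are not reproduced, and the toy matrices are invented.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, n_boundary, n_modes):
    """Classic Craig-Bampton: internal DoF first, interface (boundary) DoF last."""
    n = K.shape[0]
    i = slice(0, n - n_boundary)          # internal DoF
    b = slice(n - n_boundary, n)          # interface DoF

    # static constraint modes: unit displacement at each interface DoF
    psi = -np.linalg.solve(K[i, i], K[i, b])

    # fixed-interface vibration modes of the internal partition
    w2, phi = eigh(K[i, i], M[i, i])
    phi = phi[:, :n_modes]

    # reduction basis T maps (modal coords, interface DoF) -> physical DoF
    T = np.block([[phi, psi],
                  [np.zeros((n_boundary, n_modes)), np.eye(n_boundary)]])
    return T.T @ K @ T, T.T @ M @ T, T

# toy 5-DoF spring-mass chain with the last DoF taken as the interface
k = 1000.0
K = k * (2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1))
M = np.eye(5)
K_red, M_red, T = craig_bampton(K, M, n_boundary=1, n_modes=2)
print(K_red.shape, np.round(eigh(K_red, M_red, eigvals_only=True), 2))
```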

  4. Mixed Mediation of Supersymmetry Breaking in Models with Anomalous U(1) Gauge Symmetry

    International Nuclear Information System (INIS)

    Choi, Kiwoon

    2010-01-01

    There can be various built-in sources of supersymmetry breaking in models with anomalous U(1) gauge symmetry, e.g. the U(1) D-term, the F-components of the modulus superfield required for the Green-Schwarz anomaly cancellation mechanism and the chiral matter superfields required to cancel the Fayet-Iliopoulos term, and finally the supergravity auxiliary component which can be parameterized by the F-component of chiral compensator. The relative strength between these supersymmetry breaking sources depends crucially on the characteristics of D-flat direction and also on how the D-flat direction is stabilized at a vacuum with nearly vanishing cosmological constant. We examine the possible pattern of the mediation of supersymmetry breaking in models with anomalous U(1) gauge symmetry, and find that various different mixed mediation scenarios can be realized, including the mirage mediation which corresponds to a mixed modulus-anomaly mediation, D-term domination giving a split sparticle spectrum, and also a mixed gauge-D-term mediation scenario.

  5. Multi-environment QTL mixed models for drought stress adaptation in wheat

    NARCIS (Netherlands)

    Mathews, K.L.; Malosetti, M.; Chapman, S.; McIntyre, L.; Reynolds, M.; Shorter, R.; Eeuwijk, van F.A.

    2008-01-01

    Many quantitative trait loci (QTL) detection methods ignore QTL-by-environment interaction (QEI) and are limited in accommodation of error and environment-specific variance. This paper outlines a mixed model approach using a recombinant inbred spring wheat population grown in six drought stress

  6. A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market

    DEFF Research Database (Denmark)

    Hurn, A.S.; Silvennoinen, Annastiina; Teräsvirta, Timo

    We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy...
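
    A hedged sketch of the single-equation, single-transition building block of such models follows: a logistic smooth transition regression estimated by nonlinear least squares. Data and starting values are invented, and the paper's specification and misspecification tests are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def lstr(X, b10, b11, b20, b21, gamma, c):
    x, s = X                                        # regressor and transition variable
    G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))      # logistic transition function
    return (b10 + b11 * x) * (1.0 - G) + (b20 + b21 * x) * G

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
s = rng.uniform(-2, 2, size=n)
y = lstr((x, s), 0.5, 1.0, -0.5, 2.5, gamma=4.0, c=0.3) + rng.normal(0, 0.2, n)

theta, _ = curve_fit(lstr, (x, s), y, p0=[0, 1, 0, 1, 1, 0])
print("estimated (b10, b11, b20, b21, gamma, c):", np.round(theta, 2))
```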

  7. Dark matter and electroweak phase transition in the mixed scalar dark matter model

    Science.gov (United States)

    Liu, Xuewen; Bian, Ligong

    2018-03-01

    We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs that comprises the CP-even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.

  8. An overview of modeling methods for thermal mixing and stratification in large enclosures for reactor safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Per F. Peterson

    2010-10-01

    Thermal mixing and stratification phenomena play major roles in the safety of reactor systems with large enclosures, such as containment safety in the current fleet of LWRs, long-term passive containment cooling in Gen III+ plants including AP-1000 and ESBWR, cold and hot pool mixing in pool-type sodium cooled fast reactor systems (SFR), and reactor cavity cooling system behavior in high temperature gas cooled reactors (HTGR), etc. Depending on the fidelity requirement and computational resources, 0-D steady state models (heat transfer correlations), 0-D lumped parameter based transient models, 1-D physically based coarse grain models, and 3-D CFD models are available. Current major system analysis codes either have no models or only 0-D models for thermal stratification and mixing, which can only give highly approximate results for simple cases. While 3-D CFD methods can be used to analyze simple configurations, these methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries. Due to prohibitive computational expenses for long transients in very large volumes, 3-D CFD simulations remain impractical for system analyses. For mixing in stably stratified large enclosures, UC Berkeley developed 1-D models based on Zuber's hierarchical two-tiered scaling analysis (HTTSA) method, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. This paper presents an overview of important thermal mixing and stratification phenomena in large enclosures for different reactors, the major modeling methods with their advantages and limits, and potential paths to improve simulation capability and reduce analysis uncertainty in this area for advanced reactor system analysis tools.

  9. Mixing methodology, nursing theory and research design for a practice model of district nursing advocacy.

    Science.gov (United States)

    Reed, Frances M; Fitzgerald, Les; Rae, Melanie

    2016-01-01

    To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.

  10. Right-handed quark mixings in minimal left-right symmetric model with general CP violation

    International Nuclear Information System (INIS)

    Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.

    2007-01-01

    We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model which generally has both explicit and spontaneous CP violations. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W_R and the CP phase of the Higgs vacuum expectation values.

  11. A Mixed Land Cover Spatio-temporal Data Model Based on Object-oriented and Snapshot

    Directory of Open Access Journals (Sweden)

    LI Yinchao

    2016-07-01

    Full Text Available Spatio-temporal data model (STDM) is one of the hot topics in the domains of spatio-temporal databases and data analysis. It is commonly held that a universal STDM is always highly complex because of the varied nature of spatio-temporal data. In this article, a mixed STDM is proposed based on object-oriented and snapshot models for modelling and analyzing land cover change (LCC). This model uses the object-oriented STDM to describe the spatio-temporal processes of land cover patches and organize their spatial and attributive properties. In the meantime, it uses the snapshot STDM to present the spatio-temporal distribution of LCC as a whole via snapshot images. The two types of models are spatially and temporally combined into a mixed version. In addition to presenting the spatio-temporal events themselves, this model can express the transformation events between different classes of spatio-temporal objects. It can be used to create databases of historical LCC data and to perform spatio-temporal statistics, simulation and data mining. In this article, LCC data from Heilongjiang province are used as a case study to validate the spatio-temporal data management and analysis abilities of the mixed STDM, including database creation, spatio-temporal querying, global evolution analysis and the expression of patch spatio-temporal processes.

  12. Modeling Bimolecular Reactive Transport With Mixing-Limitation: Theory and Application to Column Experiments

    Science.gov (United States)

    Ginn, T. R.

    2018-01-01

    The challenge of determining the mixing extent of solutions undergoing advective-dispersive-diffusive transport is well known. In particular, the reaction extent between displacing and displaced solutes depends on mixing at the pore scale, which is generally smaller than the continuum scale at which quantification relies on dispersive fluxes. Here a novel mobile-mobile mass transfer approach is developed to distinguish diffusive mixing from dispersive spreading in one-dimensional transport involving small-scale velocity variations with some correlation, such as occurs in hydrodynamic dispersion, in which short-range ballistic transports give rise to dispersed but not mixed segregation zones, termed here ballisticules. When considering transport of a single solution, this approach distinguishes self-diffusive mixing from spreading, and in the case of displacement of one solution by another, each containing a participant reactant of an irreversible bimolecular reaction, this results in time-delayed diffusive mixing of reactants. The approach generates models for both kinetically controlled and equilibrium irreversible reaction cases, while honoring independently measured reaction rates and dispersivities. The mathematical solution for the equilibrium case is a simple analytical expression. The approach is applied to published experimental data on bimolecular reactions for homogeneous porous media under postasymptotic dispersive conditions with good results.
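
    To illustrate why reaction extent depends on mixing, here is a hedged sketch of the kinetically controlled irreversible bimolecular reaction A + B -> C under two idealized assumptions: fully well-mixed reactants versus reactants whose contact is delayed by a simple first-order mixing rate. The rate constants are invented and the sketch does not reproduce the paper's mobile-mobile mass-transfer transport model.

```python
import numpy as np
from scipy.integrate import solve_ivp

k2 = 1.0        # second-order rate constant, L/mol/s (assumed)
alpha = 0.05    # first-order rate at which segregated reactant becomes mixed, 1/s
a0 = b0 = 1.0   # initial concentrations, mol/L

def well_mixed(t, y):
    a, b, c = y
    r = k2 * a * b
    return [-r, -r, r]

def mixing_limited(t, y):
    # a_seg is A still segregated in unmixed zones; only mixed A reacts with B
    a_seg, a_mix, b, c = y
    r = k2 * a_mix * b
    return [-alpha * a_seg, alpha * a_seg - r, -r, r]

t_eval = np.linspace(0, 60, 7)
s1 = solve_ivp(well_mixed, (0, 60), [a0, b0, 0.0], t_eval=t_eval)
s2 = solve_ivp(mixing_limited, (0, 60), [a0, 0.0, b0, 0.0], t_eval=t_eval)
print("product C, well mixed:    ", np.round(s1.y[2], 3))
print("product C, mixing limited:", np.round(s2.y[3], 3))
```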

  13. Numerical Modeling of Mixing of Chemically Reacting, Non-Newtonian Slurry for Tank Waste Retrieval

    International Nuclear Information System (INIS)

    Yuen, David A.; Onishi, Yasuo; Rustad, James R.; Michener, Thomas E.; Felmy, Andrew R.; Ten, Arkady A.; Hier, Catherine A.

    2000-01-01

    Many highly radioactive wastes will be retrieved by installing mixer pumps that inject high-speed jets to stir up the sludge, saltcake, and supernatant liquid in the tank, blending them into a slurry. This slurry will then be pumped out of the tank into a waste treatment facility. Our objectives are to investigate the interactions (chemical reactions, waste rheology, and slurry mixing) occurring during the retrieval operation and to provide a scientific basis for the waste retrieval decision-making process. Specific objectives are to: (1) Evaluate numerical modeling of chemically active, non-Newtonian tank waste mixing, coupled with chemical reactions and realistic rheology; (2) Conduct numerical modeling analysis of local and global mixing of non-Newtonian and Newtonian slurries; and (3) Provide the bases to develop a scientifically justifiable, decision-making support tool for the tank waste retrieval operation.

  14. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  15. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  16. Local and/or organic: A study on consumer preferences for organic food and food from different origins

    OpenAIRE

    Feldmann , Corinna; Hamm, Ulrich

    2014-01-01

    The purpose of this paper is to get a deeper insight into consumer preferences for different food products varying in their places of origin (i.e. local, Germany, neighbouring country, non-EU country) and production practices (i.e. organic vs. non-organic). Therefore, consumer surveys combined with choice experiments were conducted with 641 consumers in eight supermarkets in different parts of Germany. Multinomial and mixed logit models were estimated to draw conclusions on the preference str...
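
    Since the record above returns to the collection's central method, a hedged sketch of a mixed logit estimated by simulated maximum likelihood is given below: each respondent faces several binary buy/no-buy tasks, the price coefficient is fixed and the "organic" coefficient is random (normal across respondents) to capture taste heterogeneity. All data are simulated and the specification is not the authors' multinomial model or their actual choice-experiment design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_people, n_tasks, n_draws = 300, 8, 200
price = rng.uniform(1, 5, (n_people, n_tasks))
organic = rng.integers(0, 2, (n_people, n_tasks)).astype(float)

beta_price, mu_org, sd_org = -0.8, 1.0, 1.2                   # true values
b_i = mu_org + sd_org * rng.normal(size=(n_people, 1))        # individual tastes
p_buy = 1 / (1 + np.exp(-(beta_price * price + b_i * organic)))
choice = (rng.random((n_people, n_tasks)) < p_buy).astype(float)

draws = rng.normal(size=(n_people, n_draws))  # pseudo-random draws (Halton draws are common in practice)

def neg_simulated_loglik(theta):
    bp, mu, log_sd = theta
    b = mu + np.exp(log_sd) * draws                           # (people, draws)
    v = bp * price[:, :, None] + b[:, None, :] * organic[:, :, None]
    p1 = 1 / (1 + np.exp(-v))
    p_obs = np.where(choice[:, :, None] == 1.0, p1, 1 - p1)
    sim_prob = p_obs.prod(axis=1).mean(axis=1)                # integrate over the draws
    return -np.sum(np.log(sim_prob + 1e-300))

fit = minimize(neg_simulated_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
bp, mu, log_sd = fit.x
print("price:", round(bp, 2), " organic mean:", round(mu, 2),
      " organic sd:", round(np.exp(log_sd), 2))
```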

  17. New Evidence on Measuring Financial Constraints: Moving Beyond the KZ Index

    OpenAIRE

    Charles J. Hadlock; Joshua R. Pierce

    2010-01-01

    We collect detailed qualitative information from financial filings to categorize financial constraints for a random sample of firms from 1995 to 2004. Using this categorization, we estimate ordered logit models predicting constraints as a function of different quantitative factors. Our findings cast serious doubt on the validity of the KZ index as a measure of financial constraints, while offering mixed evidence on the validity of other common measures of constraints. We find that firm size a...

  18. Examples of mixed-effects modeling with crossed random effects and with binomial data

    NARCIS (Netherlands)

    Quené, H.; van den Bergh, H.

    2008-01-01

    Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not nested...

  19. Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.

    Science.gov (United States)

    Gunjan, Madhu Ranjan; Raj, Rishi

    2017-07-18

    The spatiotemporal evolution of an evaporating sessile droplet and its effect on lifetime is crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely, the constant contact radius (CCR), the constant contact angle (CCA), and the mixed, only the CCR and the CCA modes have been modeled reasonably. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. We visualize that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate, and the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness and hence the extent of CCR mode over CCA mode forces appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features. The apparent roughness fluctuates about the mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among

  20. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    Science.gov (United States)

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  1. Chlorophyll modulation of mixed layer thermodynamics in a mixed-layer isopycnal General Circulation Model - An example from Arabian Sea and equatorial Pacific

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.; Frouin, R.

    is influenced not only by local vertical mixing but also by horizontal convergence of mass and heat, a mixed layer model must consider both full dynamics due to the use of primitive equations and a parameterization for the vertical mass transfer and related... is dynamically determined without such a constraint. Instantaneous atmospheric fields are interpolated from the monthly means. Monthly mean climatology of chlorophyll pigment concentrations were obtained from the Coastal Zone Color Scanner (CZCS) from...

  2. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    Science.gov (United States)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  3. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
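
    A hedged sketch of a broken-line linear (BLL) ascending dose-response fit of the kind compared in this record follows: the response rises linearly with the SID Trp:Lys ratio up to a breakpoint and plateaus afterwards. It uses simulated data and ordinary nonlinear least squares; the paper's mixed-model structure (block effects, heterogeneous error variances, GLIMMIX/NLMIXED) is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, breakpoint):
    """Broken-line linear ascending model: linear rise below the breakpoint, flat above."""
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

rng = np.random.default_rng(5)
trp_lys = np.repeat(np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0]), 8)
gf = bll(trp_lys, 0.68, 0.02, 16.5) + rng.normal(0, 0.01, trp_lys.size)

theta, cov = curve_fit(bll, trp_lys, gf, p0=[0.65, 0.01, 16.0])
se = np.sqrt(np.diag(cov))
print(f"breakpoint = {theta[2]:.2f} (SE {se[2]:.2f}), plateau = {theta[0]:.3f}")
```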

  4. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE

  5. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    Energy Technology Data Exchange (ETDEWEB)

    Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)

    2015-10-28

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
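
    The two records above rely on a bivariate normal (Gaussian) copula to couple the diameter and height marginals; a hedged sketch of that coupling idea is given below. The marginal distributions and the correlation value are assumptions, not the fitted SDE mixed-effects marginals from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, rho = 1000, 0.85

# simulate correlated normal scores, then map them to assumed marginals
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)
diameter = stats.lognorm(s=0.35, scale=25).ppf(u[:, 0])    # cm, assumed marginal
height = stats.gamma(a=20, scale=1.2).ppf(u[:, 1])         # m, assumed marginal

# recover the copula correlation from the data via normal scores of the ranks
def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

rho_hat = np.corrcoef(normal_scores(diameter), normal_scores(height))[0, 1]
print("true rho:", rho, " estimated rho:", round(rho_hat, 3))
```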

  6. The probabilistic model of the process mixing of animal feed ingredients into a continuous mixer-reactor

    Directory of Open Access Journals (Sweden)

    L. I. Lytkina

    2016-01-01

    Full Text Available A mathematical model of the mixing of a polydisperse medium reflects its stochastic features: the uneven distribution of phase elements over their residence time in the apparatus, particle size, pulsations in the material hold-up of the apparatus, the random distribution of material and thermal phase flows over the working volume, and the heterogeneity of the medium's physical and chemical properties, complicated by chemical reaction. For the mathematical description of the mixing of animal feed ingredients in the presence of a chemical reaction, the system of differential equations of Academician V.V. Kafarov was used. His hypothesis, based on the theory of Markov processes, states that "any multicomponent mixture can be considered as the result of an iterative process of mixing two components to achieve the desired uniformity of all the ingredients in the mixture"; it allows the mixing of a binary composition in a paddle mixer to be described by differential equations for the repeated changes in the concentrations of the two ingredients until a homogeneous mixture is obtained. It was found that the mixing of the two-component mixture in a paddle mixer is characterized by a constant mixing rate and a limiting (equilibrium) dispersion of the ingredients in the mixture, i.e. by its uniformity. The model parameters were adjusted against experimental studies on mixing crushed wheat with a metallomagnetic impurity, which served as the key (indicator) component. The parameters of the mathematical model were identified from the best-fit values of the continuous mixing rate constant and the equilibrium dispersion of the ingredient contents. The results obtained are used to develop a new-generation mixer design.
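
    A hedged sketch of the first-order "approach to equilibrium" behaviour described above follows: the dispersion (variance) of the key component decays at a constant mixing rate toward a limiting (equilibrium) dispersion. The data points and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def variance_decay(t, s2_eq, s2_0, k):
    """Variance relaxes exponentially from s2_0 to the equilibrium value s2_eq."""
    return s2_eq + (s2_0 - s2_eq) * np.exp(-k * t)

t = np.array([0, 10, 20, 40, 60, 90, 120])                         # mixing time, s
s2 = np.array([0.250, 0.140, 0.085, 0.038, 0.022, 0.014, 0.012])   # sample variance

(s2_eq, s2_0, k), _ = curve_fit(variance_decay, t, s2, p0=[0.01, 0.25, 0.05])
print(f"mixing rate constant k = {k:.3f} 1/s, equilibrium dispersion = {s2_eq:.4f}")
```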

  7. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    Science.gov (United States)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 C respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the fresh water flux exhibits larger spatial fluctuations than the surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of the surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  8. Improving navigability on the Kromme River Estuary: A choice ...

    African Journals Online (AJOL)

    2013-03-14

    Keywords: logit model, random parameters logit model. Only a fragment of the abstract is recoverable: the choice among alternatives is treated by the RUM as a stochastic, utility-maximising choice (Louviere et al. ...).

  9. Swell impact on wind stress and atmospheric mixing in a regional coupled atmosphere-wave model

    DEFF Research Database (Denmark)

    Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik

    2016-01-01

    Over the ocean, the atmospheric turbulence can be significantly affected by swell waves. Change in the atmospheric turbulence affects the wind stress and atmospheric mixing over swell waves. In this study, the influence of swell on atmospheric mixing and wind stress is introduced into an atmosphere-wave-coupled regional climate model, separately and combined. The swell influence on atmospheric mixing is introduced into the atmospheric mixing length formula by adding a swell-induced contribution to the mixing. The swell influence on the wind stress under wind-following swell, moderate-range wind, and near-neutral and unstable stratification conditions is introduced by changing the roughness length. Five-year simulation results indicate that adding the swell influence on atmospheric mixing has limited influence, only slightly increasing the near-surface wind speed; in contrast, adding the swell influence on wind stress...

  10. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.

  11. Spatial generalised linear mixed models based on distances.

    Science.gov (United States)

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  12. A Mixed Method Research for Finding a Model of Administrative Decentralization

    OpenAIRE

    Tahereh Feizy; Alireza Moghali; Masuod Geramipoor; Reza Zare

    2015-01-01

    One of the critical issues of administrative decentralization in translating theory into practice is understanding its meaning. An important method to identify administrative decentralization is to address how it can be planned and implemented, and what are its implications, and how it would overcome challenges. The purpose of this study is finding a model for analyzing and evaluating administrative decentralization, so a mixed method research was used to explore and confirm the model of Admi...

  13. Mixing of ν_e and ν_μ in SO(10) models

    International Nuclear Information System (INIS)

    Milton, K.; Nandi, S.; Tanaka, K.

    1982-01-01

    We found previously in SO(10) grand unified theories that if the neutrinos have a Dirac mass and a right-handed Majorana mass (approximately 10^15 GeV) but no left-handed Majorana mass, there is small ν_e mixing but ν_μ-ν_τ mixing can be substantial. We reexamine this problem on the basis of a formalism that assumes that the up, down, lepton, and neutrino mass matrices arise from a single complex 10 and a single 126 Higgs boson. This formalism determines the Majorana mass matrix in terms of quark mass matrices. Adopting three different sets of quark mass matrices that produce acceptable fermion mass ratios and Cabibbo mixing, we obtain results consistent with the above; however, in the optimum case, ν_e-ν_μ mixing can be of the order of the Cabibbo angle. In an extension of this model wherein the Witten mechanism generates the Majorana mass, we illustrate quantitatively how the parameter characterizing the Majorana sector must be tuned in order to achieve large ν_e-ν_μ mixing.

  14. A Mixed Prediction Model of Ground Subsidence for Civil Infrastructures on Soft Ground

    Directory of Open Access Journals (Sweden)

    Kiyoshi Kobayashi

    2012-01-01

    Full Text Available The estimation of ground subsidence processes is an important subject for the asset management of civil infrastructures on soft ground, such as airport facilities. In the planning and design stage, there are many uncertainties in geotechnical conditions, and it is impossible to estimate the ground subsidence process by deterministic methods. In this paper, sets of sample paths representing ground subsidence processes are generated by use of a one-dimensional consolidation model incorporating inhomogeneous ground subsidence. Given the sample paths, a mixed subsidence model is presented to describe the probabilistic structure behind the sample paths. The mixed model can be updated by Bayesian methods based upon newly obtained monitoring data. Concretely speaking, the Markov chain Monte Carlo method, a frontier technique in Bayesian statistics, is applied to estimate the updated models. Through a case study, this paper discusses the applicability of the proposed method and illustrates its possible applications and future work.

  15. Translational mixed-effects PKPD modelling of recombinant human growth hormone - from hypophysectomized rat to patients

    DEFF Research Database (Denmark)

    Thorsted, A; Thygesen, P; Agersø, H

    2016-01-01

    BACKGROUND AND PURPOSE: We aimed to develop a mechanistic mixed-effects pharmacokinetic (PK)-pharmacodynamic (PD) (PKPD) model for recombinant human growth hormone (rhGH) in hypophysectomized rats and to predict the human PKPD relationship. EXPERIMENTAL APPROACH: A non-linear mixed-effects model was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs. ... s.c. administration was over predicted. After correction of the human s.c. absorption model, the induction model for IGF-1 well described the human PKPD data. CONCLUSIONS: A translational mechanistic PKPD model for rhGH was successfully developed from experimental rat data. The model links...

  16. A joint model of mode and shipment size choice using the first generation of Commodity Flow Survey Public Use Microdata

    Directory of Open Access Journals (Sweden)

    Monique Stinson

    2017-12-01

    Full Text Available A behavior-based supply chain and freight transportation model was developed and implemented for the Maricopa Association of Governments (MAG) and Pima Association of Governments (PAG). This innovative, data-driven modeling system simulates commodity flows to, from and within the Phoenix and Tucson megaregion and is used for regional planning purposes. This paper details the logistics choice component of the system and describes the position and functioning of this component in the overall framework. The logistics choice model uses a nested logit formulation to evaluate mode choice and shipment size jointly. Modeling decisions related to integrating this component within the overall framework are discussed. This paper also describes practical insights gained from using the 2012 Commodity Flow Survey Public Use Microdata (released in 2015), which was the principal data source used to estimate the joint shipment size-mode choice nested logit model. Finally, the validation effort and related lessons learned are described.
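
    A hedged sketch of how a joint shipment size-mode choice can be evaluated with a nested logit follows: shipment-size classes form the upper nests and modes the lower alternatives, linked through the nest inclusive values (logsums). The utilities, nesting structure and logsum coefficients below are invented for illustration, not the estimated CFS PUM model.

```python
import numpy as np

nests = {                       # nest -> {alternative: systematic utility V}
    "small_shipment": {"parcel": 1.2, "truck_ltl": 0.8},
    "large_shipment": {"truck_tl": 1.0, "rail": 0.4},
}
lambdas = {"small_shipment": 0.6, "large_shipment": 0.7}   # logsum coefficients

def nested_logit_probs(nests, lambdas):
    # inclusive value (logsum) of each nest
    logsums = {m: np.log(sum(np.exp(v / lambdas[m]) for v in alts.values()))
               for m, alts in nests.items()}
    denom = sum(np.exp(lambdas[m] * logsums[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = np.exp(lambdas[m] * logsums[m]) / denom
        within_denom = sum(np.exp(v / lambdas[m]) for v in alts.values())
        for alt, v in alts.items():
            probs[(m, alt)] = p_nest * np.exp(v / lambdas[m]) / within_denom
    return probs

for (size_class, mode), p in nested_logit_probs(nests, lambdas).items():
    print(f"P({size_class:15s}, {mode:9s}) = {p:.3f}")
```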

  17. TRANSP modeling of minority ion sawtooth mixing in ICRF + NBI heated discharges in TFTR

    International Nuclear Information System (INIS)

    Goldfinger, R.C.; Batchelor, D.B.; Murakami, M.; Phillips, C.K.; Budny, R.; Hammett, G.W.; McCune, D.M.; Wilson, J.R.; Zarnstorff, M.C.

    1995-01-01

    Time independent code analysis indicates that the sawtooth relaxation phenomenon affects RF power deposition profiles through the mixing of fast ions. Predicted central electron heating rates are substantially above experimental values unless sawtooth relaxation is included. The PPPL time dependent transport analysis code, TRANSP, currently has a model to redistribute thermal electron and ion species, energy densities, plasma current density, and fast ions from neutral beam injection at each sawtooth event using the Kadomtsev (3) prescription. Results are presented here in which the set of models is extended to include sawtooth mixing effects on the hot ion population generated from ICRF heating. The ICRF-generated hot ion distribution function in (v_parallel, v_perpendicular), which is strongly peaked at the center before each sawtooth, is replaced throughout the sawtooth mixing volume by its volume-averaged value at each sawtooth. The modified distribution function is then used to recalculate the collisional transfer of power from the minority species to the background species. Results demonstrate that neglect of sawtooth mixing of ICRF-induced fast ions leads to prediction of faster central electron reheat rates than are measured experimentally.

  18. A local mixing model for deuterium replacement in solids

    International Nuclear Information System (INIS)

    Doyle, B.L.; Brice, D.K.; Wampler, W.R.

    1980-01-01

    A new model for hydrogen isotope exchange by ion implantation has been developed. The basic difference between the present approach and previous work is that the depth distribution of the implanted species is included. The outstanding feature of this local mixing model is that the only adjustable parameter is the saturation hydrogen concentration which is specific to the target material and dependent only on temperature. The model is shown to give excellent agreement both with new data on H/D exchange in the low Z coating materials VB2, TiC, TiB2, and B reported here and with previously reported data on stainless steel. The saturation hydrogen concentrations used to fit these data were 0.15, 0.25, 0.15, 0.45, and 1.00 times atomic density respectively. This model should be useful in predicting the recycling behavior of hydrogen isotopes in tokamak limiter and wall materials. (author)

  19. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    Science.gov (United States)

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
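
    The mixed-order model referred to above is the rate law dL/dt = -kL^n with the order n left free; for n ≠ 1 the BOD remaining is L(t) = [L0^(1-n) - (1-n)kt]^(1/(1-n)). As a rough illustration (ordinary least squares rather than the Bayesian estimation used in the study, and with synthetic data), the exponent can be recovered as follows.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bod_exerted(t, L0, k, n):
        """BOD exerted y(t) = L0 - L(t) for dL/dt = -k*L**n, n != 1 (units of k depend on n)."""
        base = np.maximum(L0**(1.0 - n) - (1.0 - n) * k * t, 0.0)
        return L0 - base**(1.0 / (1.0 - n))

    # Synthetic observations (illustrative only); n = 1 (first order) is a special case
    # that would be handled separately in practice.
    rng = np.random.default_rng(0)
    t = np.array([1, 2, 3, 5, 7, 10, 15, 20], dtype=float)
    y_obs = bod_exerted(t, L0=10.0, k=0.25, n=1.3) + rng.normal(0, 0.1, t.size)

    popt, pcov = curve_fit(bod_exerted, t, y_obs, p0=[8.0, 0.2, 1.1])
    print("L0, k, n =", popt)
    ```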

  20. Modeling of Cd(II) sorption on mixed oxide

    International Nuclear Information System (INIS)

    Waseem, M.; Mustafa, S.; Naeem, A.; Shah, K.H.; Hussain, S.Y.; Safdar, M.

    2011-01-01

    Mixed oxide of iron and silicon (0.75 M Fe(OH)3 : 0.25 M SiO2) was synthesized and characterized by various techniques, including surface area analysis, point of zero charge (PZC), energy-dispersive X-ray (EDX) spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD) analysis. The uptake of Cd2+ ions on the mixed oxide increased with pH, temperature and metal ion concentration. Sorption data have been interpreted in terms of both the Langmuir and Freundlich models. The Xm values at pH 7 are found to be almost twice those at pH 5. The values of both ΔH and ΔS were found to be positive, indicating that the sorption process was endothermic and accompanied by the dehydration of Cd2+. Further, the negative value of ΔG confirms the spontaneity of the reaction. An ion exchange mechanism was suggested for Cd2+ ions at pH 5, whereas at pH 7 ion exchange was found to be coupled with non-specific adsorption of metal cations. (author)
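
    As a hedged illustration of the isotherm-fitting step, the sketch below fits the Langmuir and Freundlich equations to made-up equilibrium data with SciPy; the concentrations, units and starting values are assumptions, not the reported measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, Xm, b):
        """Langmuir isotherm: q = Xm*b*C / (1 + b*C)."""
        return Xm * b * C / (1.0 + b * C)

    def freundlich(C, Kf, n):
        """Freundlich isotherm: q = Kf * C**(1/n)."""
        return Kf * C**(1.0 / n)

    # Illustrative equilibrium data (C in mmol/L, q in mmol/g); not the paper's measurements.
    C = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
    q = np.array([0.08, 0.16, 0.26, 0.38, 0.49, 0.57])

    (Xm, b), _ = curve_fit(langmuir, C, q, p0=[0.7, 1.0])
    (Kf, n), _ = curve_fit(freundlich, C, q, p0=[0.3, 2.0])
    print(f"Langmuir: Xm={Xm:.3f}, b={b:.3f};  Freundlich: Kf={Kf:.3f}, n={n:.2f}")
    ```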

  1. Constraints on the mixing angle between ordinary and heavy leptons in a (V - A) model

    International Nuclear Information System (INIS)

    Hioki, Zenro

    1977-01-01

    The possibility of the mixing between ordinary and heavy leptons in a pure (V-A) model with SU(2) x U(1) gauge group is investigated. It is shown that to be consistent with the present experimental data on various neutral current reactions, this mixing must be small for any choice of the Weinberg angle in the case M_W = M_Z cos θ_W. The tri-muon production from the leptonic vertex through this mixing is also discussed. (auth.)

  2. On the use of the Prandtl mixing length model in the cutting torch modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mancinelli, B [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina); Minotti, F O; Kelly, H, E-mail: bmancinelli@arnet.com.ar [Instituto de Fisica del Plasma (CONICET), Departamento de Fisica, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2011-05-01

    The Prandtl mixing length model has been used to take into account the turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties, it was also found that the value c = 0.08 allows both the experimental velocity and temperature data to be reproduced.

  3. On the use of the Prandtl mixing length model in the cutting torch modeling

    International Nuclear Information System (INIS)

    Mancinelli, B; Minotti, F O; Kelly, H

    2011-01-01

    The Prandtl mixing length model has been used to take into account the turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties, it was also found that the value c = 0.08 allows both the experimental velocity and temperature data to be reproduced.

  4. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  5. Modeling Types of Pedal Applications Using a Driving Simulator.

    Science.gov (United States)

    Wu, Yuqing; Boyle, Linda Ng; McGehee, Daniel; Roe, Cheryl A; Ebe, Kazutoshi; Foley, James

    2015-11-01

    The aim of this study was to examine variations in drivers' foot behavior and identify factors associated with pedal misapplications. Few studies have focused on the foot behavior while in the vehicle and the mishaps that a driver can encounter during a potentially hazardous situation. A driving simulation study was used to understand how drivers move their right foot toward the pedals. The study included data from 43 drivers as they responded to a series of rapid traffic signal phase changes. Pedal application types were classified as (a) direct hit, (b) hesitated, (c) corrected trajectory, and (d) pedal errors (incorrect trajectories, misses, slips, or pressed both pedals). A mixed-effects multinomial logit model was used to predict the likelihood of one of these pedal applications, and linear mixed models with repeated measures were used to examine the response time and pedal duration given the various experimental conditions (stimuli color and location). Younger drivers had higher probabilities of direct hits when compared to other age groups. Participants tended to have more pedal errors when responding to a red signal or when the signal appeared to be closer. Traffic signal phases and locations were associated with pedal response time and duration. The response time and pedal duration affected the likelihood of being in one of the four pedal application types. Findings from this study suggest that age-related and situational factors may play a role in pedal errors, and the stimuli locations could affect the type of pedal application. © 2015, Human Factors and Ergonomics Society.

  6. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  7. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  8. Model for transversal turbulent mixing in axial flow in rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.

    1990-01-01

    The present work consists of the development of a model for the transversal eddy diffusivity to account for the effect of turbulent thermal mixing in axial flow in rod bundles. The results were compared with existing correlations currently used in reactor thermal-hydraulic analysis and the agreement was considered satisfactory. (author)

  9. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  10. A dependent stress-strength interference model based on mixed copula function

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Jian Xiong; An, Zong Wen; Liu, Bo [School of Mechatronics Engineering, Lanzhou University of Technology, Lanzhou (China)

    2016-10-15

    In the traditional stress-strength interference (SSI) model, stress and strength must satisfy the basic assumption of mutual independence. In practical engineering, however, a complex dependence between stress and strength exists. To evaluate structural reliability in the case where stress and strength are dependent, a mixed copula function is introduced into a new dependent SSI model. This model can fully characterize the dependence between stress and strength. The residual square sum method and a genetic algorithm are also used to estimate the unknown parameters of the model. Finally, the validity of the proposed model is demonstrated via a practical case. Results show that the traditional SSI model, which ignores the dependence between stress and strength, tends to overestimate product reliability relative to the new dependent SSI model.
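
    The reliability quantity of interest is R = P(strength > stress) under dependence. The sketch below estimates it by Monte Carlo with a single Gaussian copula standing in for the paper's mixed copula; the correlation, marginals and parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 200_000
    rho = 0.4  # assumed stress-strength dependence (Gaussian copula parameter)

    # Sample correlated uniforms through a Gaussian copula.
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)

    # Illustrative marginals: lognormal strength, normal stress (units arbitrary).
    strength = stats.lognorm(s=0.15, scale=400.0).ppf(u[:, 0])
    stress = stats.norm(loc=300.0, scale=40.0).ppf(u[:, 1])

    reliability = np.mean(strength > stress)
    print(f"Estimated reliability P(strength > stress) = {reliability:.4f}")
    ```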

  11. Continuous synthesis of drug-loaded nanoparticles using microchannel emulsification and numerical modeling: effect of passive mixing

    Directory of Open Access Journals (Sweden)

    Ortiz de Solorzano I

    2016-07-01

    Full Text Available Isabel Ortiz de Solorzano,1,2,* Laura Uson,1,2,* Ane Larrea,1,2,* Mario Miana,3 Victor Sebastian,1,2 Manuel Arruebo1,2 1Department of Chemical Engineering and Environmental Technologies, Institute of Nanoscience of Aragon (INA), University of Zaragoza, 2CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Centro de Investigación Biomédica en Red, Madrid, 3ITAINNOVA, Instituto Tecnológico de Aragón, Materials & Components, Zaragoza, Spain *These authors contributed equally to this work Abstract: By using interdigital microfluidic reactors, monodisperse poly(d,l-lactic-co-glycolic acid) nanoparticles (NPs) can be produced in a continuous manner and at a large scale (~10 g/h). An optimized synthesis protocol was obtained by selecting the appropriate passive mixer and fluid flow conditions to produce monodisperse NPs. A reduced NP polydispersity was obtained when using the microfluidic platform compared with the one obtained with NPs produced in a conventional discontinuous batch reactor. Cyclosporin, an immunosuppressant drug, was used as a model to validate the efficiency of the microfluidic platform to produce drug-loaded monodisperse poly(d,l-lactic-co-glycolic acid) NPs. The influence of the mixer geometries and temperatures was analyzed, and the experimental results were corroborated by using computational fluid dynamic three-dimensional simulations. Flow patterns, mixing times, and mixing efficiencies were calculated, and the model was supported with experimental results. The progress of mixing in the interdigital mixer was quantified by using the volume fractions of the organic and aqueous phases used during the emulsification–evaporation process. The developed model and methods were applied to determine the required time for achieving complete mixing in each microreactor at different fluid flow conditions, temperatures, and mixing rates. Keywords: microchannel emulsification, high-throughput synthesis, drug-loaded polymer

  12. Evaluating the market splitting determinants: evidence from the Iberian spot electricity prices

    International Nuclear Information System (INIS)

    Figueiredo, Nuno Carvalho; Silva, Patrícia Pereira da; Cerqueira, Pedro A.

    2015-01-01

    This paper aims to assess the main determinants of the market splitting behaviour of the Iberian electricity spot markets. Iberia stands as an ideal case study, with high levels of wind power deployment together with the implementation of the market splitting arrangement between the Portuguese and the Spanish spot electricity markets. Logit and non-parametric models are used to express the probability response for market splitting of day-ahead spot electricity prices as a function of the explanatory variables representing the main technologies in the generation mix: wind, hydro, thermal and nuclear power, together with the available transfer capacity and electricity demand. Logit models give preliminary indications about market splitting behaviour, and then, notwithstanding the demanding computational challenge, a non-parametric model is applied in order to overcome the limitations of the former models. Results show an increase in market splitting probability with higher wind power generation or, more generally, with higher availability of low marginal cost electricity such as nuclear power generation. The European interconnection capacity target of 10% of the peak demand of the smallest interconnected market might be insufficient to maintain electricity market integration. Therefore, pro-active coordination policies, governing both interconnections and renewables deployment, should be further developed. -- Highlights: •Assess determinants of market splitting behaviour of Iberian electricity markets. •Logit and non-parametric models to express market splitting probability response. •Explanatory variables: wind, hydro, thermal and nuclear power; ATC and demand. •Results: increase of market splitting probability with higher availability of low marginal cost electricity. •Coordination policies governing both interconnections and renewables deployment
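
    A minimal sketch of the logit stage is shown below using statsmodels: a binary indicator for market splitting is regressed on generation-mix and interconnection variables. The column names, simulated data and coefficients are assumptions for illustration, not the paper's dataset.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Illustrative hourly data: 1 = prices split, 0 = prices coupled.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "wind_gw": rng.uniform(0, 10, 500),
        "hydro_gw": rng.uniform(0, 8, 500),
        "atc_gw": rng.uniform(0.5, 3, 500),
        "demand_gw": rng.uniform(20, 45, 500),
    })
    # Simulated outcome: splitting more likely with high wind, less likely with high ATC.
    logit_index = 0.4 * df["wind_gw"] - 0.8 * df["atc_gw"] - 1.0
    df["split"] = rng.uniform(size=500) < 1 / (1 + np.exp(-logit_index))

    X = sm.add_constant(df[["wind_gw", "hydro_gw", "atc_gw", "demand_gw"]])
    res = sm.Logit(df["split"].astype(float), X).fit(disp=False)
    print(res.summary())
    ```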

  13. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach makes it easy to compute risk neutral predictive price densities which take into account parameter uncertainty....... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors...... measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...

  14. GUT and flavor models for neutrino masses and mixing

    Science.gov (United States)

    Meloni, Davide

    2017-10-01

    In the recent years experiments have established the existence of neutrino oscillations and most of the oscillation parameters have been measured with a good accuracy. However, in spite of many interesting ideas, no real illumination was sparked on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.

  15. Groundwater contamination from an inactive uranium mill tailings pile. 2. Application of a dynamic mixing model

    International Nuclear Information System (INIS)

    Narashimhan, T.N.; White, A.F.; Tokunaga, T.

    1986-01-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series the authors presented field data as well as an interpretation based on a static mixing model. As an upper bound, the authors estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work they present the results of a numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNamic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably well with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency

  16. Quantifying atmospheric transport, chemistry, and mixing using a new trajectory-box model and a global atmospheric-chemistry GCM

    Directory of Open Access Journals (Sweden)

    H. Riede

    2009-12-01

    Full Text Available We present a novel method for the quantification of transport, chemistry, and mixing along atmospheric trajectories based on a consistent model hierarchy. The hierarchy consists of the new atmospheric-chemistry trajectory-box model CAABA/MJT and the three-dimensional (3-D) global ECHAM/MESSy atmospheric-chemistry (EMAC) general circulation model. CAABA/MJT employs the atmospheric box model CAABA in a configuration using the atmospheric-chemistry submodel MECCA (M), the photochemistry submodel JVAL (J), and the new trajectory submodel TRAJECT (T), to simulate chemistry along atmospheric trajectories, which are provided offline. With the same chemistry submodels coupled to the 3-D EMAC model and consistent initial conditions and physical parameters, a unique consistency between the two models is achieved. Since only mixing processes within the 3-D model are excluded from the model consistency, comparisons of results from the two models allow the contributions of transport, chemistry, and mixing along the trajectory pathways to be separated and quantified. Consistency of transport between the trajectory-box model CAABA/MJT and the 3-D EMAC model is achieved via calculation of kinematic trajectories based on 3-D wind fields from EMAC using the trajectory model LAGRANTO. The combination of the trajectory-box model CAABA/MJT and the trajectory model LAGRANTO can be considered as a Lagrangian chemistry-transport model (CTM) moving isolated air parcels. The procedure for obtaining the necessary statistical basis for the quantification method is described, as well as the comprehensive diagnostics with respect to chemistry.

    The quantification method presented here allows the characteristics of transport, chemistry, and mixing in a grid-based 3-D model to be investigated. The analysis of chemical processes within the trajectory-box model CAABA/MJT is easily extendable to include, for example, the impact of different transport pathways or of mixing processes onto

  17. An uncertainty inclusive un-mixing model to identify tracer non-conservativeness

    Science.gov (United States)

    Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.

    2015-04-01

    Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km2 river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (affected whereas uncertainty was only marginally impacted by the corrupted tracer. Improvement of uncertainty resulted from increasing the number of tracers in both the perfect and corrupted datasets. FR2000 was capable of detecting non-conservative tracer behaviour within the range of mean source values, therefore, it provided a more
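
    Setting the uncertainty treatment aside, the core un-mixing step is a constrained least-squares problem: find non-negative source proportions that sum to one and best reproduce the target tracer signature. A minimal sketch with illustrative tracer values (not FR2000 itself) follows.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Rows of A: tracer values for each source (columns) -- illustrative numbers only.
    A = np.array([[12.0, 30.0, 22.0],    # tracer 1 for sources: topsoil, channel bank, road verge
                  [ 5.0,  2.5,  8.0],    # tracer 2
                  [ 0.8,  1.6,  1.1]])   # tracer 3
    b = np.array([20.5, 4.8, 1.2])       # tracer values measured in the target sediment

    def objective(x):
        # Sum of squared residuals between modelled and measured target tracers.
        return np.sum((A @ x - b) ** 2)

    x0 = np.full(3, 1.0 / 3.0)
    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}])
    print("Estimated source proportions:", np.round(res.x, 3))
    ```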

  18. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    Science.gov (United States)

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
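
    For readers who want a concrete starting point, the sketch below fits a random-intercept LMM with statsmodels on simulated ecological data; the variable names and effect sizes are assumptions chosen only to illustrate the syntax.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: body mass vs. temperature with a random intercept per site.
    rng = np.random.default_rng(7)
    n_sites, n_per_site = 12, 30
    site = np.repeat(np.arange(n_sites), n_per_site)
    site_effect = rng.normal(0, 2.0, n_sites)[site]
    temperature = rng.uniform(5, 25, n_sites * n_per_site)
    body_mass = 10 + 0.5 * temperature + site_effect + rng.normal(0, 1.0, n_sites * n_per_site)
    df = pd.DataFrame({"body_mass": body_mass, "temperature": temperature, "site": site})

    # Random-intercept linear mixed model: body_mass ~ temperature + (1 | site)
    model = smf.mixedlm("body_mass ~ temperature", df, groups=df["site"])
    result = model.fit()
    print(result.summary())
    ```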

  19. Mixed butanols addition to gasoline surrogates: Shock tube ignition delay time measurements and chemical kinetic modeling

    KAUST Repository

    AlRamadan, Abdullah S.

    2015-10-01

    The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to the conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of mixed butanols stoichiometric mixtures were measured at 20 and 40bar over a temperature range of 800-1200K. Next, 10vol% and 20vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated in the shock tube conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850K), the model shows that the mixed butanols addition to TPRFs causes the ignition delay times to increase and hence behaves like an octane booster at engine-like conditions. © 2015 The Combustion Institute.

  20. Forecasting Costa Rican Quarterly Growth with Mixed-frequency Models

    Directory of Open Access Journals (Sweden)

    Adolfo Rodríguez Vargas

    2014-11-01

    Full Text Available We assess the utility of mixed-frequency models to forecast the quarterly growth rate of Costa Rican real GDP: we estimate bridge and MiDaS models with several lag lengths using information from the IMAE and compute forecasts (horizons of 0-4 quarters), which are compared among themselves, with those of ARIMA models and with those resulting from forecast combinations. Combining the most accurate forecasts is most useful when forecasting in real time, whereas MiDaS forecasts are the best-performing overall: as the forecasting horizon increases, their precision is affected relatively little; their success rates in predicting the direction of changes in the growth rate are stable, and several forecasts remain unbiased. In particular, forecasts computed from simple MiDaS with 9 and 12 lags are unbiased at all horizons and information sets assessed, and show the highest number of significant differences in forecasting ability in comparison with all other models.
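
    The distinguishing step in a MiDaS regression is the parsimonious weighting of the high-frequency indicator. A minimal sketch of exponential Almon weights aggregating nine monthly observations into one quarterly regressor is given below; the indicator values and weight parameters are illustrative, not the estimated Costa Rican model.

    ```python
    import numpy as np

    def exp_almon_weights(theta1, theta2, n_lags):
        """Normalized exponential Almon lag weights commonly used in MIDAS regressions."""
        j = np.arange(1, n_lags + 1)
        w = np.exp(theta1 * j + theta2 * j**2)
        return w / w.sum()

    # Aggregate the latest 9 monthly indicator observations into one quarterly regressor.
    monthly_indicator = np.array([1.2, 0.8, 1.0, 1.5, 1.1, 0.9, 1.3, 1.4, 1.0])  # illustrative
    w = exp_almon_weights(theta1=0.1, theta2=-0.05, n_lags=9)
    quarterly_regressor = np.dot(w, monthly_indicator[::-1])  # most recent month gets lag 1
    print("weights:", np.round(w, 3), "aggregated regressor:", round(quarterly_regressor, 3))
    ```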

  1. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    Science.gov (United States)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  2. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account-level data, including a Tobit model, a decision tree model, and Beta and fractional logit transformations. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...
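
    One common way to implement the fractional logit component in Python is a quasi-binomial GLM with a logit link and robust standard errors. The sketch below uses simulated account-level data; the covariate names and coefficients are assumptions, not the UK credit card dataset.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Illustrative account-level data; variable names are assumptions.
    rng = np.random.default_rng(3)
    n = 2_000
    df = pd.DataFrame({
        "utilisation": rng.uniform(0, 1, n),
        "months_on_book": rng.integers(6, 120, n),
        "unemployment_rate": rng.uniform(4, 10, n),
    })
    eta = (-0.5 + 1.2 * df["utilisation"] + 0.08 * df["unemployment_rate"]
           - 0.01 * df["months_on_book"])
    df["lgd"] = np.clip(1 / (1 + np.exp(-eta)) + rng.normal(0, 0.1, n), 0, 1)

    # Fractional logit: binomial-family GLM on a [0, 1] outcome with robust standard errors.
    X = sm.add_constant(df[["utilisation", "months_on_book", "unemployment_rate"]])
    res = sm.GLM(df["lgd"], X, family=sm.families.Binomial()).fit(cov_type="HC1")
    print(res.params)
    ```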

  3. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    Science.gov (United States)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method still remain challenging due to deficiencies in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly sheared turbulent premixed flames featuring strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.

  4. Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA

    Directory of Open Access Journals (Sweden)

    Lara Dutra Silva

    2017-12-01

    Full Text Available Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world’s ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species relative to their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (respectively, invasive and native trees), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presences data set. The addition of the GRF increased model fit (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.

  5. A comment on the quark mixing in the supersymmetric SU(4)xO(4) GUT model

    International Nuclear Information System (INIS)

    Ranfone, S.

    1992-08-01

    The SU(4) x O(4) and the ''flipped'' SU(5) x U(1) models seem to be the only possible Grand Unified Theories (GUTs) derivable from string theories with Kac-Moody level K=1. Naively, the SU(4) x O(4) model, at least in its minimal GUT version, is characterized by the lack of any mixing in the quark sector. In this ''Comment'' we show that, although some mixing may be generated as a consequence of large vacuum expectation values for the scalar partners of the right-handed neutrinos, it turns out to be too small by several orders of magnitude, in sharp contrast with the experimental information on the Cabibbo mixing. Our result, which therefore rules out the minimal SU(4) x O(4) GUT model, also applies to ''flipped'' SU(5) x U(1) in the case of the embedding in SO(10). (Author)

  6. A Passenger Travel Demand Model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; Jovicic, Goran

    2003-01-01

    The passenger travel model for Copenhagen is a state-of-practice nested logit model in which the sub-models - i.e. generation, distribution and mode choice models - are connected via measures of accessibility. The model includes in its structure a large set of explanatory variables at all three...... aims to provide a detailed description of the model, which can be used as a guide to the future development of similar models. Also, an application of the model in a study of road pricing in Denmark is described. This gives the reader an idea of how such a policy measure can be modelled as well.....

  7. [Home health resource utilization measures using a case-mix adjustor model].

    Science.gov (United States)

    You, Sun-Ju; Chang, Hyun-Sook

    2005-08-01

    The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received home health care more than 4 visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients had to be merged onto a 60-day payment segment. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization according to clinical severity was in order of Minimum (C0) service utilization moderate), and the lowest 97,000 won in group C2F3S1, so the former was 5.82 times higher than the latter. Resource utilization in home health care has become an issue of concern due to rising costs for home health care. The results suggest the need for more analytical attention on the utilization and expenditures for home care using a Case-Mix Adjustor Model.

  8. Modelling the multilevel structure and mixed effects of the factors influencing the energy consumption of electric vehicles

    International Nuclear Information System (INIS)

    Liu, Kai; Wang, Jiangbo; Yamamoto, Toshiyuki; Morikawa, Takayuki

    2016-01-01

    Highlights: • The impacts of driving heterogeneity on EVs’ energy efficiency are examined. • Several multilevel mixed-effects regression models are proposed and compared. • The most reasonable nested structure is extracted from the long term GPS data. • Proposed model improves the energy estimation accuracy by 7.5%. - Abstract: To improve the accuracy of estimation of the energy consumption of electric vehicles (EVs) and to enable the alleviation of range anxiety through the introduction of EV charging stations at suitable locations for the near future, multilevel mixed-effects linear regression models were used in this study to estimate the actual energy efficiency of EVs. The impacts of the heterogeneity in driving behaviour among various road environments and traffic conditions on EV energy efficiency were extracted from long-term daily trip-based energy consumption data, which were collected over 12 months from 68 in-use EVs in Aichi Prefecture in Japan. Considering the variations in energy efficiency associated with different types of EV ownership, different external environments, and different driving habits, a two-level random intercept model, three two-level mixed-effects models, and two three-level mixed-effects models were developed and compared. The most reasonable nesting structure was determined by comparing the models, which were designed with different nesting structures and different random variance component specifications, thereby revealing the potential correlations and non-constant variability of the energy consumption per kilometre (ECPK) and improving the estimation accuracy by 7.5%.

  9. Simulation of annual cycles of phytoplankton, zooplankton and nutrients using a mixed layer model coupled with a biological model

    OpenAIRE

    Troupin, Charles

    2006-01-01

    In oceanography, the mixed layer refers to the near surface part of the water column where physical and biological variables are distributed quasi homogeneously. Its depth depends on conditions at the air-sea interface (heat and freshwater fluxes, wind stress) and on the characteristics of the flow (stratification, shear), and has a strong influence on biological dynamics. The aim of this work is to model the behaviour of the mixed layer in waters situated to the south of Gr...

  10. Performance of nonlinear mixed effects models in the presence of informative dropout.

    Science.gov (United States)

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.

  11. Area Based Models of New Highway Route Growth

    OpenAIRE

    David Levinson; Wei Chen

    2007-01-01

    Empirical data and statistical models are used to answer the question of where new highway routes are most likely to be located. High-quality land-use, population distribution and highway network GIS data for the Twin Cities Metropolitan Area from 1958 to 1990 are developed for this study. The highway system is classified into three levels: Interstate highways, divided highways, and secondary highways. Binary logit models estimate the new route growth probability of divided highways and second...

  12. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    Science.gov (United States)

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    Science.gov (United States)

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.

  14. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  15. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    Science.gov (United States)

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  16. Model and measurements of linear mixing in thermal IR ground leaving radiance spectra

    Science.gov (United States)

    Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna

    2007-10-01

    Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
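
    The linear mixing idea can be written as L(λ) = Σ_i a_i ε_i B(λ, T_i), an abundance-weighted sum of emissivity-scaled Planck radiances. The sketch below evaluates such a mixture over the 8-12 µm band; the abundances, emissivities and temperatures are illustrative, and spectral emissivity variation and reflected downwelling radiance are ignored for brevity.

    ```python
    import numpy as np

    H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, light speed, Boltzmann

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance of a blackbody, W m^-2 sr^-1 m^-1."""
        return (2 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    # Thermal IR band, 8-12 micrometres.
    wl = np.linspace(8e-6, 12e-6, 200)

    # Two illustrative subpixel materials: (abundance, emissivity, temperature).
    materials = [(0.7, 0.96, 300.0),   # e.g. soil-like background
                 (0.3, 0.85, 315.0)]   # e.g. warmer man-made surface

    # Linear mixing of ground-leaving radiance across the pixel.
    mixed_glr = sum(a * eps * planck_radiance(wl, T) for a, eps, T in materials)
    print(mixed_glr[:3])
    ```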

  17. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    Science.gov (United States)

    Phillips, Charles D.

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges. PMID:26740744

  18. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    Science.gov (United States)

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.

  19. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    Directory of Open Access Journals (Sweden)

    Charles D. Phillips

    2015-01-01

    Full Text Available Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.
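
    In the same spirit (though not the authors' implementation), a regression tree capped at 24 leaves can be used to form expenditure groups from assessment items, as sketched below with synthetic data; the feature names and cost structure are assumptions.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    # Synthetic assessment data for illustration; feature names are assumptions.
    rng = np.random.default_rng(11)
    n = 2_578
    X = np.column_stack([
        rng.integers(0, 5, n),      # functional-status score
        rng.integers(0, 3, n),      # clinical-severity score
        rng.integers(0, 2, n),      # ventilator dependence (0/1)
    ])
    annual_cost = (5_000 + 4_000 * X[:, 0] + 9_000 * X[:, 1] + 25_000 * X[:, 2]
                   + rng.normal(0, 6_000, n))

    # A regression tree capped at 24 leaves plays the role of the 24 case-mix groups.
    tree = DecisionTreeRegressor(max_leaf_nodes=24, min_samples_leaf=25, random_state=0)
    tree.fit(X, annual_cost)
    print("R^2 on training data:", round(tree.score(X, annual_cost), 2))
    print(export_text(tree, feature_names=["function", "severity", "ventilator"])[:400])
    ```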

  20. Analytical model for transient fluid mixing in upper outlet plenum of an LMFBR

    International Nuclear Information System (INIS)

    Yang, J.W.; Agrawal, A.K.

    1976-01-01

    A two-zone mixing model based on the lumped-parameter approach was developed for the analysis of transient thermal response in the outlet plenum of an LMFBR. The maximum penetration of core flow is used as the criterion for dividing the sodium region into two mixing zones. The model considers the transient sodium temperature as affected by the thermal expansion of sodium, heat transfer with the cover gas, the heat capacity of different sections of metal and the addition of by-pass flow into the plenum. The results of numerical calculations indicate that the effects of flow stratification, chimney height, metal heat capacity and by-pass flow are important for the transient sodium temperature calculation. Thermal expansion of sodium and heat transfer with the cover gas do not play any significant role in the sodium temperature response.
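
    A lumped two-zone energy balance of this kind reduces to a pair of coupled ordinary differential equations for the zone temperatures. The sketch below integrates one such system; the flow split, masses, heat-transfer coefficient and core-outlet transient are illustrative assumptions, not the parameters of the referenced analysis.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Simplified two-zone lumped-parameter plenum: zone 1 receives core flow,
    # zone 2 exchanges with zone 1 and the plenum metal; values are illustrative only.
    m1, m2 = 4.0e4, 6.0e4        # sodium mass in each zone, kg
    cp = 1270.0                  # sodium specific heat, J/(kg K)
    w_core = 800.0               # core outlet flow into zone 1, kg/s
    w_mix = 150.0                # inter-zone mixing flow, kg/s
    ua_metal = 2.0e5             # heat exchange with plenum metal, W/K
    T_metal = 750.0              # metal temperature held constant, K

    def T_core(t):
        """Assumed core outlet temperature transient (step-like decay), K."""
        return 820.0 - 60.0 * (1.0 - np.exp(-t / 30.0))

    def rhs(t, T):
        T1, T2 = T
        dT1 = (w_core * cp * (T_core(t) - T1) + w_mix * cp * (T2 - T1)) / (m1 * cp)
        dT2 = (w_mix * cp * (T1 - T2) + ua_metal * (T_metal - T2)) / (m2 * cp)
        return [dT1, dT2]

    sol = solve_ivp(rhs, (0.0, 300.0), [820.0, 800.0], max_step=1.0)
    print("zone temperatures at t = 300 s:", sol.y[:, -1])
    ```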