WorldWideScience

Sample records for mixed logit model

  1. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

Full Text Available A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit, providing a flexible form for correlation among alternatives and a tractable expression; heterogeneity in alternative variances can also be addressed. Paths are compared in pairs, so that the strengths of the binary Probit can be fully exploited, and a Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution for incorporating the normal distribution in route choice with an analytical expression.
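The closed-form approach above avoids the simulation a standard mixed logit otherwise requires. For reference, the usual simulated mixed logit approximates P(i) = E_b[exp(b·x_i) / Σ_j exp(b·x_j)] by Monte Carlo draws over the random coefficient. A minimal sketch (the attribute values, coefficient mean, and standard deviation below are hypothetical, not from the paper):

```python
import math, random

def simulated_mixed_logit_prob(x, mean, sd, draws=5000, seed=0):
    """Approximate the mixed logit probability
    P(i) = E_b[ exp(b * x_i) / sum_j exp(b * x_j) ]
    for one random coefficient b ~ Normal(mean, sd), by Monte Carlo."""
    rng = random.Random(seed)
    probs = [0.0] * len(x)
    for _ in range(draws):
        b = rng.gauss(mean, sd)
        utils = [math.exp(b * xi) for xi in x]
        s = sum(utils)
        for i, u in enumerate(utils):
            probs[i] += u / s / draws
    return probs

# three routes described by a single attribute
p = simulated_mixed_logit_prob([1.0, 2.0, 3.0], mean=0.5, sd=0.3)
```

Because the coefficient is random, the resulting probabilities are averages of logit probabilities, which breaks the proportional-substitution pattern of the plain logit.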

  2. Associating Crash Avoidance Maneuvers with Driver Attributes and Accident Characteristics: A Mixed Logit Model Approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    as from the key role of the ability of drivers to perform effective corrective maneuvers for the success of automated in-vehicle warning and driver assistance systems. The analysis is conducted by means of a mixed logit model that accommodates correlations across alternatives and heteroscedasticity. Data...

  3. Mixed logit model of intended residential mobility in renovated historical blocks in China

    NARCIS (Netherlands)

    Jiang, W.; Timmermans, H.J.P.; Li, H.; Feng, T.

    2016-01-01

Using data from 8 historical blocks in China, the influence of socio-demographic characteristics and residential satisfaction on intended residential mobility is analysed. The results of a mixed logit model indicate that higher residential satisfaction will lead to a lower intention to move house,

  4. Time-varying mixed logit model for vehicle merging behavior in work zone merging areas.

    Science.gov (United States)

    Weng, Jinxian; Du, Gang; Li, Dan; Yu, Yao

    2018-08-01

This study aims to develop a time-varying mixed logit model for vehicle merging behavior in work zone merging areas during the merging implementation period, from the time a merging maneuver starts to the time it is completed. From the safety perspective, vehicle crash probability and severity between the merging vehicle and its surrounding vehicles are regarded as major factors influencing vehicle merging decisions. Model results show that the model using vehicle crash risk probability and severity could provide higher prediction accuracy than previous models using vehicle speeds and gap sizes. It is found that lead vehicle type, through lead vehicle type, through lag vehicle type, crash probability of the merging vehicle with respect to the through lag vehicle, and crash severities of the merging vehicle with respect to the through lead and lag vehicles could exhibit time-varying effects on merging behavior. One important finding is that the merging vehicle could become increasingly aggressive as time elapses in order to complete the merging maneuver as quickly as possible, even if it has a high crash risk with respect to the through lead and lag vehicles. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Investigating the Differences of Single-Vehicle and Multivehicle Accident Probability Using Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Bowen Dong

    2018-01-01

Full Text Available Road traffic accidents are believed to be associated not only with road geometric features and traffic characteristics, but also with weather conditions. To address these safety issues, it is of paramount importance to understand how these factors affect the occurrence of crashes. Existing studies have suggested that the mechanisms of single-vehicle (SV) accidents and multivehicle (MV) accidents can be very different. Few studies have examined the difference between SV and MV accident probability while also addressing unobserved heterogeneity. To investigate the different contributing factors for SV and MV accidents, a mixed logit model is employed using disaggregated data with the response variable categorized as no accident, SV accident, or MV accident. The results indicate that, in addition to speed gap, length of segment, and wet road surface, which are significant for both SV and MV accidents, most other variables are significant only for MV accidents. Traffic, road, and surface characteristics are the main factors influencing SV and MV accident probability. Hourly traffic volume, inside shoulder width, and wet road surface are found to produce statistically significant random parameters; their effects on the probability of SV and MV accidents vary across road segments.

  6. Associating crash avoidance maneuvers with driver attributes and accident characteristics: a mixed logit model approach.

    Science.gov (United States)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

The current study focuses on the propensity of drivers to engage in crash avoidance maneuvers in relation to driver attributes, critical events, crash characteristics, vehicles involved, road characteristics, and environmental conditions. The importance of avoidance maneuvers derives from the key role of proactive and state-aware road users within the concept of sustainable safety systems, as well as from the key role of effective corrective maneuvers in the success of automated in-vehicle warning and driver assistance systems. The analysis is conducted by means of a mixed logit model that represents the selection among 5 emergency lateral and speed control maneuvers (i.e., "no avoidance maneuvers," "braking," "steering," "braking and steering," and "other maneuvers") while accommodating correlations across maneuvers and heteroscedasticity. Data for the analysis were retrieved from the General Estimates System (GES) crash database for the year 2009 by considering drivers for whom crash avoidance maneuvers are known. The results show that (1) the nature of the critical event that made the crash imminent greatly influences the choice of crash avoidance maneuvers, (2) women and elderly drivers have a relatively lower propensity to conduct crash avoidance maneuvers, (3) drowsiness and fatigue have a greater negative marginal effect on the tendency to engage in crash avoidance maneuvers than alcohol and drug consumption, (4) difficult road conditions increase the propensity to perform crash avoidance maneuvers, and (5) visual obstruction and artificial illumination decrease the probability of carrying out crash avoidance maneuvers. The results emphasize the need for public awareness campaigns to promote a safe driving style among senior drivers and to warn that the risks of driving under fatigue and distraction are comparable to the risks of driving under the influence of alcohol and drugs. 
Moreover, the results suggest the need to educate drivers about hazard perception, designing

  7. Valuing Non-market Benefits of Rehabilitation of Hydrologic Cycle Improvements in the Anyangcheon Watershed: Using Mixed Logit Models

    Science.gov (United States)

    Yoo, J.; Kong, K.

    2010-12-01

This research presents the findings from a discrete-choice experiment designed to estimate the economic benefits associated with the Anyangcheon watershed improvements in the Republic of Korea. The Anyangcheon watershed has suffered from streamflow depletion and poor stream quality, which often negatively affect instream and near-stream ecologic integrity, as well as water supply. Such distortions in the hydrologic cycle mainly result from a rapid increase in impermeable area due to urbanization, decreases in baseflow runoff due to groundwater pumping, and reduced precipitation inputs driven by climate forcing. In addition, combined sewer overflows and increased non-point source pollution from urban regions degrade water quality. The appeal of choice experiments (CE) in economic analysis is that they are based on random utility theory (McFadden, 1974; Ben-Akiva and Lerman, 1985). In contrast to the contingent valuation method (CVM), which asks people to choose between a base case and a specific alternative, CE asks people to choose between cases that are described by attributes. The attributes of this study were selected from hydrologic vulnerability components that represent flood damage possibility, instream flow depletion, water quality deterioration, form of the watershed, and tax. Their levels were divided into three grades, including the status quo; two grades represented the ideal conditions. The scenarios were constructed from a 3^5 orthogonal main-effects design, which resulted in twenty-seven choice sets; nine different choice scenarios were presented to each respondent. The most popular choice model in use is the conditional logit (CNL), which provides closed-form choice probability calculation. The shortcoming of the CNL is its independence of irrelevant alternatives (IIA) property. In this paper, the mixed logit (ML) is applied to allow the coefficients to vary, capturing random taste heterogeneity in the population. 
The mixed logit model (with normal distributions for the attributes) fit the
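The IIA shortcoming of the conditional logit mentioned above follows directly from its closed form: the odds between any two alternatives do not depend on what else is in the choice set. A minimal sketch demonstrating this (the utility values are hypothetical):

```python
import math

def cnl_probs(v):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    e = [math.exp(vi) for vi in v]
    s = sum(e)
    return [ei / s for ei in e]

# IIA: the odds P(1)/P(2) are unchanged when a third alternative enters.
p2 = cnl_probs([1.0, 2.0])
p3 = cnl_probs([1.0, 2.0, 1.5])
ratio_before = p2[0] / p2[1]
ratio_after = p3[0] / p3[1]
```

The mixed logit relaxes this by averaging over taste heterogeneity, so substitution patterns can differ across pairs of alternatives.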

  8. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    Science.gov (United States)

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models, developed at large temporal scales, struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze the hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature, and the chemically-wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, the weekend indicator, November indicator, low speed limit, and long remaining service life of rutting indicator are found to increase crash likelihood, while the 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  9. Associating crash avoidance maneuvers with driver attributes and accident characteristics: a mixed logit model approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

Objective: The current study focuses on the propensity of drivers to engage in crash avoidance maneuvers in relation to driver attributes, critical events, crash characteristics, vehicles involved, road characteristics, and environmental conditions. The importance of avoidance maneuvers derives from the key role of proactive and state-aware road users within the concept of sustainable safety systems, as well as from the key role of effective corrective maneuvers in the success of automated in-vehicle warning and driver assistance systems. Methods: The analysis is conducted by means of a mixed… …about the risks of driving under fatigue and distraction being comparable to the risks of driving under the influence of alcohol and drugs. Moreover, the results suggest the need to educate drivers about hazard perception, designing a forgiving infrastructure within a sustainable safety systems…

  10. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  11. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
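The kind of post-estimation work the article recommends, predicted probabilities and marginal effects from an MLM, can be sketched as follows (the coefficients below are hypothetical, and the marginal effect is computed by finite differences rather than the analytical formula):

```python
import math

def mnl_probs(x, betas):
    """Multinomial logit with utilities V_k = b0_k + b1_k * x for each
    non-base outcome (the base outcome has V = 0). Returns probabilities."""
    v = [0.0] + [b0 + b1 * x for (b0, b1) in betas]
    e = [math.exp(vk) for vk in v]
    s = sum(e)
    return [ek / s for ek in e]

def marginal_effect(x, betas, k, eps=1e-6):
    """Finite-difference marginal effect of x on P(outcome k)."""
    hi = mnl_probs(x + eps, betas)[k]
    lo = mnl_probs(x - eps, betas)[k]
    return (hi - lo) / (2 * eps)

betas = [(-0.5, 0.8), (0.2, -0.3)]   # hypothetical coefficients, base outcome first
probs = mnl_probs(1.0, betas)
me = marginal_effect(1.0, betas, k=1)
```

Note that in an MLM the sign of a coefficient need not match the sign of the marginal effect, which is one reason the article stresses computing predicted probabilities rather than reading coefficients directly.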

  12. Spatial age-length key modelling using continuation ratio logits

    DEFF Research Database (Denmark)

    Berg, Casper W.; Kristensen, Kasper

    2012-01-01

…the so-called age-length key (ALK) is then used to obtain the age distribution. Regional differences in ALKs are not uncommon, but stratification is often problematic due to a small number of samples. Here, we combine generalized additive modelling with continuation ratio logits to model the probability of age…
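Continuation ratio logits model P(class = k | class ≥ k) for successive ordered classes; unconditional class proportions follow by chaining these conditional probabilities. A minimal sketch, assuming hypothetical fitted logits (the paper's GAM-based spatial smoothing is not shown):

```python
import math

def cr_to_probs(logits):
    """Convert continuation-ratio logits, where logits[k] models
    P(class = k | class >= k), into unconditional class probabilities.
    The final class absorbs the remaining probability mass."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    probs, remaining = [], 1.0
    for z in logits:
        p = logistic(z) * remaining
        probs.append(p)
        remaining -= p
    probs.append(remaining)
    return probs

# hypothetical fitted logits for four age classes
p = cr_to_probs([0.2, -0.1, 0.5])
```

This construction guarantees a valid probability distribution over age classes regardless of the logit values, which is what makes it convenient to combine with smooth spatial covariate effects.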

  13. STAS and Logit Modeling of Advertising and Promotion Effects

    DEFF Research Database (Denmark)

    Hansen, Flemming; Yssing Hansen, Lotte; Grønholdt, Lars

    2002-01-01

This paper describes the preliminary studies of the effect of advertising and promotion on purchases using the British single-source database Adlab. STAS and logit modeling are the two measures studied. Results from the two measures have been compared to determine the extent to which they give…

  14. A nested recursive logit model for route choice analysis

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2015-01-01

We propose a route choice model that relaxes the independence from irrelevant alternatives property of the logit model by allowing scale parameters to be link specific. Similar to the recursive logit (RL) model proposed by Fosgerau et al. (2013), the choice of path is modeled as a sequence of link choices and the model does not require any sampling of choice sets. Furthermore, the model can be consistently estimated and efficiently used for prediction. A key challenge lies in the computation of the value functions, i.e. the expected maximum utility from any position in the network to a destination. The value functions are the solution to a system of non-linear equations. We propose an iterative method with dynamic accuracy that allows these systems to be solved efficiently. We report estimation results and a cross-validation study for a real network. The results show that the NRL model yields sensible…

  15. Total, Direct, and Indirect Effects in Logit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt; Holm, Anders; Breen, Richard

    It has long been believed that the decomposition of the total effect of one variable on another into direct and indirect effects, while feasible in linear models, is not possible in non-linear probability models such as the logit and probit. In this paper we present a new and simple method...... average partial effects, as defined by Wooldridge (2002). We present the method graphically and illustrate it using the National Educational Longitudinal Study of 1988...

  16. Efficiency Loss of Mixed Equilibrium Associated with Altruistic Users and Logit-based Stochastic Users in Transportation Network

    Directory of Open Access Journals (Sweden)

    Xiao-Jun Yu

    2014-02-01

Full Text Available The efficiency loss of mixed equilibrium associated with two categories of users is investigated in this paper. The first category of users are altruistic users (AU), who share the same altruism coefficient and try to minimize their own perceived cost, assumed to be a linear combination of a selfish component and an altruistic component. The second category of users are Logit-based stochastic users (LSU), who choose their route according to the Logit-based stochastic user equilibrium (SUE) principle. A variational inequality (VI) model is used to formulate the mixed route choice behaviours associated with AU and LSU. The efficiency loss caused by the two categories of users is analytically derived, and its relations to some network parameters are discussed. Numerical tests validate the analytical results, which include results in the existing literature as special cases.

  17. Interpreting Marginal Effects in the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2014-01-01

This paper presents the challenges when researchers interpret results about relationships between variables from discrete choice models with multiple outcomes. The recommended approach is demonstrated by testing predictions from transaction cost theory on a sample of 246 Scandinavian firms that have entered foreign markets. Through the application of a multinomial logit model, careful analysis of the marginal effects is performed through graphical representations, marginal effects at the mean, average marginal effects and elasticities. I show that increasing cultural distance is associated with a substantial increase in the probability of entering a foreign market using a joint venture, while increases in the unpredictability in the host country environment are associated with a lower probability of wholly owned subsidiaries and a higher probability of exporting entries.

  18. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive measures is quite important; thus, the prediction of crises and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios of 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is constructed by scaling the level of risk using Altman's Z-score. The result is a model that can serve as an early warning system and predict financial distress.
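An ordered logit maps a single latent index into ordered categories via cutpoints; each category probability is a difference of adjacent cumulative logistic probabilities. A minimal sketch with a hypothetical index and thresholds (three risk levels, not the study's estimates):

```python
import math

def ordered_logit_probs(xb, cuts):
    """Ordered logit: P(y <= k) = logistic(cut_k - x'b) for increasing
    cutpoints; category probabilities are differences of adjacent
    cumulative probabilities."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - xb) for c in cuts] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# hypothetical linear index x'b and two thresholds => three risk levels
p = ordered_logit_probs(0.4, cuts=[-1.0, 1.0])
```

As long as the cutpoints are increasing, the construction guarantees non-negative probabilities that sum to one, which is why the ordering of risk levels can be exploited rather than treated as unordered categories.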

  19. A mixed logit analysis of two-vehicle crash severities involving a motorcycle.

    Science.gov (United States)

    Shaheed, Mohammad Saad B; Gkritza, Konstantina; Zhang, Wei; Hans, Zachary

    2013-12-01

Using motorcycle crash data for Iowa from 2001 to 2008, this paper estimates a mixed logit model to investigate the factors that affect crash severity outcomes in a collision between a motorcycle and another vehicle. These include crash-specific factors (such as manner of collision, and the actions of the motorcycle rider and of the non-motorcycle driver and vehicle), roadway and environmental conditions, location and time, and the attributes of the motorcycle rider and of the non-motorcycle driver and vehicle. The methodological approach allows the parameters to vary across observations, as opposed to a single parameter representing all observations. Our results showed non-uniform effects of rear-end collisions on minor injury crashes, as well as of roadway speed limits greater than or equal to 55 mph, urban areas, the summer riding season, and motorcyclist gender on low-severity crashes. We also found significant effects of roadway surface condition, clear vision (not obscured by moving vehicles, trees, buildings, or other objects), light conditions, speed limit, and helmet use on severe injury outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Essays on pricing dynamics, price dispersion, and nested logit modelling

    Science.gov (United States)

    Verlinda, Jeremy Alan

    The body of this dissertation comprises three standalone essays, presented in three respective chapters. Chapter One explores the possibility that local market power contributes to the asymmetric relationship observed between wholesale costs and retail prices in gasoline markets. I exploit an original data set of weekly gas station prices in Southern California from September 2002 to May 2003, and take advantage of highly detailed station and local market-level characteristics to determine the extent to which spatial differentiation influences price-response asymmetry. I find that brand identity, proximity to rival stations, bundling and advertising, operation type, and local market features and demographics each influence a station's predicted asymmetric relationship between prices and wholesale costs. Chapter Two extends the existing literature on the effect of market structure on price dispersion in airline fares by modeling the effect at the disaggregate ticket level. Whereas past studies rely on aggregate measures of price dispersion such as the Gini coefficient or the standard deviation of fares, this paper estimates the entire empirical distribution of airline fares and documents how the shape of the distribution is determined by market structure. Specifically, I find that monopoly markets favor a wider distribution of fares with more mass in the tails while duopoly and competitive markets exhibit a tighter fare distribution. These findings indicate that the dispersion of airline fares may result from the efforts of airlines to practice second-degree price discrimination. Chapter Three adopts a Bayesian approach to the problem of tree structure specification in nested logit modelling, which requires a heavy computational burden in calculating marginal likelihoods. I compare two different techniques for estimating marginal likelihoods: (1) the Laplace approximation, and (2) reversible jump MCMC. I apply the techniques to both a simulated and a travel mode

  1. Street Choice Logit Model for Visitors in Shopping Districts

    Directory of Open Access Journals (Sweden)

    Ko Kawada

    2014-07-01

Full Text Available In this study, we propose two models for predicting people’s activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive).

  2. Street Choice Logit Model for Visitors in Shopping Districts

    Science.gov (United States)

    Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya

    2014-01-01

    In this study, we propose two models for predicting people’s activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive). PMID:25379274

  3. Unobserved Heterogeneity in the Binary Logit Model with Cross-Sectional Data and Short Panels

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier; Pedersen, Morten

This paper proposes a new approach to dealing with unobserved heterogeneity in applied research using the binary logit model with cross-sectional data and short panels. Unobserved heterogeneity is particularly important in non-linear regression models such as the binary logit model because, unlike in linear regression models, estimates of the effects of observed independent variables are biased even when omitted independent variables are uncorrelated with the observed independent variables. We propose an extension of the binary logit model based on a finite mixture approach in which we conceptualize…

  4. Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Karlson, Kristian Bernt; Holm, Anders

    2018-01-01

Methods textbooks in sociology and other social sciences routinely recommend the use of the logit or probit model when an outcome variable is binary, an ordered logit or ordered probit when it is ordinal, and a multinomial logit when it has more than two categories. But these methodological guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show…

  5. An integrated Markov decision process and nested logit consumer response model of air ticket pricing

    NARCIS (Netherlands)

    Lu, J.; Feng, T.; Timmermans, H.P.J.; Yang, Z.

    2017-01-01

The paper attempts to propose an optimal air ticket pricing model over the booking horizon by taking into account passengers' purchasing behavior for air tickets. A Markov decision process incorporating a nested logit consumer response model is established to model the dynamic pricing process.
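A generic two-level nested logit, the consumer response component named above, computes nest-level probabilities from inclusive values and within-nest probabilities from scaled utilities. A minimal sketch with hypothetical utilities and nesting (not the paper's specification):

```python
import math

def nested_logit_probs(nests, mu):
    """Two-level nested logit with a common within-nest scale mu (0 < mu <= 1).
    nests: list of lists of alternative utilities.
    Returns, per nest, the unconditional probability of each alternative:
    P(i) = P(nest) * P(i | nest), with P(nest) driven by the inclusive value."""
    ivs = [mu * math.log(sum(math.exp(v / mu) for v in nest)) for nest in nests]
    denom = sum(math.exp(iv) for iv in ivs)
    probs = []
    for nest, iv in zip(nests, ivs):
        p_nest = math.exp(iv) / denom
        within = sum(math.exp(v / mu) for v in nest)
        probs.append([p_nest * math.exp(v / mu) / within for v in nest])
    return probs

# e.g. two fare classes grouped in one nest, a competing option in another
p = nested_logit_probs([[1.0, 1.2], [0.8]], mu=0.5)
```

With mu = 1 this collapses to the plain multinomial logit; mu < 1 makes alternatives inside a nest closer substitutes for one another than for alternatives outside it.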

  6. DETEKSI DINI KRISIS PERBANKAN INDONESIA: IDENTIFIKASI VARIABEL MAKRO DENGAN MODEL LOGIT

    Directory of Open Access Journals (Sweden)

    Shanty Oktavilia

    2012-01-01

Full Text Available Indonesia suffered from banking crises several times, the effects of the worst crisis occurring in 1997. The Thai baht, which plunged 27.8% in the third quarter of 1997, was the initial problem that triggered the Asian currency crisis. This study analyzes the influence of macro indicators as an early warning system, using a logit econometric model to predict the possibility of a banking crisis occurring in Indonesia. Keywords: banking crisis, macroeconomic indicators, EWS-logit model

  7. Analysis of RIA standard curve by log-logistic and cubic log-logit models

    International Nuclear Information System (INIS)

    Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo

    1981-01-01

In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit fits were written in BASIC on a P-6060 personal computer (Olivetti). The iterative least squares method with Taylor series expansion was applied for non-linear estimation of the logistic and log-logistic models. Here ''log-logistic'' represents Y = (a - d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y), or 1/σ² were used in the logistic or log-logistic fits, and either Y²(1 - Y)², Y²(1 - Y)²/var(Y), or Y²(1 - Y)²/σ² were used in the quadratic or cubic log-logit fits. The term var(Y) represents the square of the pure error, and σ² represents the estimated variance calculated using the equation log(σ² + 1) = log(A) + J log(Y). As indicators of goodness-of-fit, MSL/S_e², CMD% and WRV (see text) were used. Better regression was obtained for alpha-fetoprotein by log-logistic than by logistic fitting. The cortisol standard curve was much better fitted with cubic log-logit than with quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with log-logistic analysis instead of 8% with logistic analysis. The predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. The importance of selecting good models in RIA data processing is stressed in conjunction with the intrinsic precision of the radioimmunoassay system indicated by predicted precision. (author)
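The four-parameter log-logistic quoted above can be evaluated directly; a minimal sketch with hypothetical standard-curve parameters (the iterative Taylor-series least squares fitting used in the paper is not shown):

```python
import math

def log_logistic(x, a, b, c, d):
    """Four-parameter log-logistic from the abstract:
    Y = (a - d) / (1 + (log(x)/c)**b) + d
    (valid here for x > 1 so that log(x)/c > 0 with c > 0)."""
    return (a - d) / (1.0 + (math.log(x) / c) ** b) + d

# hypothetical parameters: a ~ response at low dose, d ~ response at
# high dose, c scales the dose axis, b controls steepness
a, b, c, d = 1.0, 2.0, 1.0, 0.1
y_low = log_logistic(1.05, a, b, c, d)
y_high = log_logistic(50.0, a, b, c, d)
```

The sigmoid decreases from a toward d as dose increases, the typical shape of a competitive-binding RIA standard curve.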

  8. Another Look at the Method of Y-Standardization in Logit and Probit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    2015-01-01

    This paper takes another look at the derivation of the method of Y-standardization used in sociological analysis involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than...

  9. Logit Estimation of a Gravity Model of the College Enrollment Decision.

    Science.gov (United States)

    Leppel, Karen

    1993-01-01

    A study investigated the factors influencing students' decisions about attending a college to which they had been admitted. Logit analysis confirmed gravity model predictions that geographic distance and student ability would most influence the enrollment decision and found other variables, although affecting earlier stages of decision making, did…

  10. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurements of on-road vehicles. In this regard, the research develops a failure probability model of vehicle emission tests for passenger cars utilizing a binomial logit model. The model focuses on failure of the CO and HC emission tests for the gasoline car category and the opacity emission test for the diesel-fuel car category as dependent variables, while vehicle age, engine size, brand and type of the cars are the independent variables. In order to imp...

  11. How bicycle level of traffic stress correlate with reported cyclist accidents injury severities: A geospatial and mixed logit analysis.

    Science.gov (United States)

    Chen, Chen; Anderson, Jason C; Wang, Haizhong; Wang, Yinhai; Vogt, Rachel; Hernandez, Salvador

    2017-11-01

    Transportation agencies need efficient methods to determine how to reduce bicycle accidents while promoting cycling activities and prioritizing safety improvement investments. Many studies have used standalone methods, such as level of traffic stress (LTS) and bicycle level of service (BLOS), to better understand bicycle mode share and network connectivity for a region. However, in most cases, other studies rely on crash severity models to explain what variables contribute to the severity of bicycle-related crashes. This research uniquely correlates bicycle LTS with reported bicycle crash locations for four cities in New Hampshire through geospatial mapping. LTS measurements and crash locations are compared visually using a GIS framework. Next, a bicycle injury severity model that incorporates LTS measurements is created through a mixed logit modeling framework. Results of the visual analysis show some geospatial correlation between higher LTS roads and "Injury" type bicycle crashes. It was determined, statistically, that LTS has an effect on the severity level of bicycle crashes and high LTS can have varying effects on severity outcome. However, it is recommended that further analyses be conducted to better understand the statistical significance and effect of LTS on injury severity. As such, this research will validate the use of LTS as a proxy for safety risk regardless of the recorded bicycle crash history. This research will help identify the clustering patterns of bicycle crashes on high-risk corridors and, therefore, assist with bicycle route planning and policy making. This paper also suggests low-cost countermeasures or treatments that can be implemented to address high-risk areas. Specifically, with the goal of providing safer routes for cyclists, such countermeasures or treatments have the potential to substantially reduce the number of fatalities and severe injuries. Published by Elsevier Ltd.

  12. Analyzing Korean consumers’ latent preferences for electricity generation sources with a hierarchical Bayesian logit model in a discrete choice experiment

    International Nuclear Information System (INIS)

    Byun, Hyunsuk; Lee, Chul-Yong

    2017-01-01

    Generally, consumers use electricity without considering the source the electricity was generated from. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preferences for electricity generation sources. The present study estimates Korean consumers’ marginal utility, and an appropriate generation mix is derived using the hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity as the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.

  13. An Empirical Analysis of Television Commercial Ratings in Alternative Competitive Environments Using Multinomial Logit Model

    Directory of Open Access Journals (Sweden)

    Dilek ALTAŞ

    2013-05-01

    Full Text Available Watching the commercials depends on the choice of the viewer. Most television viewing takes place during “Prime-Time”; unfortunately, many viewers opt to zap to other channels when commercials start. The television viewers’ demographic characteristics may indicate the likelihood of the zapping frequency. Analysis using a Multinomial Logit Model indicates how effective the demographic variables are in the watching rate of the first minute of television commercials.

  14. Analysis of Internet Usage Intensity in Iraq: An Ordered Logit Model

    OpenAIRE

    Almas Heshmati; Firas H. Al-Hammadany; Ashraf Bany-Mohammed

    2013-01-01

    Intensity of Internet use is significantly influenced by government policies, people’s levels of income, education, employment and general development and economic conditions. Iraq has very low Internet usage levels compared to the region and the world. This study uses an ordered logit model to analyse the intensity of Internet use in Iraq. The results showed that economic reasons (Internet cost and income level) were the key causes of low usage intensity rates. About 68% of the population ...

  15. Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory

    Science.gov (United States)

    Zhuang, Qian; Di, Zengru; Wu, Jinshan

    2014-01-01

    Using the Logit quantal response form as the response function in each step, the original definition of static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain as the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibriums. This extension resembles the extension from static Nash equilibriums (NEs) to evolutionary stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we determine that SQREs have better predictive power than QREs and NEs. PMID:25157502
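    The iterative process in the record above, repeatedly applying a logit quantal response to the opponent's current mixed strategy, can be sketched for a simple symmetric 2×2 coordination game. The payoff matrix and the rationality parameter lam below are hypothetical, not taken from the paper's experiments:

    ```python
    import math

    def logit_response(payoffs, opp_mix, lam):
        """Logit quantal response: choose action i with prob proportional to exp(lam * EU_i)."""
        eu = [sum(u * q for u, q in zip(row, opp_mix)) for row in payoffs]
        m = max(eu)                                  # stabilise the softmax
        z = [math.exp(lam * (e - m)) for e in eu]
        s = sum(z)
        return [zi / s for zi in z]

    # Symmetric 2x2 coordination game; both players share the same mix by symmetry.
    A = [[2.0, 0.0], [0.0, 1.0]]
    lam = 4.0
    p = [0.5, 0.5]
    for _ in range(200):                             # iterate the logit response map
        p = logit_response(A, p, lam)

    # At a stable fixed point (an SQRE) the mix reproduces itself under the map.
    p_next = logit_response(A, p, lam)
    assert all(abs(x - y) < 1e-9 for x, y in zip(p, p_next))
    ```

    The long-run limit of this iteration is exactly a stable quantal response equilibrium in the abstract's terminology; an unstable QRE would be a fixed point that this dynamic moves away from.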

  16. The importance of examining movements within the US health care system: sequential logit modeling

    Directory of Open Access Journals (Sweden)

    Lee Chioun

    2010-09-01

    Full Text Available Abstract Background Utilization of specialty care may not be a discrete, isolated behavior but rather, a behavior of sequential movements within the health care system. Although patients may often visit their primary care physician and receive a referral before utilizing specialty care, prior studies have underestimated the importance of accounting for these sequential movements. Methods The sample included 6,772 adults aged 18 years and older who participated in the 2001 Survey on Disparities in Quality of Care, sponsored by the Commonwealth Fund. A sequential logit model was used to account for movement in all stages of utilization: use of any health services (i.e., first stage), having a perceived need for specialty care (i.e., second stage), and utilization of specialty care (i.e., third stage). In the sequential logit model, all stages are nested within the previous stage. Results Gender, race/ethnicity, education and poor health had significant explanatory effects with regard to use of any health services and having a perceived need for specialty care; however, racial/ethnic, gender, and educational disparities were not present in utilization of specialty care. After controlling for use of any health services and having a perceived need for specialty care, inability to pay for specialty care via income (AOR = 1.334, CI = 1.10 to 1.62) or health insurance (unstable insurance: AOR = 0.26, CI = 0.14 to 0.48; no insurance: AOR = 0.12, CI = 0.07 to 0.20) were significant barriers to utilization of specialty care. Conclusions Use of a sequential logit model to examine utilization of specialty care resulted in a detailed representation of utilization behaviors and the patient characteristics that impact these behaviors at all stages within the health care system. After controlling for sequential movements within the health care system, the biggest barrier to utilizing specialty care is the inability to pay, while racial, gender, and educational disparities
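    The sequential structure described above, with each stage nested within the previous one, amounts to multiplying stage-wise logistic probabilities. A minimal sketch with entirely hypothetical coefficients and covariates (not those estimated in the study):

    ```python
    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical stage coefficients: (intercept, insured 0/1, high income 0/1).
    stages = {
        "any_care":  (0.8,  1.2, 0.4),   # stage 1: used any health services
        "perceived": (-0.5, 0.3, 0.2),   # stage 2: perceived need, given stage 1
        "specialty": (-1.0, 1.5, 0.6),   # stage 3: specialty use, given stage 2
    }

    def stage_probs(insured, high_income):
        x = (1.0, insured, high_income)
        return [logistic(sum(b * v for b, v in zip(beta, x)))
                for beta in stages.values()]

    # Unconditional probability of specialty use = product over the nested stages.
    p1, p2, p3 = stage_probs(insured=1, high_income=0)
    p_specialty = p1 * p2 * p3
    assert 0.0 < p_specialty < min(p1, p2, p3)
    ```

    Because each stage conditions on passing the one before, the unconditional probability of reaching specialty care is always smaller than any single stage probability, which is why ignoring the earlier movements understates barriers at the final stage.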

  17. Determination of the Factors Influencing Store Preference in Erzurum by a Multinomial Logit Model

    Directory of Open Access Journals (Sweden)

    Hüseyin ÖZER

    2008-12-01

    Full Text Available The main objective of this study is to determine the factors influencing store preference of store customers in Erzurum, in terms of some characteristics of the store and its products and the customers’ demographic characteristics (sex, age, marital status, level of education and income level). In order to carry out this objective, a Pearson chi-square test is applied to determine whether there is a relationship between store preference and customer, store, and product characteristics, and a multinomial logit model is fitted by the stepwise regression method to the cross-section data compiled from a questionnaire applied to 384 store customers in the center of Erzurum province. According to the model estimation and test results, the variables of marital status (married), education (primary) and cheapness (unimportant) for Migros; education (middle) for Özmar; and marital status (married) for the other stores are determined to be statistically significant at the 5 percent level.

  18. Airport Choice in Sao Paulo Metropolitan Area: An Application of the Conditional Logit Model

    Science.gov (United States)

    Moreno, Marcelo Baena; Muller, Carlos

    2003-01-01

    Using the conditional LOGIT model, this paper addresses airport choice in the Sao Paulo Metropolitan Area. In this region, Guarulhos International Airport (GRU) and Congonhas Airport (CGH) compete for passengers flying to several domestic destinations. The airport choice is believed to be a result of the tradeoff passengers perform considering airport access characteristics, airline level-of-service characteristics and passenger experience with the analyzed airports. It was found that access time to the airports explains the airport choice better than access distance, whereas direct flight frequencies give a better explanation of the airport choice than the indirect (connections and stops) and total (direct plus indirect) flight frequencies. Out of 15 tested variables, passenger experience with the analyzed airports was the variable that best explained the airport choice in the region. Model specifications considering 1, 2 or 3 variables were tested. The model specification most adjusted to the observed data considered access time, direct flight frequencies in the travel period (morning or afternoon peak) and passenger experience with the analyzed airports. The influence of these variables was therefore analyzed across market segments according to departure airport and flight duration criteria. The choice of GRU (located neighboring Sao Paulo city) is not well explained by the rationality of access time economy and the increase in the supply of direct flight frequencies, while the choice of CGH (located inside Sao Paulo city) is. Access time was found to be more important to passengers flying shorter distances, while direct flight frequencies in the travel period were more significant to those flying longer distances. Keywords: Airport choice, Multiple airport region, Conditional LOGIT model, Access time, Flight frequencies, Passenger experience with the analyzed airports, Transportation planning

  19. Risk factors associated with bus accident severity in the United States: A generalized ordered logit model

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    Introduction: Recent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011. Method: The current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005–2009. Results: Results show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because…

  20. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  1. Assessment of Poisson, logit, and linear models for genetic analysis of clinical mastitis in Norwegian Red cows.

    Science.gov (United States)

    Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B

    2009-02-01

    Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ..). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in Poisson and 0.007 in the linear model. For CM the between-sire variance was 0.093 in logit and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for model for CM was 3.7 in both logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables had a better performance when predicting diseased animals and also had a similar performance between them. 
Logit and linear models for CM had better predictive ability for healthy

  2. Analysis of Salmonella sp bacterial contamination on Vannamei Shrimp using binary logit model approach

    Science.gov (United States)

    Oktaviana, P. P.; Fithriasari, K.

    2018-04-01

    Most Indonesian citizens consume vannamei shrimp as food, and vannamei shrimp is also one of Indonesia's mainstay export commodities. Vannamei shrimp in ponds and markets can be contaminated by Salmonella sp bacteria, which endanger human health. Salmonella sp bacterial contamination of vannamei shrimp can be affected by many factors. This study is intended to identify the factors that supposedly influence Salmonella sp bacterial contamination of vannamei shrimp. The researchers used the testing result of Salmonella sp bacterial contamination of vannamei shrimp as the response variable. This response variable has two categories: 0 = the test indicates that there is no Salmonella sp on the vannamei shrimp; 1 = the test indicates that there is Salmonella sp on the vannamei shrimp. There are four factors that supposedly influence Salmonella sp bacterial contamination of vannamei shrimp: the testing result of Salmonella sp bacterial contamination of farmer hand swabs; the subdistrict of the vannamei shrimp ponds; the fish processing unit supplied by; and the pond area in hectares. These four factors were used as predictor variables. A binary logit model approach was used, in accordance with the two-category response variable. The analysis indicates that the predictor variables that significantly affect Salmonella sp bacterial contamination of vannamei shrimp are the testing result of Salmonella sp bacterial contamination of farmer hand swabs and the subdistrict of the vannamei shrimp ponds.
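    A binary logit of the kind used in this record can be estimated with a few lines of Newton-Raphson. The sketch below fits the model P(y=1|x) = 1/(1+exp(-(b0 + b1·x))) to synthetic data generated from assumed coefficients (-1, 2); it is not the study's shrimp data:

    ```python
    import math, random

    def fit_binary_logit(xs, ys, iters=30):
        """Newton-Raphson MLE for P(y=1|x) = 1/(1+exp(-(b0 + b1*x)))."""
        b0 = b1 = 0.0
        for _ in range(iters):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for x, y in zip(xs, ys):
                p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
                w = p * (1.0 - p)
                g0 += y - p;  g1 += (y - p) * x                  # score vector
                h00 += w;     h01 += w * x;  h11 += w * x * x    # Hessian entries
            det = h00 * h11 - h01 * h01
            b0 += ( h11 * g0 - h01 * g1) / det                   # solve H d = g (2x2)
            b1 += (-h01 * g0 + h00 * g1) / det
        return b0, b1

    # Synthetic data from a known logit with hypothetical coefficients (-1, 2).
    random.seed(0)
    xs = [random.gauss(0.0, 1.0) for _ in range(5000)]
    ys = [1.0 if random.random() < 1.0 / (1.0 + math.exp(-(-1.0 + 2.0 * x))) else 0.0
          for x in xs]
    b0, b1 = fit_binary_logit(xs, ys)
    assert abs(b0 + 1.0) < 0.2 and abs(b1 - 2.0) < 0.2           # roughly recovers truth
    ```

    With categorical predictors such as subdistrict, each category would enter as a dummy column, but the same Newton iteration applies.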

  3. A Subpath-based Logit Model to Capture the Correlation of Routes

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2016-06-01

    Full Text Available A subpath-based methodology is proposed to capture travellers’ route choice behaviours and their perceptual correlation of routes, because the original link-based style may not be suitable in application: (1) travellers do not process road network information and construct the chosen route in a link-by-link style; (2) observations from questionnaires and GPS data, however, are not always link-specific. Subpaths are defined as important portions of the route, such as major roads and landmarks. The cross-nested Logit (CNL) structure is used for its tractable closed form and its capability to explicitly capture the correlation of routes. Nests represent subpaths rather than links, so that the number of nests is significantly reduced. Moreover, the proposed method simplifies the original link-based CNL model and therefore alleviates the estimation and computation difficulties. The estimation and forecast validation with real data are presented, and the results suggest that the new method is practical.
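    The cross-nested Logit probabilities behind this record have a closed form: each route's probability is a nest-probability-weighted sum of within-nest conditional probabilities. A minimal sketch with hypothetical utilities, allocation parameters and nest scales (not the paper's estimates):

    ```python
    import math

    def cnl_probs(V, alpha, mu):
        """Cross-nested logit: alpha[j][m] allocates route j to nest m (rows sum to 1),
        mu[m] in (0, 1] is the scale parameter of nest m."""
        J, M = len(V), len(mu)
        inner = [[(alpha[j][m] * math.exp(V[j])) ** (1.0 / mu[m]) for m in range(M)]
                 for j in range(J)]
        denom = [sum(inner[j][m] for j in range(J)) for m in range(M)]
        nest_term = [denom[m] ** mu[m] for m in range(M)]
        total = sum(nest_term)
        return [sum((nest_term[m] / total) * (inner[j][m] / denom[m]) for m in range(M))
                for j in range(J)]

    # Three routes, two subpath "nests"; the middle route shares a subpath with each.
    V = [0.2, 0.1, 0.0]
    alpha = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
    mu = [0.5, 0.5]
    p = cnl_probs(V, alpha, mu)
    mnl = [math.exp(v) for v in V]
    mnl = [e / sum(mnl) for e in mnl]

    assert abs(sum(p) - 1.0) < 1e-9
    assert p[1] < mnl[1]   # the overlapping route is penalised relative to plain MNL
    ```

    The overlapping route's share falls below its plain multinomial Logit share, which is exactly the correlation effect the subpath nests are meant to capture.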

  4. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)
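    The Plackett copula used in this study has a closed-form CDF, C(u, v; θ) = [S − √(S² − 4θ(θ−1)uv)] / (2(θ−1)) with S = 1 + (θ−1)(u + v), reducing to independence at θ = 1. A quick sketch (the Logit-logistic marginals are omitted, and the θ value is arbitrary):

    ```python
    import math

    def plackett_cdf(u, v, theta):
        """Plackett copula C(u, v; theta); theta = 1 gives the independence copula."""
        if abs(theta - 1.0) < 1e-12:
            return u * v
        s = 1.0 + (theta - 1.0) * (u + v)
        disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
        return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))

    # Marginal consistency: C(u, 1) = u and C(1, v) = v.
    assert math.isclose(plackett_cdf(0.3, 1.0, 5.0), 0.3, rel_tol=1e-9)
    assert math.isclose(plackett_cdf(1.0, 0.7, 5.0), 0.7, rel_tol=1e-9)
    # theta > 1 induces positive dependence: C lies above the independence copula.
    assert plackett_cdf(0.4, 0.6, 5.0) > 0.4 * 0.6
    ```

    In the diameter-height setting, u and v would be the fitted marginal CDF values of diameter and height for a tree, and θ would be estimated jointly with the marginal parameters by maximum likelihood.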

  5. Recreation Value of Water to Wetlands in the San Joaquin Valley: Linked Multinomial Logit and Count Data Trip Frequency Models

    Science.gov (United States)

    Creel, Michael; Loomis, John

    1992-10-01

    The recreational benefits from providing increased quantities of water to wildlife and fisheries habitats are estimated using linked multinomial logit site selection models and count data trip frequency models. The study encompasses waterfowl hunting, fishing and wildlife viewing at 14 recreational resources in the San Joaquin Valley, including the National Wildlife Refuges, the State Wildlife Management Areas, and six river destinations. The economic benefits of increasing water supplies to wildlife refuges were also examined by using the estimated models to predict changing patterns of site selection and overall participation due to increases in water allocations. Estimates of the dollar value per acre foot of water are calculated for increases in water to refuges. The resulting model is a flexible and useful tool for estimating the economic benefits of alternative water allocation policies for wildlife habitat and rivers.

  6. Modeling a Multinomial Logit Model of Intercity Travel Mode Choice Behavior for All Trips in Libya

    OpenAIRE

    Manssour A. Abdulsalam Bin Miskeen; Ahmed Mohamed Alhodairi; Riza Atiq Abdullah Bin O. K. Rahmat

    2013-01-01

    From the planning point of view, it is essential to model mode choice, due to the massive amount of investment incurred in transportation systems. The intercity travellers in Libya have distinct features, as against travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to recognize the behavior of intercity travel using disaggregate models, for projecting the demand for nation-level intercity travel in Libya. Multinom...

  7. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    Science.gov (United States)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

    The Solo-Ngawi toll road project is part of the mega project of the Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), as the Solo-Ngawi toll management company, needs to determine the toll fare that is in accordance with its business plan. The determination of appropriate toll rates will affect progress in regional economic sustainability and decrease traffic congestion. These policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to find out the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users in the Kartasura-Palang Joglo artery road segment. The data obtained were then analysed with logit and probit models. Based on the analysis, it is found that the effect of fare changes on the amount of WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions. The range of tariff changes against values of WTP in the binomial logit model is 20% greater than the range of values in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
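    The binomial logit and binary probit forms compared in this record differ only in the link function: the logistic CDF versus the standard normal CDF. A sketch with a hypothetical linear utility of paying the toll (the coefficients and fare levels are invented, not the study's estimates):

    ```python
    import math

    def logit_prob(x):
        """Logistic CDF used by the binomial logit model."""
        return 1.0 / (1.0 + math.exp(-x))

    def probit_prob(x):
        """Standard normal CDF used by the binary probit model."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Hypothetical utility of paying the toll, linear in the fare level.
    def utility(fare, a=3.0, b=-0.4):
        return a + b * fare

    fares = [2.0, 5.0, 8.0, 11.0]
    logit_p = [logit_prob(utility(f)) for f in fares]
    probit_p = [probit_prob(utility(f)) for f in fares]

    # Under both links, the WTP probability falls monotonically as the fare rises.
    assert all(p1 > p2 for p1, p2 in zip(logit_p, logit_p[1:]))
    assert all(p1 > p2 for p1, p2 in zip(probit_p, probit_p[1:]))
    ```

    Because the logistic distribution has heavier tails than the normal, the two links can imply different sensitivities of WTP to fare changes even when they produce very similar probabilities at the observed fares, which matches the pattern reported in the abstract.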

  8. Getting the right balance? A mixed logit analysis of the relationship between UK training doctors' characteristics and their specialties using the 2013 National Training Survey.

    Science.gov (United States)

    Rodriguez Santana, Idaira; Chalkley, Martin

    2017-08-11

    To analyse how training doctors' demographic and socioeconomic characteristics vary according to the specialty that they are training for. Descriptive statistics and mixed logistic regression analysis of cross-sectional survey data to quantify evidence of systematic relationships between doctors' characteristics and their specialty. Doctors in training in the United Kingdom in 2013. 27 530 doctors in training but not in their foundation year who responded to the National Training Survey 2013. Mixed logit regression estimates and the corresponding odds ratios (calculated separately for all doctors in training and a subsample comprising those educated in the UK), relating gender, age, ethnicity, place of studies, socioeconomic background and parental education to the probability of training for a particular specialty. Being female and being white British increase the chances of being in general practice with respect to any other specialty, while coming from a better-off socioeconomic background and having parents with tertiary education have the opposite effect. Mixed results are found for age and place of studies. For example, the difference between men and women is greatest for surgical specialties, for which a man is 12.121 times more likely than a woman to be training for a surgical specialty (relative to general practice) (p-value < 0.01). There are systematic and substantial differences between specialties in respect of training doctors' gender, ethnicity, age and socioeconomic background. The persistent underrepresentation in some specialties of women, minority ethnic groups and of those coming from disadvantaged backgrounds will impact on the representativeness of the profession into the future. Further research is needed to understand how the processes of selection and the self-selection of applicants into specialties give rise to these observed differences. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article

  9. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    Science.gov (United States)

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, 2 multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in the both teenage and adult driver models, certain different attributes must be distinguished to specifically develop effective safety solutions for the 2 driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.

  10. A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market

    DEFF Research Database (Denmark)

    Hurn, A.S.; Silvennoinen, Annastiina; Teräsvirta, Timo

    We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy…
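    The building block of the smooth transition models discussed above is the logistic transition function G(s; γ, c), which moves the system continuously between two regimes as the transition variable s crosses the location c. A minimal sketch with arbitrary regime parameters (γ, c and the regime values below are illustrative only):

    ```python
    import math

    def logistic_transition(s, gamma, c):
        """Smooth transition weight G(s; gamma, c) = 1 / (1 + exp(-gamma * (s - c)))."""
        return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

    # A two-regime smooth transition: the output is a G-weighted mix of the regimes.
    def st_predict(s, beta_low, beta_high, gamma=5.0, c=0.0):
        g = logistic_transition(s, gamma, c)
        return (1.0 - g) * beta_low + g * beta_high

    # At s = c the weight is exactly 1/2; far from c the model sits in one regime.
    assert logistic_transition(0.0, 5.0, 0.0) == 0.5
    assert st_predict(-10.0, 1.0, 3.0) < st_predict(10.0, 1.0, 3.0)
    ```

    A multitransition model of the kind in the record stacks several such G functions, each with its own γ and c, so the system can pass through more than two regimes.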

  11. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    Science.gov (United States)

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

    Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine the spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socio-economic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers of arable land transition to other uses. The MNL model proved effective for predicting transition probabilities from arable land to other land-use types, and thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.
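As a rough sketch of how an MNL transition-probability model of this kind is estimated, the following fits a softmax regression by gradient ascent on synthetic data. The two covariates and three classes are placeholders standing in for drivers such as slope or distance to roads and for destination uses such as orchard or settlement; nothing here reproduces the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 2, 3                   # parcels, covariates, destination classes
X = rng.normal(size=(n, d))           # synthetic covariates (e.g. slope, distance)
true_W = np.array([[1.5, -1.0],
                   [0.0,  1.2],
                   [-1.5, -0.2]])     # "true" coefficients, made up

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Sample observed transitions from the true model.
p_true = softmax(X @ true_W.T)
y = np.array([rng.choice(k, p=row) for row in p_true])

# Maximum-likelihood fit by plain gradient ascent (softmax regression).
W = np.zeros((k, d))
Y = np.eye(k)[y]                      # one-hot targets
for _ in range(2000):
    P = softmax(X @ W.T)
    grad = (Y - P).T @ X / n          # gradient of the mean log-likelihood
    W += 0.5 * grad

P_hat = softmax(X @ W.T)              # estimated transition probabilities
acc = (P_hat.argmax(axis=1) == y).mean()
print(acc)
```

Each row of `P_hat` is a full probability vector over destination classes, which is what makes an MNL fit usable for scenario analysis.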

  12. Revealing additional preference heterogeneity with an extended random parameter logit model: the case of extra virgin olive oil

    Directory of Open Access Journals (Sweden)

    Ahmed Yangui

    2014-07-01

    Methods that account for preference heterogeneity have received a significant amount of attention in recent literature. Most of them have focused on preference heterogeneity around the mean of the random parameters, which has been specified as a function of socio-demographic characteristics. This paper aims at analyzing consumers' preferences towards extra-virgin olive oil in Catalonia using a methodological framework with two novelties over past studies: (1) it accounts for preference heterogeneity around both the mean and the variance; and (2) it considers both the socio-demographic characteristics of consumers and their attitudinal factors. Estimated coefficients and moments of willingness-to-pay (WTP) distributions are compared with those obtained from alternative Random Parameter Logit (RPL) models. Results suggest that the proposed framework increases the goodness-of-fit and provides more useful insights for policy analysis. The most important attributes affecting consumers' preferences towards extra virgin olive oil are the price and the product's origin. Consumers perceive the organic olive oil attribute negatively, as they think it is not worth paying a premium for a product that is healthy in nature.
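The RPL idea of heterogeneity in a taste parameter can be illustrated with a simulated-probability sketch: the coefficient on a product attribute is drawn from a normal distribution, the logit probability is averaged over the draws, and WTP is the (here random) ratio of the attribute coefficient to the price coefficient. The numbers are illustrative assumptions, not the olive-oil estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

beta_price = -2.0                     # fixed price coefficient (assumed)
mu_origin, sigma_origin = 1.0, 0.5    # origin coefficient ~ Normal(mu, sigma)

# Two alternatives described by (price, origin-label dummy).
X = np.array([[3.0, 1.0],
              [2.5, 0.0]])

R = 10000
draws = rng.normal(mu_origin, sigma_origin, size=R)

# Simulated probability of alternative 0: average the conditional logit
# probability over draws of the random coefficient.
v = beta_price * X[:, 0][None, :] + draws[:, None] * X[:, 1][None, :]
v = v - v.max(axis=1, keepdims=True)
p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
p0 = p[:, 0].mean()

# WTP for the origin label is -beta_origin / beta_price, itself a
# random variable whose moments describe preference heterogeneity.
wtp = -draws / beta_price
print(p0, wtp.mean(), wtp.std())
```

Averaging the logit kernel over coefficient draws is the standard simulated-likelihood trick; with these assumed values the mean WTP is mu_origin / |beta_price| = 0.5.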

  13. A smooth transition logit model of the effects of deregulation in the electricity market

    DEFF Research Database (Denmark)

    Hurn, A. Stan; Silvennoinen, Annastiina; Teräsvirta, Timo

    2016-01-01

    ...of the model are derived along with their asymptotic properties, together with a Lagrange multiplier test of the null hypothesis of linearity in the underlying latent index. The development of the STL model is motivated by the desire to assess the impact of deregulation in the Queensland electricity market and ascertain whether increased competition has resulted in significant changes in the behaviour of the spot price of electricity, specifically with respect to the occurrence of periodic abnormally high prices. The model allows the timing of any change to be endogenously determined and also market participants...

  14. Radiation effects on cancer mortality among A-bomb survivors, 1950-72. Comparison of some statistical models and analysis based on the additive logit model

    Energy Technology Data Exchange (ETDEWEB)

    Otake, M [Hiroshima Univ. (Japan). Faculty of Science

    1976-12-01

    Various statistical models designed to determine the effects of radiation dose on mortality of atomic bomb survivors in Hiroshima and Nagasaki from specific cancers were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. In view of the applicability and fit of the different models, an analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model were demonstrated. Leukemia mortality showed a sharp rise with an increase in dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. On the other hand, during the latest period, from 1965 to 1972, a significant risk was noted also for stomach and breast cancers. Survivors who were 9 years old or less at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints, i.e., the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.

  15. An integrated logit model for contamination event detection in water distribution systems.

    Science.gov (United States)

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current approaches to event detection utilize a variety of methods, including statistical, heuristic, machine learning, and optimization techniques. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators, and unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented discrete choice model, estimated by maximum likelihood, for integrating the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection framework on a training data set using genetic algorithms. The process of fusing the individual indicator probabilities, which receives little attention in many existing event detection models, is confirmed to be a crucial part of the system, and modelling it with a discrete choice model improves performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
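The fusion step described in this record, turning per-indicator alarm probabilities into a single event probability, can be sketched with a binary logit. The weights and bias below are arbitrary placeholders; in the paper they are calibrated jointly with the rest of the detection system (there, by a genetic algorithm).

```python
import math

def fuse(probs, weights, bias):
    """Binary logit fusion: weighted sum of per-indicator alarm
    probabilities pushed through a logistic link."""
    z = bias + sum(w * p for w, p in zip(weights, probs))
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder calibration for three water-quality indicators
# (e.g. chlorine, turbidity, pH) -- assumed values, not the paper's.
weights = [2.5, 1.8, 3.0]
bias = -4.0

quiet = fuse([0.05, 0.10, 0.02], weights, bias)   # all indicators quiet
alarm = fuse([0.90, 0.70, 0.95], weights, bias)   # all indicators firing
print(quiet, alarm)
```

The point of the logit form is that the fused probability responds smoothly and monotonically to each indicator, rather than through ad hoc threshold heuristics.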

  16. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated...
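The IIA property that this record finds invalid is easy to state numerically: in a simple multinomial logit, the odds ratio between two alternatives does not change when a further alternative enters the choice set. A toy check with made-up utilities:

```python
import numpy as np

def probs(v):
    """Multinomial logit probabilities for a vector of utilities."""
    e = np.exp(np.asarray(v, dtype=float))
    return e / e.sum()

p2 = probs([1.0, 0.2])        # choice set: {no car, one car}
p3 = probs([1.0, 0.2, 0.1])   # add a third option, e.g. two cars

ratio_before = p2[0] / p2[1]
ratio_after = p3[0] / p3[1]
print(ratio_before, ratio_after)   # both equal exp(1.0 - 0.2)
```

The ratio is exp(0.8) in both cases, regardless of the third alternative; relaxing exactly this restriction is what correlated random effects buy in the record's dynamic model.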

  17. Para Krizleri Öngörüsünde Logit Model ve Sinyal Yaklaşımının Değeri: Türkiye Tecrübesi

    OpenAIRE

    Kaya, Vedat; Yilmaz, Omer

    2007-01-01

    The logit model and the signal approach are two analysis methods commonly used to forecast and explain currency crises. The logit model is successful in determining the explanatory variables of a crisis and in calculating the probability of a crisis, particularly during periods in which a crisis was experienced. On the other hand, the signal approach aims at detecting a possible currency crisis in advance by following variables that show unusual changes over periods of economic fluctuation, and thus it pr...

  18. A Mixed Logit Model of Homeowner Preferences for Wildfire Hazard Reduction

    Science.gov (United States)

    Thomas P. Holmes; John Loomis; Armando Gonzalez-Caban

    2010-01-01

    People living in the wildland-urban interface (WUI) are at greater risk of suffering major losses of property and life from wildfires. Over the past several decades the prevailing view has been that wildfire risk in rural areas was exogenous to the activities of homeowners. In response to catastrophic fires in the WUI over the past few years, recent approaches to fire...

  19. Facet-based analysis of vacation planning process : a binary mixed logit panel model

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2013-01-01

    This article documents the design and results of a study on vacation planning processes with a particular focus on aggregate relationships between the probability that a certain facet of the vacation decision has been decided at a particular point in time, as a function of lead time to the actual

  20. Facet-based analysis of vacation planning processes : a binary mixed logit panel model

    NARCIS (Netherlands)

    Grigolon, Anna; Kemperman, Astrid; Timmermans, Harry

    2012-01-01

    This article documents the design and results of a study on vacation planning processes with a particular focus on aggregate relationships between the probability that a certain facet of the vacation decision has been decided at a particular point in time, as a function of lead time to the actual

  1. Mixed multinomial logit model for out-of-home leisure activity choice

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2013-01-01

    This paper documents the design and results of a study on the factors influencing the choice of out-of-home leisure activities. Influencing factors seem related to socio-demographic characteristics, personal preferences, characteristics of the built environment and other aspects of the activities

  2. ADVANCED MIXING MODELS

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-01-01

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. 
    If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and...

  3. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-11-13

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. 
    If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and...

  4. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically {approx}13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  5. System equivalent model mixing

    Science.gov (United States)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF-expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  6. What drives active transportation choices among the aging population? Comparing a Bayesian belief network and mixed logit modeling approach

    NARCIS (Netherlands)

    Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2013-01-01

    As people age, they typically face declining levels of physical ability and mobility. However, walking and bicycling can remain relatively easy ways to be physically active for older adults provided that the built environment facilitates these activities. The aim of this study is to investigate

  7. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m...

  8. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with...

  9. Cluster Correlation in Mixed Models

    Science.gov (United States)

    Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.

    2000-10-01

    We evaluate the dependence of the cluster correlation length, r_c, on the mean intercluster separation, D_c, for three models with critical matter density, vanishing vacuum energy (Λ=0), and COBE normalization: a tilted cold dark matter (tCDM) model (n=0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh=0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ_8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ_8/σ_25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h^-1 Mpc) and with high mass and distance resolution, enabling us to build artificial samples of clusters, whose total number and mass range allow us to cover the same D_c interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n=1 critical density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.

  10. Kriging with mixed effects models

    Directory of Open Access Journals (Sweden)

    Alessio Pollice

    2007-10-01

    In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.

  11. Mathematical study of mixing models

    International Nuclear Information System (INIS)

    Lagoutiere, F.; Despres, B.

    1999-01-01

    This report presents the construction and the study of a class of models that describe the behavior of compressible and non-reactive Eulerian fluid mixtures. Mixture models can have two different applications. Either they are used to describe physical mixtures, in the case of a true zone of extensive mixing (but then this modelization is incomplete and must be considered only as a point of departure for the elaboration of truly relevant mixture models). Or they are used to solve the problem of numerical mixing. This problem appears during the discretization of an interface which separates fluids having different equations of state: the zone of numerical mixing is the set of meshes which cover the interface. The attention is focused on numerical mixtures, for which the hypothesis of non-miscibility (physics) brings two equations (the sixth and the eighth of the system). It is important to emphasize that even in the case of a purely numerical mixture, the presence in one and the same place (the same mesh) of several fluids has to be taken into account. This is formalized by allowing mass fractions to take any value between 0 and 1, which is not at odds with the equations that derive from the hypothesis of non-miscibility. One way of looking at things is to consider that there are two scales of observation: the physical scale, at which one observes the separation of fluids, and the numerical scale, given by the fineness of the mesh, at which a mixture appears. In this work, mixtures are considered from the mathematical angle (both in the elaboration phase and during their study). In particular, Chapter 5 shows a result of model degeneration for a non-extended mixing zone (the case of an interface): this justifies the use of these models in the case of numerical mixing. All these models are based on the classical model of non-viscous compressible fluids recalled in Chapter 2. In Chapter 3, the central point of the elaboration of the class of models is...

  12. Linear mixed models in sensometrics

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra

    Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective of the data collection is to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products... The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer... An open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone; it will represent a major step forward concerning this important problem in modern consumer-driven product development. Standard statistical software packages can be used for some of the purposes... ...quality of decision making in Danish as well as international food companies and other companies using the same methods.

  13. Multifractal Modeling of Turbulent Mixing

    Science.gov (United States)

    Samiee, Mehdi; Zayernouri, Mohsen; Meerschaert, Mark M.

    2017-11-01

    Stochastic processes in random media are emerging as interesting tools for modeling anomalous transport phenomena. Applications include intermittent passive scalar transport with background noise in turbulent flows, which are observed in atmospheric boundary layers, turbulent mixing in reactive flows, and long-range dependent flow fields in disordered/fractal environments. In this work, we propose a nonlocal scalar transport equation involving the fractional Laplacian, where the corresponding fractional index is linked to the multifractal structure of the nonlinear passive scalar power spectrum. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).

  14. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and for relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  15. Theoretical Models of Neutrino Mixing Recent Developments

    CERN Document Server

    Altarelli, Guido

    2009-01-01

    The data on neutrino mixing are at present compatible with Tri-Bimaximal (TB) mixing. If one takes this indication seriously then the models that lead to TB mixing in first approximation are particularly interesting and A4 models are prominent in this list. However, the agreement of TB mixing with the data could still be an accident. We discuss a recent model based on S4 where Bimaximal mixing is instead valid at leading order and the large corrections needed to reproduce the data arise from the diagonalization of charged leptons. The value of $\\theta_{13}$ could distinguish between the two alternatives.
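For reference, the Tri-Bimaximal pattern mentioned in this record corresponds, in one common sign convention, to the mixing matrix

```latex
U_{\mathrm{TB}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\[2pt]
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\[2pt]
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
\sin^2\theta_{12} = \tfrac{1}{3}, \quad
\sin^2\theta_{23} = \tfrac{1}{2}, \quad
\theta_{13} = 0,
```

so the measured size of $\theta_{13}$ directly gauges the deviation from exact TB mixing at leading order.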

  16. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  17. Mixed-mode modelling mixing methodologies for organisational intervention

    CERN Document Server

    Clarke, Steve; Lehaney, Brian

    2001-01-01

    The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human-centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...

  18. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  19. Flapping model of scalar mixing in turbulence

    International Nuclear Information System (INIS)

    Kerstein, A.R.

    1991-01-01

    Motivated by the fluctuating plume model of turbulent mixing downstream of a point source, a flapping model is formulated for application to other configurations. For the scalar mixing layer, simple expressions for single-point scalar fluctuation statistics are obtained that agree with measurements. For a spatially homogeneous scalar mixing field, the family of probability density functions previously derived using mapping closure is reproduced. It is inferred that single-point scalar statistics may depend primarily on large-scale flapping motions in many cases of interest, and thus that multipoint statistics may be the principal indicators of finer-scale mixing effects

  20. Logit dynamics for strategic games mixing time and metastability

    OpenAIRE

    Ferraioli, Diodato

    2012-01-01

    2010 - 2011 A complex system is generally defined as a system emerging from the interaction of several different components, each one with their own properties and goals, usually subject to external influences. Nowadays, complex systems are ubiquitous and they are found in many research areas: examples can be found in Economy (e.g., markets), Physics (e.g., ideal gases, spin systems), Biology (e.g., evolution of life) and Computer Science (e.g., Internet and social network...

  1. Model Information Exchange System (MIXS).

    Science.gov (United States)

    2013-08-01

    Many travel demand forecast models operate at state, regional, and local levels. While they share the same physical network in overlapping geographic areas, they use different and uncoordinated modeling networks. This creates difficulties for models ...

  2. Modeling of particle mixing in the atmosphere

    International Nuclear Information System (INIS)

    Zhu, Shupeng

    2015-01-01

    This thesis presents a newly developed size-composition resolved aerosol model (SCRAM), which is able to simulate the dynamics of externally-mixed particles in the atmosphere, and evaluates its performance in three-dimensional air-quality simulations. The main work is split into four parts. First, the research context of external mixing and aerosol modelling is introduced. Second, the development of the SCRAM box model is presented along with validation tests. Each particle composition is defined by the combination of mass-fraction sections of its chemical components or aggregates of components. The three main processes involved in aerosol dynamics (nucleation, coagulation, condensation/evaporation) are included in SCRAM. The model is first validated by comparisons with published reference solutions for coagulation and condensation/evaporation of internally-mixed particles. The particle mixing state is investigated in a 0-D simulation using data representative of air pollution at a traffic site in Paris. The relative influence on the mixing state of the different aerosol processes and of the algorithm used to model condensation/evaporation (dynamic evolution or bulk equilibrium between particles and gas) is studied. Then, SCRAM is integrated into the Polyphemus air quality platform and used to conduct simulations over Greater Paris during the summer period of 2009. This evaluation showed that SCRAM gives satisfactory results for both PM2.5/PM10 concentrations and aerosol optical depths, as assessed from comparisons to observations. Besides, the model allows us to analyze the particle mixing state, as well as the impact of the mixing-state assumption made in the modelling on particle formation, aerosol optical properties, and cloud condensation nuclei activation. Finally, two simulations are conducted during the winter campaign of MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric Pollution and climate effects, and Integrated tools for...

  3. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  4. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  5. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  6. A Multicategory Brand Equity Model and Its Application at Allstate

    OpenAIRE

    Venkatesh Shankar; Pablo Azar; Matthew Fuller

    2008-01-01

    We develop a robust model for estimating, tracking, and managing brand equity for multicategory brands based on customer survey and financial measures. This model has two components: (1) offering value (computed from discounted cash flow analysis) and (2) relative brand importance (computed from brand choice models such as multinomial logit, heteroscedastic extreme value, and mixed logit). We apply this model to estimate the brand equity of Allstate—a leading insurance company—and its leading...
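The relative-brand-importance component described above rests on discrete choice models. As a minimal, self-contained sketch (hypothetical utilities, not the authors' actual specification or estimates), the multinomial logit choice probabilities for three competing brands can be computed as:

```python
import numpy as np

def mnl_choice_probs(utilities):
    """Multinomial logit: P_i = exp(V_i) / sum_j exp(V_j).

    `utilities` holds the deterministic utility V_i of each brand.
    Subtracting the max before exponentiating keeps the computation stable.
    """
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical deterministic utilities for three competing brands.
probs = mnl_choice_probs([1.2, 0.8, 0.3])
print(probs)  # choice shares ≈ [0.48, 0.32, 0.20], summing to 1
```

The heteroscedastic extreme value and mixed logit models named in the abstract generalize this kernel by relaxing the identical-error-variance and fixed-coefficient assumptions, respectively.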

  7. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  8. Modeling Left-Turn Driving Behavior at Signalized Intersections with Mixed Traffic Conditions

    Directory of Open Access Journals (Sweden)

    Hong Li

    2016-01-01

    In many developing countries, mixed traffic is the most common type of urban transportation; traffic of this type faces many major problems in traffic engineering, such as conflicts, inefficiency, and security issues. This paper focuses on the driving behavior of left-turning vehicles affected by different degrees of pedestrian violations. The traffic characteristics of left-turning vehicles and pedestrians in the affected region at a signalized intersection were analyzed, and a cellular-automata-based "following-conflict" driving behavior model that mainly addresses four basic behavior modes was proposed to study the conflict and behavior mechanisms of left-turning vehicles with mathematical methods. The four basic driving behavior modes were reproduced in computer simulations, and a logit model of the behavior mode choice was also developed to analyze the relative share of each behavior mode. Finally, the microscopic characteristics of driving behaviors and the macroscopic parameters of traffic flow in the affected region were all determined. These data provide an important reference for the geometric and capacity design of signalized intersections. The simulation results show that the proposed models are valid and can be used to represent the behavior of left-turning vehicles in the case of conflicts with illegally crossing pedestrians. These results have potential applications in improving traffic safety and traffic capacity at signalized intersections with mixed traffic conditions.

  9. Statistical models of global Langmuir mixing

    Science.gov (United States)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  10. Modeling of Salt Solubilities in Mixed Solvents

    DEFF Research Database (Denmark)

    Chiavone-Filho, O.; Rasmussen, Peter

    2000-01-01

    A method to correlate and predict salt solubilities in mixed solvents using a UNIQUAC+Debye-Hückel model is developed. The UNIQUAC equation is applied in a form with temperature-dependent parameters. The Debye-Hückel model is extended to mixed solvents by properly evaluating the dielectric constants and the liquid densities of the solvent media. To normalize the activity coefficients, the symmetric convention is adopted. Thermochemical properties of the salt are used to estimate the solubility product. It is shown that the proposed procedure can describe with good accuracy a series of salt

  11. Contingent valuation with logit modeling and multivariate analysis: a case study of the willingness to accept compensation of coffee growers linked to PRO-CAFÉ of Viçosa - MG

    Directory of Open Access Journals (Sweden)

    Pedro Silveira Máximo

    2009-12-01

    The objective of this study was to identify which of the two methods, LOGIT or multivariate analysis, is more effective for estimating the coffee growers' Willingness to Accept Compensation (WAC) when a marginal-utility bias may occur. To this end, a questionnaire with 33 questions was prepared, covering the growers' socioeconomic characteristics, the contingent valuation methodology (CVM), and the "bidding game" payment vehicle, which revealed the willingness to accept compensation for exchanging one hectare of coffee for one hectare of forest. As expected, because of the marginal-utility bias, the LOGIT method was unable to produce consistent results. Estimating the WAC by multivariate analysis, in contrast, showed that if the government were willing to increase the provision of forest by 70 ha, it would have to spend 254,200 reais (around 116,000 dollars) per year, considering only the coffee growers linked to the PRO-CAFÉ program.

  12. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  13. Predicting longitudinal trajectories of health probabilities with random-effects multinomial logit regression.

    Science.gov (United States)

    Liu, Xian; Engel, Charles C

    2012-12-20

    Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to predict outcome probabilities correctly. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, which are not substantively meaningful in themselves, into conditional effects on the predicted probabilities. The empirical illustration uses the longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of the predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
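The retransformation issue described in this abstract can be illustrated with a small Monte Carlo sketch (hypothetical coefficients, not the paper's estimates): plugging zero in for the random effects gives different category probabilities than averaging the response-scale probabilities over the random-effect distribution, which is the quantity a population-level trajectory should report.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(v):
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical 3-category multinomial logit with random intercepts
# u ~ N(0, sigma^2) added to each category's linear predictor.
beta = np.array([0.0, 1.0, -0.5])
sigma = 1.5

# Naive: plug in u = 0 (the mean of the random effects).
p_naive = softmax(beta)

# Retransformed: average the probabilities over the random-effect
# distribution (Monte Carlo stand-in for the integral).
u = sigma * rng.standard_normal((100_000, 3))
p_avg = softmax(beta + u).mean(axis=0)

print(p_naive)
print(p_avg)  # noticeably closer to uniform than p_naive
```

The gap between the two vectors is the bias the paper's retransformation method is designed to remove; it grows with the random-effect variance.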

  14. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  15. Scotogenic model for co-bimaximal mixing

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, P.M. [Instituto Superior de Engenharia de Lisboa - ISEL,1959-007 Lisboa (Portugal); Centro de Física Teórica e Computacional - FCUL, Universidade de Lisboa,R. Ernesto de Vasconcelos, 1749-016 Lisboa (Portugal); Grimus, W. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Wien (Austria); Jurčiukonis, D. [Institute of Theoretical Physics and Astronomy, Vilnius University,Saulėtekio ave. 3, LT-10222 Vilnius (Lithuania); Lavoura, L. [CFTP, Instituto Superior Técnico, Universidade de Lisboa,1049-001 Lisboa (Portugal)

    2016-07-04

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ₂₃ = 45° and a CP-violating phase δ = ±π/2, while the mixing angle θ₁₃ remains arbitrary. The symmetries consist of softly broken lepton numbers L_α (α = e, μ, τ), a non-standard CP symmetry, and three ℤ₂ symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  16. Predicting Dropouts of University Freshmen: A Logit Regression Analysis.

    Science.gov (United States)

    Lam, Y. L. Jack

    1984-01-01

    Stepwise discriminant analysis coupled with logit regression analysis of freshmen data from Brandon University (Manitoba) indicated that six tested variables drawn from research on university dropouts were useful in predicting attrition: student status, residence, financial sources, distance from home town, goal fulfillment, and satisfaction with…

  17. Relating masses and mixing angles. A model-independent model

    Energy Technology Data Exchange (ETDEWEB)

    Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)

    2016-07-01

    In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.

  18. A mixed model framework for teratology studies.

    Science.gov (United States)

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  19. Logit Analysis for Profit Maximizing Loan Classification

    OpenAIRE

    Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry

    1988-01-01

    Lending criteria and loan classification methods are developed. Rating-system breaking points are analyzed to present a method for maximizing loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that the debt-to-asset and operating ratios are the most indicative of default.
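As an illustration of the kind of model the abstract describes, the following sketch fits a logistic default model by Newton-Raphson on simulated farm records; the covariates, coefficients, and data are hypothetical, not the study's.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)                    # score vector
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        b = b + np.linalg.solve(H, grad)
    return b

# Simulated farm records: default probability driven by the
# debt-to-asset and operating ratios (hypothetical parameters).
rng = np.random.default_rng(1)
n = 500
debt_asset = rng.uniform(0.1, 1.2, n)
oper_ratio = rng.uniform(0.5, 1.1, n)
X = np.column_stack([np.ones(n), debt_asset, oper_ratio])
true_b = np.array([-6.0, 4.0, 2.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_b))).astype(float)

b_hat = fit_logit(X, y)
print(b_hat)  # roughly recovers [-6, 4, 2], up to sampling noise
```

A revenue-maximizing classification cutoff, as in the abstract, would then be chosen by scanning thresholds on the fitted probabilities against the lender's gain/loss per loan.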

  20. Discrete Symmetries and Models of Flavour Mixing

    International Nuclear Information System (INIS)

    King, Stephen F

    2015-01-01

    In this talk we shall give an overview of the role of discrete symmetries, including both CP and family symmetry, in constructing unified models of quark and lepton (including especially neutrino) masses and mixing. Various different approaches to model building will be described, denoted as direct, semi-direct and indirect, and the pros and cons of each approach discussed. Particular examples based on Δ(6n²) will be discussed and an A to Z of Flavour with Pati-Salam will be presented. (paper)

  1. Models of neutrino mass and mixing

    International Nuclear Information System (INIS)

    Ma, Ernest

    2000-01-01

    There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_αi ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_αi ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R parity nonconservation in supersymmetry

  2. BDA special care case mix model.

    Science.gov (United States)

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple to use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  3. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  4. Simplified models of mixed dark matter

    International Nuclear Information System (INIS)

    Cheung, Clifford; Sanford, David

    2014-01-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin-independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors

  5. A procedure for the selection of mixed logit models

    OpenAIRE

    Ruíz Gallegos, José de Jesús

    2004-01-01

    This paper reviews two models that have been widely applied to discrete choice problems: the logit model and the mixed logit model. In addition, the use of the Cox statistic for model selection is proposed in the context of the mixed logit model.

  6. A detailed aerosol mixing state model for investigating interactions between mixing state, semivolatile partitioning, and coagulation

    OpenAIRE

    J. Lu; F. M. Bowman

    2010-01-01

    A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS) representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict th...

  7. Additive action model for mixed irradiation

    International Nuclear Information System (INIS)

    Lam, G.K.Y.

    1984-01-01

    Recent experimental results indicate that a mixture of high- and low-LET radiation may have some beneficial features (such as a lower OER combined with skin sparing) for clinical use, and interest has been renewed in the study of mixtures of high- and low-LET radiation. Several standard radiation inactivation models can readily accommodate interaction between two mixed radiations; however, this is usually handled by postulating extra free parameters, which can only be determined by fitting to experimental data. A model without any free parameters is proposed to explain the biological effect of mixed radiations, based on the following two assumptions: (a) the combined biological action of two radiations is additive, assuming no repair has taken place during the interval between the two irradiations; and (b) the initial physical damage induced by radiation develops into the final biological effect (e.g. cell killing) over a relatively long period (hours) after irradiation. This model has been shown to provide a satisfactory fit to the experimental results of previous studies

  8. Mixing parametrizations for ocean climate modelling

    Science.gov (United States)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    The algorithm is presented of splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which is used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into the stages of transport-diffusion and generation-dissipation. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling the decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models in its physical formulation, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model leads to a better representation of the ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Using the PP parametrization in the circulation model leads to a realistic simulation of density and circulation, but with violations of the T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model

  9. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  10. Multiple model adaptive control with mixing

    Science.gov (United States)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely-tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full-suite of linear time invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real-time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed

  11. Mixed models in cerebral ischemia study

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Dal Molin Ribeiro

    2016-06-01

    The modeling of data from longitudinal studies stands out in the current scientific scenario, especially in the health and biological sciences, where repeated measurements on the same observed unit induce correlation. Modeling this intra-individual dependency therefore requires the choice of a covariance structure able to accommodate the sample variability. A lack of appropriate methodology for correlated data may increase the occurrence of type I or type II errors and lead to under- or overestimated standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable latency in an experiment investigating memory deficits in animals subjected to cerebral ischemia and treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, since the basic assumptions were met and the results corroborate the biological evidence; that is, the FO treatment was effective in alleviating the cognitive effects caused by cerebral ischemia.
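The autoregressive error covariance adopted in the study above can be made concrete with a small sketch (simulated data, not the study's): build the AR(1) correlation matrix, where the correlation between two time points decays geometrically with their separation, and use it in a generalized least squares fit.

```python
import numpy as np

def ar1_corr(n, rho):
    """AR(1) working correlation: R[i, j] = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Simulated repeated measures for one subject (hypothetical numbers):
# a linear time trend plus AR(1)-correlated errors, fitted by GLS.
rng = np.random.default_rng(0)
n = 8
X = np.column_stack([np.ones(n), np.arange(n)])
R = ar1_corr(n, 0.6)
y = X @ np.array([2.0, 0.5]) + np.linalg.cholesky(R) @ rng.standard_normal(n)

# GLS estimate: b = (X' R^-1 X)^-1 X' R^-1 y.
Ri = np.linalg.inv(R)
b_gls = np.linalg.solve(X.T @ Ri @ X, X.T @ Ri @ y)
print(b_gls)  # GLS intercept and slope estimates
```

Ignoring R (ordinary least squares) would give valid point estimates here but misstated standard errors, which is exactly the type I/II error risk the abstract warns about.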

  12. Numerical processing of radioimmunoassay results using logit-log transformation method

    International Nuclear Information System (INIS)

    Textoris, R.

    1983-01-01

    The mathematical model and algorithm are described for the numerical processing of the results of a radioimmunoassay by the logit-log transformation method and by linear regression with weight factors. The limiting value of the curve at zero concentration is optimized with respect to the residual sum of squares by iteratively repeating the linear regression. Typical examples are presented of the approximation of calibration curves. The method proved suitable for all hitherto used RIA sets and is well suited for small computers with a minimum of 8 Kbyte of internal memory. (author)
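
    The transformation can be sketched in a few lines. The calibration numbers below are invented for illustration (not from the paper): the bound fraction y is mapped to logit(y) = ln(y/(1−y)) and the concentration to ln(x), after which a weighted linear regression recovers the calibration line and lets unknown concentrations be read off by inversion:

    ```python
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def weighted_linfit(xs, ys, ws):
        # weighted least squares for y = a + b*x; returns (a, b)
        sw = sum(ws)
        xbar = sum(w * x for w, x in zip(ws, xs)) / sw
        ybar = sum(w * y for w, y in zip(ws, ys)) / sw
        b = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
        return ybar - b * xbar, b

    # invented calibration points: concentration vs. bound fraction B/B0
    conc = [0.5, 1.0, 2.0, 4.0, 8.0]
    resp = [0.85, 0.72, 0.55, 0.38, 0.22]

    xs = [math.log(c) for c in conc]
    ys = [logit(r) for r in resp]
    # error-propagation weight factors for the logit transform: w ∝ [y(1-y)]^2
    ws = [(r * (1.0 - r)) ** 2 for r in resp]
    a, b = weighted_linfit(xs, ys, ws)

    def estimate_conc(y_obs):
        # invert the fitted line to read a concentration off the calibration curve
        return math.exp((logit(y_obs) - a) / b)
    ```

    Iterating on the zero-concentration limit (the B0 used to form y) and refitting would wrap this regression in the kind of loop the abstract describes.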

  13. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  14. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    Science.gov (United States)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β -EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β -PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β -EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.

  15. Environmental regulations and plant exit: A logit analysis based on established panel data

    Energy Technology Data Exchange (ETDEWEB)

    Bioern, E; Golombek, R; Raknerud, A

    1995-12-01

    This publication uses a model to study the relationship between environmental regulations and plant exit. It has the main characteristics of a multinomial qualitative response model of the logit type, but also has elements of a Markov chain model. The model uses Norwegian panel data for establishments in three manufacturing sectors with high shares of units which have been under strict environmental regulations. In two of the sectors, the exit probability of non-regulated establishments is about three times higher than for regulated ones. It is also found that the probability of changing regulation status from non-regulated to regulated depends significantly on economic factors. In particular, establishments with weak profitability are the most likely to become subject to environmental regulation. 12 refs., 2 figs., 6 tabs.

  16. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    Science.gov (United States)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.

  17. Lagrangian mixed layer modeling of the western equatorial Pacific

    Science.gov (United States)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  18. Modeling tides and vertical tidal mixing: A reality check

    International Nuclear Information System (INIS)

    Robertson, Robin

    2010-01-01

    Recently, there has been great interest in the tidal contribution to vertical mixing in the ocean. In models, vertical mixing is estimated using parameterizations of sub-grid scale processes. Estimates of the vertical mixing varied widely depending on which vertical mixing parameterization was used. This study investigated the performance of ten different vertical mixing parameterizations in a terrain-following ocean model when simulating internal tides. The vertical mixing parameterization was found to have minor effects on the velocity fields at the tidal frequencies, but large effects on the estimates of vertical diffusivity of temperature. Although there was no definitive best performer among the vertical mixing parameterizations, several were eliminated based on comparison of the vertical diffusivity estimates with observations. The best performers were the new generic coefficients for the generic length scale schemes and the Mellor-Yamada 2.5-level closure scheme.

  19. Modeling pedestrian gap crossing index under mixed traffic condition.

    Science.gov (United States)

    Naser, Mohamed M; Zulkiple, Adnan; Al Bargi, Walid A; Khalifa, Nasradeen A; Daniel, Basil David

    2017-12-01

    There are a variety of challenges faced by pedestrians when they walk along and attempt to cross a road, as most recorded accidents occur during this time. Pedestrians of all types, of both sexes and across age groups, are constantly subjected to risk and are characterized as the most exposed road users. The demand for better traffic management strategies to reduce risks at intersections, together with growing traffic volumes and longer cycle times, has further increased concerns over the past decade. This paper aims to develop a sustainable pedestrian gap crossing index model based on traffic flow density. It focuses on the gaps accepted by pedestrians and their decision to cross the street, where the logarithm of accepted gaps (Log-Gap) was used to optimize the result of a model for gap crossing behavior. Through a review of extant literature, 15 influential variables were extracted for further empirical analysis. Subsequently, data from observation at an uncontrolled mid-block on Jalan Ampang in Kuala Lumpur, Malaysia was gathered, and Multiple Linear Regression (MLR) and Binary Logit Model (BLM) techniques were employed to analyze the results. From the results, different pedestrian behavioral characteristics were considered for a minimum gap size model, of which only four variables could explain pedestrian road crossing behavior while the remaining variables had an insignificant effect. Among the different variables, age, rolling gap, vehicle type, and crossing were the most influential. The study concludes that pedestrians' decision to cross the street depends on pedestrian age, rolling gap, vehicle type, and the size of the traffic gap before crossing. The inferences from these models will be useful for increasing pedestrian safety and for the performance evaluation of uncontrolled midblock road crossings in developing countries. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
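
    A binary logit of the kind used in the BLM analysis can be fitted with a short Newton-Raphson loop. The gap-acceptance observations below are made up for illustration and are not the study's Kuala Lumpur data:

    ```python
    import math

    def fit_binary_logit(x, y, iters=25):
        """Newton-Raphson fit of P(accept) = 1 / (1 + exp(-(b0 + b1*x)))."""
        b0, b1 = 0.0, 0.0
        for _ in range(iters):
            # gradient and Hessian of the log-likelihood
            g0 = g1 = h00 = h01 = h11 = 0.0
            for xi, yi in zip(x, y):
                p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
                w = p * (1.0 - p)
                g0 += yi - p
                g1 += (yi - p) * xi
                h00 += w
                h01 += w * xi
                h11 += w * xi * xi
            # Newton step: solve the 2x2 system H * delta = g
            det = h00 * h11 - h01 * h01
            b0 += (h11 * g0 - h01 * g1) / det
            b1 += (h00 * g1 - h01 * g0) / det
        return b0, b1

    # hypothetical observations: gap size in seconds, 1 = pedestrian crossed
    gaps     = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
    accepted = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]

    b0, b1 = fit_binary_logit(gaps, accepted)
    # predicted acceptance probability for a 4-second gap
    p_4s = 1.0 / (1.0 + math.exp(-(b0 + b1 * 4.0)))
    ```

    With more predictors (age, rolling gap, vehicle type) the same update applies, with vector-matrix algebra in place of the 2×2 solve.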

  20. A detailed aerosol mixing state model for investigating interactions between mixing state, semivolatile partitioning, and coagulation

    Directory of Open Access Journals (Sweden)

    J. Lu

    2010-04-01

    A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict that mixing state affects the amount and types of semivolatile organics that partition to available aerosol phases, causing external mixtures to produce a more size-varying composition than internal mixtures. Both coagulation and condensation contribute to the mixing of emitted particles, producing a collection of multiple compositionally distinct aerosol populations that exists somewhere between the extremes of a strictly external or internal mixture. The selection of mixing criteria has a significant impact on the size and type of individual populations that compose the modeled aerosol mixture. Computational demands for external mixture modeling are significant and can be controlled by limiting the number of aerosol populations used in the model.

  1. Analysis of Factors Affecting Bank Health Levels Using Logit Regression (Analisis Faktor yang Mempengaruhi Tingkat Kesehatan Bank dengan Regresi Logit)

    Directory of Open Access Journals (Sweden)

    Titik Aryati

    2007-09-01

    The article aims to identify the factors that affect the probability of a bank being healthy, using CAMEL ratio analysis. The statistical method used to test the research hypotheses was logit regression. The dependent variable was the bank's health level and the independent variables were CAMEL financial ratios consisting of CAR, NPL, ROA, ROE, LDR, and NIM. The data were extracted from banks' published financial reports, accumulated by the Infobank research bureau, with valuation based on Bank Indonesia policy. The sample consisted of 60 healthy banks and 14 unhealthy banks in 2005 and 2006. The empirical results indicate that the Non Performing Loan ratio is the significant variable affecting bank health level.

  2. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all...

  3. Modeling Dynamic Effects of the Marketing Mix on Market Shares

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2003-01-01

    textabstractTo comprehend the competitive structure of a market, it is important to understand the short-run and long-run effects of the marketing mix on market shares. A useful model to link market shares with marketing-mix variables, like price and promotion, is the market share attraction model.
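
    In its common multiplicative form (a textbook sketch, not necessarily the authors' exact specification), the market share attraction model maps marketing-mix variables into a brand attraction and normalizes across brands:

    ```latex
    % attraction of brand i at time t, built from K marketing-mix variables x_{kit}
    A_{it} = \exp(\mu_i + \varepsilon_{it}) \prod_{k=1}^{K} x_{kit}^{\beta_{ki}},
    \qquad
    M_{it} = \frac{A_{it}}{\sum_{j=1}^{I} A_{jt}}
    ```

    Because shares are attractions normalized over the I brands, they lie in (0, 1) and sum to one by construction, which is what makes the specification well suited to modeling competitive dynamics.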

  4. Molecular Thermodynamic Modeling of Mixed Solvent Solubility

    DEFF Research Database (Denmark)

    Ellegaard, Martin Dela; Abildskov, Jens; O’Connell, John P.

    2010-01-01

    A method based on statistical mechanical fluctuation solution theory for composition derivatives of activity coefficients is employed for estimating dilute solubilities of 11 solid pharmaceutical solutes in nearly 70 mixed aqueous and nonaqueous solvent systems. The solvent mixtures range from...... nearly ideal to strongly nonideal. The database covers a temperature range from 293 to 323 K. Comparisons with available data and other existing solubility methods show that the method successfully describes a variety of observed mixed solvent solubility behaviors using solute−solvent parameters from...

  5. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  6. Analysis and modeling of subgrid scalar mixing using numerical data

    Science.gov (United States)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  7. Mixing of the Glauber dynamics for the ferromagnetic Potts model

    OpenAIRE

    Bordewich, Magnus; Greenhill, Catherine; Patel, Viresh

    2013-01-01

    We present several results on the mixing time of the Glauber dynamics for sampling from the Gibbs distribution in the ferromagnetic Potts model. At a fixed temperature and interaction strength, we study the interplay between the maximum degree ($\Delta$) of the underlying graph and the number of colours or spins ($q$) in determining whether the dynamics mixes rapidly or not. We find a lower bound $L$ on the number of colours such that Glauber dynamics is rapidly mixing if at least $L$ colours...

  8. Applied model for the growth of the daytime mixed layer

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1991-01-01

    numerically. When the mixed layer is shallow or the atmosphere nearly neutrally stratified, the growth is controlled mainly by mechanical turbulence. When the layer is deep, its growth is controlled mainly by convective turbulence. The model is applied on a data set of the evolution of the height of the mixed...... layer in the morning hours, when both mechanical and convective turbulence contribute to the growth process. Realistic mixed-layer developments are obtained....

  9. Computer modeling of jet mixing in INEL waste tanks

    International Nuclear Information System (INIS)

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine overall feasibility of using jet mixing pumps and make design recommendations.

  10. Three essays on resource economics. Demand systems for energy forecasting: Practical considerations for estimating a generalized logit model, To borrow or not to borrow: A variation on the MacDougal-Kemp theme, and, Valuing reduced risk for households with children or the retired

    Science.gov (United States)

    Weng, Weifeng

    This thesis presents papers on three areas of study within resource and environmental economics. "Demand Systems For Energy Forecasting" provides some practical considerations for estimating a Generalized Logit model. The main reason for using this demand system for energy and other factors is that the derived price elasticities are robust when expenditure shares are small. The primary objective of the paper is to determine the best form of the cross-price weights, and a simple inverse function of the expenditure share is selected. A second objective is to demonstrate that the estimated elasticities are sensitive to the units specified for the prices, and to show how price scales can be estimated as part of the model. "To Borrow or Not to Borrow: A Variation on the MacDougal-Kemp Theme" studies the impact of international capital movements on the conditional convergence of economies differing from each other only in initial wealth. We found that in assets, income, consumption and utility, convergence obtains with, and only with, the absence of international capital movement. When a rich country invests in a poor country, the balance of debt increases forever. Asset ownership is increased in all periods for the lender, and asset ownership of the borrower is decreased. Also, capital investment decreases the lender's utility for early periods, but increases it forever after a cross-over point. In contrast, the borrower's utility increases for early periods, but then decreases forever. "Valuing Reduced Risk for Households with Children or the Retired" presents a theoretical model of how families value risk and then examines family automobile purchases to impute the average Value of a Statistical Life (VSL) for each type of family. Data for fatal accidents are used to estimate survival rates for individuals in different types of accidents, and the probabilities of having accidents for different types of vehicle. These models are used to determine standardized risks for

  11. Analysis of the liquidity risk in credit unions: a logit multinomial approach

    Directory of Open Access Journals (Sweden)

    Rosiane Maria Lima Gonçalves

    2008-10-01

    Liquidity risk in financial institutions is associated with the balance between working capital and financial demands. Other factors that affect credit union liquidity are an unanticipated increase in withdrawals without an offsetting amount of new deposits, and the inability to promote geographical diversification of products. The objective of this study is to analyze the liquidity risk of Minas Gerais state credit unions and its determining factors. Financial ratios and the multinomial logit model are used. The cooperatives were classified in five categories of liquidity risk: very low, low, medium, high, and very high. The empirical results indicate that high levels of liquidity are related to smaller values of the outside capital use, immobilization of turnover capital, and provision ratios, and to larger values of the total deposits/credit operations and asset growth ratios.
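
    Once a multinomial logit of this kind has been estimated, classifying a cooperative reduces to a softmax over the class scores. The coefficients below are hypothetical placeholders for illustration, not the paper's estimates:

    ```python
    import math

    def mnl_probs(features, coefs):
        """Class probabilities for a fitted multinomial logit.
        `coefs` maps each liquidity-risk class to a coefficient vector;
        the reference class has an all-zero vector."""
        scores = {k: math.exp(sum(b * x for b, x in zip(beta, features)))
                  for k, beta in coefs.items()}
        total = sum(scores.values())
        return {k: s / total for k, s in scores.items()}

    # hypothetical coefficients on [intercept, deposits/credit ratio, asset growth]
    coefs = {
        "very_low":  [0.0, 0.0, 0.0],   # reference class
        "low":       [0.4, -0.8, 0.3],
        "medium":    [0.2, -1.5, 0.1],
        "high":      [-0.3, -2.2, -0.4],
        "very_high": [-0.9, -3.0, -0.8],
    }
    # one cooperative's standardized ratios, intercept term first
    probs = mnl_probs([1.0, 0.6, 0.2], coefs)
    ```

    Since the scores are normalized, the five class probabilities always sum to one; with these placeholder coefficients, a high deposits/credit ratio pushes mass toward the low-risk classes.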

  12. Development of a Medicaid Behavioral Health Case-Mix Model

    Science.gov (United States)

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  13. Effects of the ρ-ω mixing interaction in relativistic models

    International Nuclear Information System (INIS)

    Menezes, D.P.; Providencia, C.

    2003-01-01

    The effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei are investigated with the non-linear Walecka model in a Thomas-Fermi approximation. For infinite nuclear matter, the influence of the mixing term on the binding energy calculated with the NL3 and TM1 parametrizations can be neglected. Its influence on the symmetry energy is only felt for TM1 with an unrealistically large value of the mixing term strength. For finite nuclei, the contribution of the isospin mixing term is very large compared with the value expected to solve the Nolen-Schiffer anomaly

  14. Development of two mix model postprocessors for the investigation of shell mix in indirect drive implosion cores

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.

    2007-01-01

    The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data

  15. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    Science.gov (United States)

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces DJMIXED, an add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
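
    The design the article addresses can be sketched as a model with two crossed random intercepts, one per participant and one per item (our notation, not the article's):

    ```latex
    % response of participant s to item i
    y_{si} = \beta_0 + \beta_1 x_{si} + S_s + I_i + e_{si},
    \qquad
    S_s \sim \mathcal{N}(0,\sigma_S^2),\quad
    I_i \sim \mathcal{N}(0,\sigma_I^2),\quad
    e_{si} \sim \mathcal{N}(0,\sigma_e^2)
    ```

    Estimating both variance components simultaneously in one mixed model is exactly what the separate by-participants F(1) and by-items F(2) analyses only approximate.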

  16. Perturbative estimates of lepton mixing angles in unified models

    International Nuclear Information System (INIS)

    Antusch, Stefan; King, Stephen F.; Malinsky, Michal

    2009-01-01

    Many unified models predict two large neutrino mixing angles, with the charged lepton mixing angles being small and quark-like, and the neutrino masses being hierarchical. Assuming this, we present simple approximate analytic formulae giving the lepton mixing angles in terms of the underlying high energy neutrino mixing angles together with small perturbations due to both charged lepton corrections and renormalisation group (RG) effects, including also the effects of third family canonical normalization (CN). We apply the perturbative formulae to the ubiquitous case of tri-bimaximal neutrino mixing at the unification scale, in order to predict the theoretical corrections to mixing angle predictions and sum rule relations, and give a general discussion of all limiting cases. We also discuss the implications for the sum rule relations of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.

  17. Mixing Paradigms for More Comprehensible Models

    DEFF Research Database (Denmark)

    Westergaard, Michael; Slaats, Tijs

    2013-01-01

    Petri nets efficiently model both data- and control-flow. Control-flow is either modeled explicitly as flow of a specific kind of data, or implicit based on the data-flow. Explicit modeling of control-flow is useful for well-known and highly structured processes, but may make modeling of abstract...

  18. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug is a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  19. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

    SAM

    2014-05-07

    May 7, 2014 ... cused mainly on the yield of cottonseed and fiber, with the CA 324 and ..... Gaps and opportunities for agricultural sector development in ... Adaptability and stability of maize varieties using mixed models. Crop. Breeding and ...

  20. Surface wind mixing in the Regional Ocean Modeling System (ROMS)

    Science.gov (United States)

    Robertson, Robin; Hartlipp, Paul

    2017-12-01

    Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating the surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for comparisons. The vertical mixing parameterizations investigated were the Mellor-Yamada 2.5-level turbulent closure (MY), Large-McWilliams-Doney KPP (LMD), Nakanishi-Niino (NN), and the generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performers were NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations on the vertical mixing parameterizations.

  1. Assessing the value of museums with a combined discrete choice/ count data model

    NARCIS (Netherlands)

    Rouwendal, J.; Boter, J.

    2009-01-01

    This article assesses the value of Dutch museums using information about destination choice as well as about the number of trips undertaken by an actor. Destination choice is analysed by means of a mixed logit model, and a count data model is used to explain trip generation. We use a

  2. Sensitivity of the urban airshed model to mixing height profiles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)

    1994-12-31

The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to the differences in the shape of the mixing height profiles and their rate of growth during the morning hours when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NO{sub x}-focussed controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focussed controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  3. Modelling mixed forest growth : a review of models for forest management

    NARCIS (Netherlands)

    Porte, A.; Bartelink, H.H.

    2002-01-01

Most forests today are multi-specific, heterogeneous forests ('mixed forests'). However, forest modelling has long focused on mono-specific stands; only recently have models been developed for mixed forests. Previous reviews of mixed forest modelling were restricted to certain

  4. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  5. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  6. A new approach to model mixed hydrates

    Czech Academy of Sciences Publication Activity Database

    Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.

    2018-01-01

    Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords : gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983

  7. Isothermal coarse mixing: experimental and CFD modelling

    International Nuclear Information System (INIS)

    Gilbertson, M.A.; Kenning, D.B.R.; Hall, R.W.

    1992-01-01

    A plane, two-dimensional flow apparatus has been built which uses a jet of solid 6mm diameter balls to model a jet of molten drops falling into a tank of water to study premixing prior to a vapour explosion. Preliminary experiments with unheated stainless steel balls are here compared with computational fluid dynamics (CFD) calculations by the code CHYMES. (6 figures) (Author)

  8. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  9. Business models in commercial media markets: Bargaining, advertising, and mixing

    OpenAIRE

    Thöne, Miriam; Rasch, Alexander; Wenzel, Tobias

    2016-01-01

We consider a product and a media market and show how a change in the business model employed by the media platforms affects consumers, producers (or advertisers), and price negotiations for advertisements. On both markets, two firms differentiated à la Hotelling compete for consumers. On the media market, consumers can mix between the two outlets, whereas on the product market, consumers must choose a single supplier. With pay-TV, as opposed to free-to-air, mixing by consumers disappears, p...

  10. Stochastic model of Rayleigh-Taylor turbulent mixing

    International Nuclear Information System (INIS)

    Abarzhi, S.I.; Cadjan, M.; Fedotov, S.

    2007-01-01

We propose a stochastic model to describe the random character of the dissipation process in Rayleigh-Taylor turbulent mixing. The parameter alpha, used conventionally to characterize the mixing growth-rate, is not a universal constant and is very sensitive to the statistical properties of the dissipation. The ratio between the rates of momentum loss and momentum gain is a statistical invariant and a robust diagnostic parameter, whether or not turbulent diffusion is accounted for.

  11. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
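The varying-coefficient construction in this abstract rests on a B-spline basis. As a hedged illustration (not the authors' code; the knot placement, degree, and the least-squares example are our choices), the basis can be built directly from the Cox-de Boor recursion:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """All B-spline basis functions at points x, via the Cox-de Boor recursion."""
    t = np.asarray(knots, float)
    x = np.asarray(x, float)
    # degree-0 basis: indicator of each knot interval [t_i, t_{i+1})
    B = ((t[:-1][None, :] <= x[:, None]) & (x[:, None] < t[1:][None, :])).astype(float)
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), B.shape[1] - 1))
        for i in range(Bn.shape[1]):
            d1, d2 = t[i + d] - t[i], t[i + d + 1] - t[i + 1]
            if d1 > 0:                       # guard repeated (clamped) knots
                Bn[:, i] += (x - t[i]) / d1 * B[:, i]
            if d2 > 0:
                Bn[:, i] += (t[i + d + 1] - x) / d2 * B[:, i + 1]
        B = Bn
    return B

# clamped cubic basis on [0, 1]
deg = 3
knots = np.r_[[0.0] * deg, np.linspace(0.0, 1.0, 7), [1.0] * deg]
xs = np.linspace(0.0, 1.0, 200, endpoint=False)
B = bspline_basis(xs, knots, deg)

# a varying coefficient beta(x) is then modeled as B @ c; as an illustration,
# recover a smooth function by ordinary least squares
c, *_ = np.linalg.lstsq(B, np.sin(2 * np.pi * xs), rcond=None)
fitted = B @ c
```

With clamped end knots the columns form a partition of unity on the interior, which is what keeps the semi-parametric coefficient function well behaved at the boundaries.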

  12. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
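The book demonstrates LMM fitting in SAS, SPSS, Stata, R/S-plus, and HLM. The same random-intercept model can be sketched in a few lines of Python; the moment-based variance estimates and quasi-demeaning GLS below are a textbook-style simplification for balanced groups, not any of the procedures covered in the book:

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate a balanced random-intercept model: y_ij = b0 + b1*x_ij + u_i + e_ij
m, n = 200, 10                      # groups, observations per group
b0, b1, s_u, s_e = 2.0, 1.5, 1.0, 0.5
x = rng.normal(size=(m, n))
u = rng.normal(scale=s_u, size=(m, 1))
y = b0 + b1 * x + u + rng.normal(scale=s_e, size=(m, n))

# step 1: pooled OLS to obtain residuals
X = np.column_stack([np.ones(m * n), x.ravel()])
beta_ols, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
r = (y.ravel() - X @ beta_ols).reshape(m, n)

# step 2: moment estimates of the two variance components
s2_e = np.sum((r - r.mean(axis=1, keepdims=True)) ** 2) / (m * (n - 1))
s2_u = max(r.mean(axis=1).var(ddof=1) - s2_e / n, 0.0)

# step 3: feasible GLS = OLS on quasi-demeaned data
theta = 1.0 - np.sqrt(s2_e / (s2_e + n * s2_u))
y_t = y - theta * y.mean(axis=1, keepdims=True)
x_t = x - theta * x.mean(axis=1, keepdims=True)
Xt = np.column_stack([np.full(m * n, 1.0 - theta), x_t.ravel()])
beta_gls, *_ = np.linalg.lstsq(Xt, y_t.ravel(), rcond=None)
```

The quasi-demeaning weight `theta` interpolates between pooled OLS (`theta = 0`) and the within estimator (`theta = 1`), which is the same balancing of within- and between-group information the packages in the book perform by (restricted) maximum likelihood.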

  13. The Market for Ph.D. Holders in Greece: Probit and Multinomial Logit Analysis of their Employment Status

    OpenAIRE

    Joan Daouli; Eirini Konstantina Nikolatou

    2015-01-01

    The objective of this paper is to investigate the factors influencing the probability that a Ph.D. holder in Greece will work in the academic sector, as well as the probability of his or her choosing employment in various sectors of industry and occupational categories. Probit/multinomial logit models are employed using the 2001 Census data. The empirical results indicate that being young, married, having a Ph.D. in Natural Sciences and/or in Engineering, granted by a Greek university, increa...

  14. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows, material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
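The constraints this abstract emphasizes (unit-sum of mass fractions, bounded sample space, mean preservation) can be made concrete with a much simpler particle mixing model than the authors' formulation: the classical IEM (interaction by exchange with the mean) model used in transported-PDF methods, sketched here with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# notional particle ensemble carrying mass fractions of 3 species;
# each particle starts as a pure species (a fully unmixed state)
N, omega, c_phi = 10_000, 1.0, 2.0
species = rng.choice(3, size=N, p=[0.5, 0.3, 0.2])
Y = np.eye(3)[species]              # one-hot mass fractions, rows sum to 1

mean0, var0 = Y.mean(axis=0), Y.var(axis=0)

# IEM: dY/dt = -(c_phi/2) * omega * (Y - <Y>), i.e. relax toward the mean;
# each update is a convex combination, so particles stay in the simplex
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    Y += -0.5 * c_phi * omega * (Y - Y.mean(axis=0)) * dt
```

Because every step is linear and mean-preserving, the ensemble mean is conserved exactly, the unit-sum constraint survives each update, and the scalar variance decays as exp(-c_phi*omega*t), which is the textbook IEM behavior.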

  15. A new model for the redundancy allocation problem with component mixing and mixed redundancy strategy

    International Nuclear Information System (INIS)

    Gholinezhad, Hadi; Zeinal Hamadani, Ali

    2017-01-01

    This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.

  16. The salinity effect in a mixed layer ocean model

    Science.gov (United States)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  17. A mathematical model for turbulent incompressible flows through mixing grids

    International Nuclear Information System (INIS)

    Allaire, G.

    1989-01-01

A mathematical model is proposed for the computation of turbulent incompressible flows through mixing grids. This model is obtained as follows: in a three-dimensional domain we represent a mixing grid by small identical wings of size ε² periodically distributed at the nodes of a plane regular mesh of size ε, and we consider incompressible Navier-Stokes equations with a no-slip condition on the wings. Using an appropriate homogenization process we pass to the limit when ε tends to zero and we obtain a Brinkman equation, i.e. a Navier-Stokes equation plus a zero-order term for the velocity, in a homogeneous domain without any wings. The interest of this model is that the spatial discretization is simpler in a homogeneous domain, and, moreover, the new term, which expresses the grid's mixing effect, can be evaluated with a local computation around a single wing.
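The limit equation described ("a Navier-Stokes equation plus a zero-order term for the velocity") is a Brinkman system, which can be written as follows (our transcription in standard notation, not the paper's exact statement):

```latex
\begin{aligned}
(u \cdot \nabla)\, u \;-\; \nu \,\Delta u \;+\; \nabla p \;+\; M u &= f, \\
\nabla \cdot u &= 0,
\end{aligned}
```

where the zero-order term \(M u\) is the drag produced by the homogenized grid, with \(M\) computable from a local cell problem around a single wing.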

  18. Mixed waste treatment model: Basis and analysis

    International Nuclear Information System (INIS)

    Palmer, B.A.

    1995-09-01

    The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated as well as the waste for each matrix treated by the unit

  19. Computer modeling of ORNL storage tank sludge mobilization and mixing

    International Nuclear Information System (INIS)

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks

  20. A Mixing Based Model for DME Combustion in Diesel Engines

    DEFF Research Database (Denmark)

    Bek, Bjarne H.; Sorenson, Spencer C.

    1998-01-01

A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made...

  1. A mixing based model for DME combustion in diesel engines

    DEFF Research Database (Denmark)

    Bek, Bjarne Hjort; Sorenson, Spencer C

    2001-01-01

A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made...

  2. Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits

    DEFF Research Database (Denmark)

    Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo

    2014-01-01

A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows models based on continuous time to be combined with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function is assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.

  3. Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits

    DEFF Research Database (Denmark)

    Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo

    2013-01-01

A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows models based on continuous time to be combined with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function is assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.

  4. Advective mixing in a nondivergent barotropic hurricane model

    Directory of Open Access Journals (Sweden)

    B. Rutherford

    2010-01-01

This paper studies Lagrangian mixing in a two-dimensional barotropic model for hurricane-like vortices. Since such flows show high shearing in the radial direction, particle separation across shear-lines is diagnosed through a Lagrangian field, referred to as R-field, that measures trajectory separation orthogonal to the Lagrangian velocity. The shear-lines are identified with the level-contours of another Lagrangian field, referred to as S-field, that measures the average shear-strength along a trajectory. Other fields used for model diagnostics are the Lagrangian field of finite-time Lyapunov exponents (FTLE-field), the Eulerian Q-field, and the angular velocity field. Because of the high shearing, the FTLE-field is not a suitable indicator for advective mixing, and in particular does not exhibit ridges marking the location of finite-time stable and unstable manifolds. The FTLE-field is similar in structure to the radial derivative of the angular velocity. In contrast, persisting ridges and valleys can be clearly recognized in the R-field, and their propagation speed indicates that transport across shear-lines is caused by Rossby waves. A radial mixing rate derived from the R-field gives a time-dependent measure of flux across the shear-lines. On the other hand, a measured mixing rate across the shear-lines, which counts trajectory crossings, confirms the results from the R-field mixing rate, and shows high mixing in the eyewall region after the formation of a polygonal eyewall, which continues until the vortex breaks down. The location of the R-field ridges elucidates the role of radial mixing for the interaction and breakdown of the mesovortices shown by the model.
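The FTLE-field discussed above is computed from the flow-map deformation gradient. A toy sketch in a steady linear shear u = (a·y, 0), where the flow map is known exactly (all values invented), shows the mechanics, and also why pure shear yields only weak FTLE ridges: particle separation there is algebraic rather than exponential in time:

```python
import numpy as np

a, T = 1.0, 2.0   # shear rate and integration time

def flow_map(x0, y0):
    # exact flow map of the steady shear u = (a*y, 0)
    return x0 + a * y0 * T, y0

# deformation gradient by central differences over four neighboring particles
h, x0, y0 = 1e-4, 0.3, -0.2
xe, ye = flow_map(x0 + h, y0)
xw, yw = flow_map(x0 - h, y0)
xn, yn = flow_map(x0, y0 + h)
xs, ys = flow_map(x0, y0 - h)
F = np.array([[(xe - xw) / (2 * h), (xn - xs) / (2 * h)],
              [(ye - yw) / (2 * h), (yn - ys) / (2 * h)]])
# FTLE = log of the largest stretching factor, normalized by time
ftle = np.log(np.linalg.svd(F, compute_uv=False)[0]) / T

# analytic check: F = [[1, a*T], [0, 1]] for this flow
F_exact = np.array([[1.0, a * T], [0.0, 1.0]])
ftle_exact = np.log(np.linalg.svd(F_exact, compute_uv=False)[0]) / T
```

Because the stretching grows only linearly with a·T here, the resulting FTLE decays like log(T)/T as the integration window lengthens, which is the shear-dominance issue that motivates the paper's R- and S-fields.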

  5. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
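A MIDAS regression with the exponential Almon lag polynomial (a standard weighting choice in this literature) can be sketched and estimated by nonlinear least squares; the data below are simulated with invented parameter values:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def exp_almon(theta1, theta2, K):
    """Exponential Almon lag weights, normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# simulate: K = 12 high-frequency lags drive each low-frequency observation
K, n = 12, 300
w = exp_almon(0.1, -0.05, K)
Xh = rng.normal(size=(n, K))            # high-frequency lags per observation
beta0, beta1 = 0.5, 2.0
y = beta0 + beta1 * (Xh @ w) + 0.1 * rng.normal(size=n)

# estimate (beta0, beta1, theta1, theta2) by nonlinear least squares
def resid(p):
    b0, b1, t1, t2 = p
    return y - b0 - b1 * (Xh @ exp_almon(t1, t2, K))

fit = least_squares(resid, x0=[0.0, 1.0, 0.0, 0.0])
```

Normalizing the weights to sum to one is what identifies the slope separately from the lag shape; the whole lag polynomial costs only two parameters regardless of K, which is the point of the MIDAS parameterization over an unrestricted distributed lag.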

  6. Mixed

    Directory of Open Access Journals (Sweden)

    Pau Baya

    2011-05-01

Remenat (Catalan; "revoltillo" in Spanish; "scrambled" or "mixed" in English) is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. Scrambled refers to the action of mixing the beaten egg with other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform and dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, adapting its shape to the commands of one's hands or the tool which is being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow heavy movement. Each piece lays itself on the next piece consecutively, like a tongue of incandescent lava slowly advancing but with unstoppable inertia.

  7. Age and pedestrian injury severity in motor-vehicle crashes: a heteroskedastic logit analysis.

    Science.gov (United States)

    Kim, Joon-Ki; Ulfarsson, Gudmundur F; Shankar, Venkataraman N; Kim, Sungyop

    2008-09-01

    This research explores the injury severity of pedestrians in motor-vehicle crashes. It is hypothesized that the variance of unobserved pedestrian characteristics increases with age. In response, a heteroskedastic generalized extreme value model is used. The analysis links explanatory factors with four injury outcomes: fatal, incapacitating, non-incapacitating, and possible or no injury. Police-reported crash data between 1997 and 2000 from North Carolina, USA, are used. The results show that pedestrian age induces heteroskedasticity which affects the probability of fatal injury. The effect grows more pronounced with increasing age past 65. The heteroskedastic model provides a better fit than the multinomial logit model. Notable factors increasing the probability of fatal pedestrian injury: increasing pedestrian age, male driver, intoxicated driver (2.7 times greater probability of fatality), traffic sign, commercial area, darkness with or without streetlights (2-4 times greater probability of fatality), sport-utility vehicle, truck, freeway, two-way divided roadway, speeding-involved, off roadway, motorist turning or backing, both driver and pedestrian at fault, and pedestrian only at fault. Conversely, the probability of a fatal injury decreased: with increasing driver age, during the PM traffic peak, with traffic signal control, in inclement weather, on a curved roadway, at a crosswalk, and when walking along roadway.
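The heteroskedastic specification, in which the error scale varies with pedestrian age, can be sketched for a binary outcome. The paper itself uses a four-outcome heteroskedastic generalized extreme value model; this two-outcome version with invented coefficients only illustrates the scale mechanism:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# binary logit whose error scale grows with (standardized) age:
#   P(severe) = logistic( (b0 + b1*x) / exp(g*age) )
n = 20_000
x = rng.normal(size=n)
age = rng.normal(size=n)
b0, b1, g = -0.5, 1.2, 0.5
eta = (b0 + b1 * x) / np.exp(g * age)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

def negloglik(p):
    # log-likelihood of the heteroskedastic logit, written stably
    e = (p[0] + p[1] * x) / np.exp(p[2] * age)
    return -(y * e - np.logaddexp(0.0, e)).sum()

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
```

A positive scale coefficient `g` flattens the choice probabilities for older pedestrians, which is exactly the "variance of unobserved characteristics increases with age" hypothesis; with `g = 0` the model collapses to an ordinary logit, so the restriction is directly testable.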

  8. Application of LogitBoost Classifier for Traceability Using SNP Chip Data.

    Science.gov (United States)

    Kim, Kwondo; Seo, Minseok; Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability.
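LogitBoost fits an additive logistic model by Newton steps on a working response (the Friedman, Hastie and Tibshirani algorithm). A self-contained sketch with regression stumps on synthetic two-dimensional features, a stand-in for the SNP genotypes in the study (all settings ours, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)

# toy 2-class data with a diagonal decision boundary
n = 300
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def fit_stump(X, z, w):
    """Weighted least-squares regression stump over quantile thresholds."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
            L = X[:, j] <= thr
            if not L.any() or L.all():
                continue
            cl = np.average(z[L], weights=w[L])
            cr = np.average(z[~L], weights=w[~L])
            err = np.sum(w * (z - np.where(L, cl, cr)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, thr, cl, cr)
    return best[1:]

# LogitBoost: half-Newton steps on the working response z
F = np.zeros(n)
p = np.full(n, 0.5)
for _ in range(30):
    wgt = np.clip(p * (1.0 - p), 1e-5, None)     # Newton weights
    z = np.clip((y - p) / wgt, -4.0, 4.0)        # working response, clipped
    j, thr, cl, cr = fit_stump(X, z, wgt)
    F += 0.5 * np.where(X[:, j] <= thr, cl, cr)
    p = 1.0 / (1.0 + np.exp(-2.0 * F))           # p = P(y = 1 | x)

acc = float(np.mean((p > 0.5) == (y == 1)))
```

The clipping of the weights and working response follows the stabilization suggested in the original LogitBoost paper; the study itself presumably used an off-the-shelf implementation rather than hand-rolled stumps.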

  9. A system dynamics model to determine products mix

    Directory of Open Access Journals (Sweden)

    Mahtab Hajghasem

    2014-02-01

This paper presents an implementation of a system dynamics model to determine an appropriate product mix by considering various factors such as labor, materials, overhead, etc. for an Iranian producer of cosmetic and sanitary products. The proposed model considers three hypotheses: the relationship between product mix and profitability, optimum production capacity, and keeping a minimum amount of storage to take advantage of low-cost production. The implementation of the system dynamics model in the VENSIM software package has confirmed all three hypotheses of the survey and suggests that, in order to reach a better product mix, it is necessary to reach optimum production planning, take advantage of all available production capacities, and use inventory management techniques.
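A minimal stock-and-flow sketch of the kind of model VENSIM implements, with an inventory stock, a capacity-limited production inflow, and a demand outflow (all names and values invented, not taken from the paper):

```python
import numpy as np

# Euler-integrated stock-and-flow model: inventory adjusts toward a target
dt, T = 0.25, 60.0
tau = 4.0                       # inventory adjustment time
demand, capacity = 100.0, 130.0
target, inventory = 500.0, 200.0

history = []
for _ in range(int(T / dt)):
    # desired production = demand plus a proportional inventory correction,
    # clipped to the plant's capacity and to non-negative output
    production = min(capacity, max(0.0, demand + (target - inventory) / tau))
    inventory += (production - demand) * dt   # stock = integral of net flow
    history.append(inventory)

inventory_final = history[-1]
```

While the capacity constraint binds, the stock climbs at a constant rate; once it stops binding, the model reduces to first-order exponential adjustment with time constant `tau`, so the inventory approaches the target without overshoot.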

  10. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
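The Laplace approximation evaluated in this paper can be illustrated on a single cluster of a Poisson random-intercept model, checked against plain Gauss-Hermite quadrature; the data and parameters below are invented:

```python
import numpy as np
from scipy.special import gammaln

# one cluster of a Poisson random-intercept GLMM:
#   y_j | b ~ Poisson(exp(beta + b)),   b ~ N(0, s2)
beta, s2 = 0.5, 0.8
y = np.array([1, 3, 2, 0, 4, 2, 1, 3])
n = len(y)

def h(b):
    """Log integrand: log p(y | b) + log p(b)."""
    return (np.sum(y * (beta + b) - np.exp(beta + b) - gammaln(y + 1))
            - 0.5 * b ** 2 / s2 - 0.5 * np.log(2.0 * np.pi * s2))

# Laplace: Newton iteration to the mode, then a Gaussian curvature correction
b = 0.0
for _ in range(50):
    grad = np.sum(y) - n * np.exp(beta + b) - b / s2
    hess = -n * np.exp(beta + b) - 1.0 / s2
    b -= grad / hess
laplace = h(b) + 0.5 * np.log(2.0 * np.pi / -hess)

# reference value: (non-adaptive) Gauss-Hermite quadrature with 40 nodes
xk, wk = np.polynomial.hermite.hermgauss(40)
nodes = np.sqrt(2.0 * s2) * xk
loglik = np.array([np.sum(y * (beta + bb) - np.exp(beta + bb) - gammaln(y + 1))
                   for bb in nodes])
gh = np.log(np.sum(wk * np.exp(loglik)) / np.sqrt(np.pi))
```

The log integrand is strictly concave in `b` for the Poisson family, so the Newton iteration always converges; the closeness of `laplace` to the quadrature value is the accuracy question the paper studies at full-model scale, where the quadrature alternative is no longer affordable.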

  11. Longitudinal mixed-effects models for latent cognitive function

    NARCIS (Netherlands)

    van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela

    2015-01-01

    A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response

  12. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  13. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

    Application of mixed models for the assessment of genotype and environment interactions in cotton ( Gossypium hirsutum ) cultivars in Mozambique. ... The cultivars ISA 205, STAM 42 and REMU 40 showed superior productivity when they were selected by the Harmonic Mean of Genotypic Values (HMGV) criterion in relation ...

  14. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  15. Introduction to models of neutrino masses and mixings

    International Nuclear Information System (INIS)

    Joshipura, Anjan S.

    2004-01-01

    This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are i) different types of neutrino masses ii) structure of neutrino masses and mixing needed to understand neutrino oscillation results iii) mechanism to generate neutrino masses in gauge theories and iv) discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)

  16. The 4s web-marketing mix model

    NARCIS (Netherlands)

    Constantinides, Efthymios

    2002-01-01

    This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm,

  17. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors

  18. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
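With exactly two sources and one isotope, the mixing model described in this record has a closed-form solution; a minimal sketch (the function name and δ-values are illustrative, and real applications must handle the over-determined many-source case the abstract refers to):

```python
def two_source_fraction(d_mix, d_a, d_b):
    """Solve the standard two-source, one-isotope linear mixing model
    d_mix = f * d_a + (1 - f) * d_b for the source-A fraction f."""
    if d_a == d_b:
        raise ValueError("sources are isotopically indistinguishable")
    return (d_mix - d_b) / (d_a - d_b)
```

For example, a mixture at δ = -24.0 per mil between sources at -28.0 and -12.0 per mil implies a 0.75 contribution from the first source; with more sources than isotope tracers the system becomes under-determined, which is the "too many sources" problem the abstract raises.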

  19. Metabolic modeling of mixed substrate uptake for polyhydroxyalkanoate (PHA) production

    NARCIS (Netherlands)

    Jiang, Y.; Hebly, M.; Kleerebezem, R.; Muyzer, G.; van Loosdrecht, M.C.M.

    2011-01-01

    Polyhydroxyalkanoate (PHA) production by mixed microbial communities can be established in a two-stage process, consisting of a microbial enrichment step and a PHA accumulation step. In this study, a mathematical model was constructed for evaluating the influence of the carbon substrate composition

  20. Fluctuations in a mixed IS-LM business cycle model

    Directory of Open Access Journals (Sweden)

    Hamad Talibi Alaoui

    2008-09-01

    Full Text Available In the present paper, we extend a delayed IS-LM business cycle model by introducing an additional advance (anticipated capital stock) in the investment function. The resulting model is represented in terms of mixed differential equations. Taking the deviating argument $\tau$ (advance and delay) as a bifurcation parameter, we investigate the local stability and the local Hopf bifurcation. Some numerical simulations are also given to support the theoretical analysis.

  1. Configuration mixing in the sdg interacting boson model

    International Nuclear Information System (INIS)

    Bouldjedri, A; Van Isacker, P; Zerguine, S

    2005-01-01

    A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit

  2. Configuration mixing in the sdg interacting boson model

    Energy Technology Data Exchange (ETDEWEB)

    Bouldjedri, A [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria); Van Isacker, P [GANIL, BP 55027, F-14076 Caen cedex 5 (France); Zerguine, S [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria)

    2005-11-01

    A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit.

  3. Ill-posedness in modeling mixed sediment river morphodynamics

    Science.gov (United States)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.

  4. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding to it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change considerably when agents who play the majority game are added to the MG, and that the change of local volatilities depends strongly on the combination of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. An application of the mix-game model is also given as an example.
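A toy sketch of one mix-game round, under strongly simplifying assumptions (agents choose sides uniformly at random rather than by strategies and memory tables, so this only illustrates the minority/majority payoff split, not the paper's full model):

```python
import random

def mix_game_round(n_minority, n_majority, rng):
    """One round of a toy mix-game: every agent picks side 0 or 1.
    Minority-game agents win if they end on the minority side;
    majority-game agents win on the majority side.  Returns the
    number of winners in each group (ties broken arbitrarily)."""
    choices_min = [rng.randint(0, 1) for _ in range(n_minority)]
    choices_maj = [rng.randint(0, 1) for _ in range(n_majority)]
    ones = sum(choices_min) + sum(choices_maj)
    total = n_minority + n_majority
    minority_side = 1 if ones < total - ones else 0
    win_min = sum(1 for c in choices_min if c == minority_side)
    win_maj = sum(1 for c in choices_maj if c != minority_side)
    return win_min, win_maj
```

In the actual model each agent scores a small set of lookup-table strategies against a shared public history; the random choice here stands in for that decision step only.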

  5. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher-order moments of the risk-neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk-neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared...

  6. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  7. Production, decay, and mixing models of the iota meson. II

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.

    1987-01-01

    A five-channel mixing model for the ground and radially excited isoscalar pseudoscalar states and a glueball is presented. The model extends previous work by including two-body unitary corrections, following the technique of Toernqvist. The unitary corrections include contributions from three classes of two-body intermediate states: pseudoscalar-vector, pseudoscalar-scalar, and vector-vector states. All necessary three-body couplings are extracted from decay data. The solution of the mixing model provides information about the bare mass of the glueball and the fundamental quark-glue coupling. The solution also gives the composition of the wave function of the physical states in terms of the bare quark and glue states. Finally, it is shown how the coupling constants extracted from decay data can be used to calculate the decay rates of the five physical states to all two-body channels

  8. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.93 micron channel was extracted and used with the two reflective channels (0.58-0.68 micron and 0.725-1.1 micron) to run a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of unmixing techniques when using coarse resolution data for global studies.
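For two endmembers, a sum-to-one constrained least-squares unmixing of the kind described in this record reduces to a one-dimensional closed form; a minimal sketch (band values and names are illustrative assumptions; the paper's model uses three components and three AVHRR channels):

```python
def unmix_two_endmembers(pixel, em1, em2):
    """Closed-form sum-to-one least-squares unmixing for two
    endmembers: find f in [0, 1] minimising
    || pixel - f*em1 - (1-f)*em2 ||^2 over the spectral bands."""
    num = sum((x - b) * (a - b) for x, a, b in zip(pixel, em1, em2))
    den = sum((a - b) ** 2 for a, b in zip(em1, em2))
    f = num / den
    return min(1.0, max(0.0, f))  # clip to enforce the fraction constraints
```

With more endmembers the same normal-equations idea applies but requires a constrained solver (e.g. non-negative least squares) rather than a scalar formula.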

  9. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamics code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.

  10. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, was investigated during the period 22-28 June 1993 from both an experimental and a modelling point of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes to estimate mixing layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed against radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the predicted mixed layer depths were compared with values estimated by a simple box model whose input requires hourly measurements of air concentrations and ground flux of ²²²Rn. (LN)

  11. Modelling the development of mixing height in near equatorial region

    Energy Technology Data Exchange (ETDEWEB)

    Samah, A.A. [Univ. of Malaya, Air Pollution Research Unit, Kuala Lumpur (Malaysia)

    1997-10-01

    Most current air pollution models were developed for mid-latitude conditions, and as such many of the empirical parameters used were based on observations taken in the mid-latitude boundary layer, which is physically different from the equatorial boundary layer. In the equatorial boundary layer the Coriolis parameter f is small or zero, and moisture plays a more important role in the control of stability and the surface energy balance. Therefore air pollution models such as OMLMULTI or ADMS, which were basically developed for mid-latitude conditions, must be applied with some caution and need some adaptation to properly simulate the properties of the equatorial boundary layer. This work elucidates some of the problems of modelling the evolution of mixing height in the equatorial region. The mixing height estimates were compared with routine observations taken during severe air pollution episodes in Malaysia. (au)

  12. Numerical modeling of two-phase binary fluid mixing using mixed finite elements

    KAUST Repository

    Sun, Shuyu

    2012-07-27

    Diffusion coefficients of dense gases in liquids can be measured by considering two-phase binary nonequilibrium fluid mixing in a closed cell with a fixed volume. This process is based on convection and diffusion in each phase. Numerical simulation of the mixing often requires accurate algorithms. In this paper, we design two efficient numerical methods for simulating the mixing of two-phase binary fluids in one-dimensional, highly permeable media. A mathematical model for isothermal compositional two-phase flow in porous media is established based on Darcy's law, material balance, local thermodynamic equilibrium for the phases, and diffusion across the phases. The time-lag and operator-splitting techniques are used to decompose each convection-diffusion equation into two steps: a diffusion step and a convection step. The mixed finite element (MFE) method is used for the diffusion equation because it can achieve a high-order and stable approximation of both the scalar variable and the diffusive fluxes across grid-cell interfaces. We employ the characteristic finite element method with a moving mesh to track the liquid-gas interface. Based on the above schemes, we propose two methods: a single-domain and a two-domain method. The main difference between the two methods is that the two-domain method utilizes the assumption of a sharp interface between the two fluid phases, while the single-domain method allows fractional saturation levels. The two-domain method treats the gas domain and the liquid domain separately. Because the liquid-gas interface moves with time, the two-domain method needs to work with a moving mesh. On the other hand, the single-domain method allows the use of a fixed mesh. We derive the formulas to compute the diffusive flux for MFE in both methods. The single-domain method is extended to multiple dimensions. Numerical results indicate that both methods can accurately describe the evolution of the pressure and liquid level. © 2012 Springer Science+Business Media B.V.
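The operator-splitting idea described in this record can be sketched in one dimension with an explicit scheme, assuming constant velocity and diffusivity on a periodic grid (a deliberate simplification: the paper uses mixed finite elements and a moving mesh, neither of which is reproduced here):

```python
def split_step(c, v, D, dx, dt):
    """One Lie operator-splitting step for 1-D advection-diffusion
    c_t + v c_x = D c_xx on a periodic grid: an explicit diffusion
    sub-step followed by a first-order upwind advection sub-step
    (v >= 0 assumed; dt must satisfy both stability limits)."""
    n = len(c)
    # diffusion sub-step (central differences)
    d = [c[i] + D * dt / dx**2 * (c[(i + 1) % n] - 2 * c[i] + c[(i - 1) % n])
         for i in range(n)]
    # advection sub-step (upwind for v >= 0)
    return [d[i] - v * dt / dx * (d[i] - d[(i - 1) % n]) for i in range(n)]
```

Both sub-steps conserve total mass on the periodic grid, which is one property the time-lag/operator-splitting decomposition is meant to preserve.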

  13. A marketing mix model for a complex and turbulent environment

    Directory of Open Access Journals (Sweden)

    R. B. Mason

    2007-12-01

    Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company’s external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests that destabilising marketing activities are more effective, whereas stabilising activities are more effective in simple, stable environments. The model therefore proposes predominantly destabilising tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

  14. Wax Precipitation Modeled with Many Mixed Solid Phases

    DEFF Research Database (Denmark)

    Heidemann, Robert A.; Madsen, Jesper; Stenby, Erling Halfdan

    2005-01-01

    The behavior of the Coutinho UNIQUAC model for solid wax phases has been examined. The model can produce as many mixed solid phases as the number of waxy components. In binary mixtures, the solid rich in the lighter component contains little of the heavier component but the second phase shows sub......-temperature and low-temperature forms, are pure. Model calculations compare well with the data of Pauly et al. for C18 to C30 waxes precipitating from n-decane solutions. (C) 2004 American Institute of Chemical Engineers....

  15. Analysis of a PDF model in a mixing layer case

    International Nuclear Information System (INIS)

    Minier, J.P.; Pozorski, J.

    1996-04-01

    A recent turbulence model put forward by Pope (1991) in the context of PDF modeling has been applied to a mixing layer case. This model solves the one-point joint velocity-dissipation pdf equation by simulating the instantaneous behaviour of a large number of Lagrangian fluid particles. Closure of the evolution equations of these Lagrangian particles is based on diffusion stochastic processes. The paper reports numerical results and tries to analyse the physical meaning of some variables, in particular the dissipation-weighted kinetic energy and its relation with external intermittency. (authors). 14 refs., 7 figs

  16. Production, decay, and mixing models of the iota meson

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.; Bender, C.

    1984-01-01

    We solve a five-channel mixing problem involving eta, eta', zeta(1275), iota(1440), and a new hypothetical high-mass pseudoscalar state between 1600 and 1900 MeV. We obtain the quark and glue content of iota(1440). We compare two solutions to the mixing problem with iota(1440) production and decay data, and with quark-model predictions for bare masses. In one solution the iota(1440) is primarily a glueball. This solution is preferred by the production and decay data. In the other solution the iota(1440) is a radially excited (ss-bar) state. This solution is preferred by the quark-model picture for the bare masses. We judge the weight of the combined evidence to favor the glueball interpretation

  17. Mildly mixed coupled models vs. WMAP7 data

    International Nuclear Information System (INIS)

    La Vacca, Giuseppe; Bonometto, Silvio A.

    2011-01-01

    Mildly mixed coupled models include massive ν's and CDM-DE coupling. We present new tests of their likelihood against recent data including WMAP7, confirming that it exceeds that of ΛCDM, although only at the ∼2σ level. We then show the impact on the physics of the dark components of a ν-mass detection in ³H β-decay or 0νββ-decay experiments.

  18. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, N can be quite large. Methods that do not account for the correlation structure can...

  19. GUT and flavor models for neutrino masses and mixing

    Science.gov (United States)

    Meloni, Davide

    2017-10-01

    In the recent years experiments have established the existence of neutrino oscillations and most of the oscillation parameters have been measured with a good accuracy. However, in spite of many interesting ideas, no real illumination was sparked on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.

  20. The 4s web-marketing mix model

    OpenAIRE

    Constantinides, Efthymios

    2002-01-01

    This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any st...

  1. Study on system dynamics of evolutionary mix-game models

    Science.gov (United States)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The mix-game model is a modification of the agent-based minority game (MG) model, which is used to simulate real financial markets. Unlike in the MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. These two groups of agents have different bounded abilities to deal with historical information and to count their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if the winning rate of an agent is smaller than a threshold, it copies the best strategies another agent has, and agents repeat such evolution at certain time intervals. Through simulations this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the thresholds of Group 1 increase; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the thresholds of Group 2 increase; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of system dynamics under different time intervals of strategy change are qualitatively similar to each other, but differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system behaves more and more stably and the performance of agents in both groups also improves.

  2. Stochastic scalar mixing models accounting for turbulent frequency multiscale fluctuations

    International Nuclear Information System (INIS)

    Soulard, Olivier; Sabel'nikov, Vladimir; Gorokhovski, Michael

    2004-01-01

    Two new scalar micromixing models accounting for a turbulent frequency scale distribution are investigated. These models were derived by Sabel'nikov and Gorokhovski [Second International Symposium on Turbulence and Shear Flow Phenomena, Royal Institute of Technology (KTH), Stockholm, Sweden, June 27-29, 2001] using a multiscale extension of the classical interaction by exchange with the mean (IEM) and Langevin models. They are called the extended IEM (EIEM) and extended Langevin (ELM) models, respectively. The EIEM and ELM models are tested against DNS results in the case of the decay of a homogeneous scalar field in homogeneous turbulence. This comparison leads to a reformulation of the law governing the mixing frequency distribution. Finally, the asymptotic behaviour of the modeled PDF is discussed.
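The classical IEM model that the EIEM extends relaxes each notional particle's scalar towards the ensemble mean; a minimal sketch with a single fixed mixing frequency (the extended models instead draw the frequency from a multiscale distribution, which is not modeled here):

```python
def iem_step(phis, omega, c_phi, dt):
    """One explicit Euler step of the IEM (interaction by exchange
    with the mean) micromixing model: each notional particle's scalar
    phi relaxes towards the ensemble mean at rate (c_phi / 2) * omega."""
    mean = sum(phis) / len(phis)
    return [p - 0.5 * c_phi * omega * (p - mean) * dt for p in phis]
```

Two properties make this a standard baseline: the ensemble mean is conserved exactly, while the scalar variance decays at rate c_phi * omega, mimicking micromixing-driven decay of a homogeneous scalar field.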

  3. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    Science.gov (United States)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our Greenhouse Gas emissions shift the overall balance between absorbed and emitted radiation, causing Global Warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive Climate Change and Climate Variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of Climate Change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impact of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  4. A Linear Mixed-Effects Model of Wireless Spectrum Occupancy

    Directory of Open Access Journals (Sweden)

    Pagadarai Srikanth

    2010-01-01

    We provide regression-analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within a city as well as with other cities. By applying mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of licensed signal transmissions as a function of several predictor variables.

  5. Normal and Special Models of Neutrino Masses and Mixings

    CERN Document Server

    Altarelli, Guido

    2005-01-01

    One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.

  6. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixture test and study its size and power using Monte Carlo simulations.
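The autocorrelation-based ingredient of such portmanteau tests is typically a Ljung-Box type statistic; a minimal sketch of that standard statistic (not the authors' mixed test, whose PACF component and asymptotic distribution are specific to the paper):

```python
def ljung_box(resid, max_lag):
    """Ljung-Box portmanteau statistic
        Q = n (n + 2) * sum_{k=1}^{m} r_k^2 / (n - k),
    where r_k is the lag-k sample autocorrelation of the residuals.
    Under an adequate model Q is approximately chi-square distributed."""
    n = len(resid)
    mean = sum(resid) / n
    dev = [x - mean for x in resid]
    c0 = sum(d * d for d in dev)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = sum(dev[t] * dev[t - k] for t in range(k, n)) / c0
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q
```

A strongly autocorrelated residual series (e.g. an unremoved trend) produces a large Q, flagging dependence the fitted model failed to explain; a mixed test would combine such a statistic with its PACF analogue.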

  7. CP violation and flavour mixing in the standard model

    International Nuclear Information System (INIS)

    Ali, A.; London, D.

    1995-08-01

    We review and update the constraints on the parameters of the quark flavour mixing matrix V_CKM in the standard model and estimate the resulting CP asymmetries in B decays, taking into account recent experimental and theoretical developments. In performing our fits, we use inputs from the measurements of the following quantities: (i) |ε|, the CP-violating parameter in K decays, (ii) ΔM_d, the mass difference due to B⁰_d - anti-B⁰_d mixing, (iii) the matrix elements |V_cb| and |V_ub|, (iv) B-hadron lifetimes, and (v) the top quark mass. The experimental input in points (ii)-(v) has improved compared to our previous fits. With the updated CKM matrix we present the currently allowed range of the ratios |V_td/V_ts| and |V_td/V_ub|, as well as the standard model predictions for the B⁰_s - anti-B⁰_s mixing parameter x_s (or, equivalently, ΔM_s) and the quantities sin 2α, sin 2β and sin²γ, which characterize the CP asymmetries in B decays. Various theoretical issues related to the so-called ''penguin pollution'', which are of importance for the determination of the phases α and γ from the CP asymmetries in B decays, are also discussed. (orig.)

  8. Criticality in the configuration-mixed interacting boson model: (1) U(5)-Q(χ)Q(χ) mixing

    International Nuclear Information System (INIS)

    Hellemans, V.; Van Isacker, P.; De Baerdemacker, S.; Heyde, K.

    2007-01-01

    The case of U(5)-Q(χ)Q(χ) mixing in the configuration-mixed interacting boson model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively.

  9. Nonlinear spectral mixing theory to model multispectral signatures

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C. [Los Alamos National Lab., NM (United States). Astrophysics and Radiation Measurements Group

    1996-02-01

    Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy-conserving computational method used in thermal engineering, and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method, the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape), such as leaves, and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance were modeled using the PROSPECT model for various amounts of water, chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry-based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the 6 visible through short-wave IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.

  10. Multiple equilibria and limit cycles in evolutionary games with Logit Dynamics

    NARCIS (Netherlands)

    Hommes, C.H.; Ochea, M.I.

    2012-01-01

    This note shows, by means of two simple, three-strategy games, the existence of stable periodic orbits and of multiple, interior steady states in a smooth version of the Best-Response Dynamics, the Logit Dynamics. The main finding is that, unlike Replicator Dynamics, generic Hopf bifurcation and

  11. Multiple steady states, limit cycles and chaotic attractors in evolutionary games with Logit Dynamics

    NARCIS (Netherlands)

    Hommes, C.H.; Ochea, M.I.

    2010-01-01

    This paper investigates, by means of simple, three and four strategy games, the occurrence of periodic and chaotic behaviour in a smooth version of the Best Response Dynamics, the Logit Dynamics. The main finding is that, unlike Replicator Dynamics, generic Hopf bifurcation and thus, stable limit

  12. Modelling of far-field mixing of ambient

    African Journals Online (AJOL)

    This study sought to describe the dynamics of advective and dispersive tr .... focused on environmental policy designs targeted at ... consequences such as welfare loss of outright ban on polluting ... optimal DO level. ... carried out a similar study to model the shadow price .... As A varies, we have a family of curves depicted in.

  13. Forecasting Costa Rican Quarterly Growth with Mixed-frequency Models

    Directory of Open Access Journals (Sweden)

    Adolfo Rodríguez Vargas

    2014-11-01

    Full Text Available We assess the utility of mixed-frequency models to forecast the quarterly growth rate of Costa Rican real GDP: we estimate bridge and MiDaS models with several lag lengths using information from the IMAE and compute forecasts (horizons of 0-4 quarters), which are compared among themselves, with those of ARIMA models, and with those resulting from forecast combinations. Combining the most accurate forecasts is most useful when forecasting in real time, whereas MiDaS forecasts are the best-performing overall: as the forecasting horizon increases, their precision is affected relatively little; their success rates in predicting the direction of changes in the growth rate are stable, and several forecasts remain unbiased. In particular, forecasts computed from simple MiDaS with 9 and 12 lags are unbiased at all horizons and information sets assessed, and show the highest number of significant differences in forecasting ability in comparison with all other models.
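MiDaS regressions tie a low-frequency target to high-frequency indicators through a parsimonious lag polynomial. A minimal sketch of the commonly used exponential Almon weighting (the parameter values and the monthly-to-quarterly aggregation below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights w_k ∝ exp(theta1*k + theta2*k^2), normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

def midas_regressor(monthly, theta1, theta2, n_lags):
    """Collapse the n_lags most recent monthly values into one quarterly regressor
    (the most recent month receives weight w_1)."""
    w = exp_almon_weights(theta1, theta2, n_lags)
    return float(np.dot(w, monthly[::-1][:n_lags]))
```

With only two shape parameters, the weighting stays parsimonious however many high-frequency lags are included, which is the point of MiDaS relative to unrestricted bridge regressions.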

  14. A local mixing model for deuterium replacement in solids

    International Nuclear Information System (INIS)

    Doyle, B.L.; Brice, D.K.; Wampler, W.R.

    1980-01-01

    A new model for hydrogen isotope exchange by ion implantation has been developed. The basic difference between the present approach and previous work is that the depth distribution of the implanted species is included. The outstanding feature of this local mixing model is that the only adjustable parameter is the saturation hydrogen concentration, which is specific to the target material and dependent only on temperature. The model is shown to give excellent agreement both with new data on H/D exchange in the low-Z coating materials VB2, TiC, TiB2, and B reported here and with previously reported data on stainless steel. The saturation hydrogen concentrations used to fit these data were 0.15, 0.25, 0.15, 0.45, and 1.00 times the atomic density, respectively. This model should be useful in predicting the recycling behavior of hydrogen isotopes in tokamak limiter and wall materials. (author)

  15. Negative binomial mixed models for analyzing microbiome count data.

    Science.gov (United States)

    Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun

    2017-01-03

    Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting associations between the microbiome and host environmental/clinical factors in correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.
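The over-dispersed count distribution at the core of such models is the negative binomial in its gamma-Poisson (NB2) parameterization, with mean mu and variance mu + mu²/k. A minimal sketch of its pmf (a generic illustration, not the BhGLM implementation):

```python
import math

def nb_logpmf(y, mu, k):
    """Log-pmf of a negative binomial with mean mu and variance mu + mu**2/k
    (gamma-Poisson mixture parameterization)."""
    return (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
            + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))

def nb_pmf(y, mu, k):
    return math.exp(nb_logpmf(y, mu, k))
```

As k grows large the extra-Poisson variance mu²/k vanishes and the distribution approaches a Poisson with mean mu; small k captures strong over-dispersion.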

  16. Subgrid models for mass and thermal diffusion in turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David H [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Li, Xiao - Lin [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re 300, 6000, 600K . Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without

  17. Mixing Modeling Analysis For SRS Salt Waste Disposition

    International Nuclear Information System (INIS)

    Lee, S.

    2011-01-01

    Nuclear waste at Savannah River Site (SRS) waste tanks consists of three different types of waste forms: the lighter salt solutions referred to as supernate, the precipitated salts as salt cake, and heavier fine solids as sludge. The sludge is settled on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which is only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from a blend tank such as Tank 50H to the SWPF feed tank. The work consists of two principal objectives involving two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion on solids entrainment during transfer operations. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended

  18. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  19. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best program using iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
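Since mixed model equations are symmetric positive definite, they can be solved by preconditioned conjugate gradient iteration. Below is a textbook Jacobi-preconditioned CG in NumPy (a generic sketch, not the authors' three-step iteration-on-data implementation):

```python
import numpy as np

def pcg(A, b, m_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for an SPD system A x = b,
    with a diagonal (Jacobi) preconditioner supplied as m_inv = 1/diag(A)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    z = m_inv * r                      # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate search direction
        rz = rz_new
    return x
```

The only operation touching A is the matrix-vector product A @ p, which is why iteration-on-data schemes that stream A from disk fit naturally into this algorithm.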

  20. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    Science.gov (United States)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae of mean and variance for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
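The mixed-gamma moments follow from conditioning on the wet/dry indicator: with dry probability p and Gamma(shape a, scale s) wet amounts, E[X] = (1-p)as and Var(X) = (1-p)as²(1+a) - ((1-p)as)², and independent variables add in both mean and variance. A sketch with a Monte Carlo sanity check (the parameter values are illustrative, not the Kuantan estimates):

```python
import numpy as np

def mixed_gamma_moments(p_zero, shape, scale):
    """Mean and variance of X = 0 with probability p_zero, else Gamma(shape, scale)."""
    q = 1.0 - p_zero
    mean = q * shape * scale
    second = q * (shape * scale**2 + (shape * scale) ** 2)   # E[X^2]
    return mean, second - mean**2

def sum_of_independent(moments_list):
    """Mean and variance of a sum of independent mixed-gamma variables."""
    means, variances = zip(*moments_list)
    return sum(means), sum(variances)
```

For example, the mean and variance of the sum of three station totals are just the three means and three variances added, which is what the paper's formulae for two and three variables reduce to under independence.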

  1. Modelling ice microphysics of mixed-phase clouds

    Science.gov (United States)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds have a significant role for the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To solve this problem of modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good basis for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with a sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. Recently, the SALSA module has been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  2. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  3. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  4. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed...... fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas...

  5. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles. In this article, we study the speed distribution for mixed bicycle traffic, which was ignored in the past. Field speed data were collected in Hangzhou, China, under different survey sites, traffic conditions, and percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used for testing the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models can fit the field data very well. Then, the relationships between mean speed and electric bicycle proportions were proposed using linear regression models, and the mean speed for purely electric bicycles or regular bicycles can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
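Fitting a log-normal to speed data has a closed-form maximum-likelihood estimate: the mean and standard deviation of the log speeds. A minimal sketch (the 17 km/h scale below is an illustrative assumption, not the Hangzhou data):

```python
import numpy as np

def fit_lognormal(speeds):
    """Closed-form MLE for a log-normal: mean and std of the log speeds."""
    logs = np.log(speeds)
    return logs.mean(), logs.std()

def lognormal_mean(mu, sigma):
    """Mean of a log-normal with log-scale parameters (mu, sigma)."""
    return float(np.exp(mu + 0.5 * sigma**2))
```

A Kolmogorov-Smirnov or chi-square test against the fitted distribution would then play the role of the goodness-of-fit step described in the abstract.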

  6. Spatial generalised linear mixed models based on distances.

    Science.gov (United States)

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  7. FACTORS THAT AFFECT TRANSPORT MODE PREFERENCE FOR GRADUATE STUDENTS IN THE NATIONAL UNIVERSITY OF MALAYSIA BY LOGIT METHOD

    Directory of Open Access Journals (Sweden)

    ALI AHMED MOHAMMED

    2013-06-01

    Full Text Available A study was carried out to examine the perceptions and preferences of students in choosing the type of transportation for their travels on the university campus. This study focused on providing personal transport users with road transport alternatives as a countermeasure aimed at shifting car users to other modes of transportation. In total, 456 questionnaires were collected to develop a model of transportation mode preferences. A logit model, estimated with SPSS, was used to identify the factors that affect the choice of transportation mode. Results indicated that reducing travel time by 70% would reduce the share of private-car users by 84%, while reducing travel cost was found to greatly improve the utilization of public modes. This study revealed that positive measures are needed to shift travellers from private modes to public ones; such measures contribute to travel time and travel cost reduction, hence improving the services and thereby contributing to sustainability.
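The mechanics of a logit mode-choice model can be sketched in a few lines: each mode gets a systematic utility, and choice probabilities are the softmax of the utilities. The coefficients, alternatives, and alternative-specific constant below are hypothetical placeholders, not the estimates from this study:

```python
import numpy as np

def logit_shares(utilities):
    """Multinomial logit choice probabilities from systematic utilities."""
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())            # subtract max for numerical stability
    return e / e.sum()

# Hypothetical utility coefficients (illustrative only)
B_TIME, B_COST = -0.08, -0.15          # per minute of travel time, per unit cost

def utility(time_min, cost, asc=0.0):
    """Linear-in-parameters systematic utility with an alternative-specific constant."""
    return asc + B_TIME * time_min + B_COST * cost
```

With negative time and cost coefficients, any reduction in a mode's travel time or cost raises its utility and therefore its predicted share, which is the qualitative mechanism behind the 70%/84% finding.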

  8. Mixing height derived from the DMI-HIRLAM NWP model, and used for ETEX dispersion modelling

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, J.H.; Rasmussen, A. [Danish Meteorological Inst., Copenhagen (Denmark)

    1997-10-01

    For atmospheric dispersion modelling it is of great significance to estimate the mixing height well. Mesoscale and long-range diffusion models using output from numerical weather prediction (NWP) models may well use NWP model profiles of wind, temperature and humidity in the computation of the mixing height. This is dynamically consistent, and enables calculation of the mixing height for predicted states of the atmosphere. In autumn 1994, the European Tracer Experiment (ETEX) was carried out with the objective of validating atmospheric dispersion models. The Danish Meteorological Institute (DMI) participates in the model evaluations with the Danish Emergency Response Model of the Atmosphere (DERMA), using NWP model data from the DMI version of the High Resolution Limited Area Model (HIRLAM) as well as from the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). In DERMA, the calculation of mixing heights is performed based on a bulk Richardson number approach. Comparing with tracer gas measurements for the first ETEX experiment, a sensitivity study is performed for DERMA. Using DMI-HIRLAM data, the study shows that optimum values of the critical bulk Richardson number in the range 0.15-0.35 are adequate. These results are in agreement with recent mixing height verification studies against radiosonde data. The fairly large range of adequate critical values is a signature of the robustness of the method. Direct verification results against observed mixing heights from operational radiosondes released under the ETEX plume are presented. (au) 10 refs.

  9. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    Science.gov (United States)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 - θ13.

  10. Models for fluid flows with heat transfer in mixed convection

    International Nuclear Information System (INIS)

    Mompean Munhoz da Cruz, G.

    1989-06-01

    Second-order models were studied in order to predict turbulent flows with heat transfer. The equations used correspond to the characteristic scales of turbulent flows. The order of magnitude of the terms of the equations is analyzed by using Reynolds and Peclet numbers. The two-equation (K-ε) model is applied in the hydrodynamic study. Two models are developed for the heat transfer analysis: the Prt + θ² model and the complete model. In the first model, the turbulent thermal diffusivity is calculated by using the Prandtl number for turbulent flow and an equation for the variance of the temperature fluctuation. The second model consists of three equations concerning: the turbulent heat flux, the variance of the temperature fluctuation, and its dissipation rate. The equations were validated by four experiments, characterized by the analysis of: the air flow after passing through a grid at constant average temperature and with a temperature gradient, an axisymmetric air jet subjected to high and low heating temperature, and the mixing (cold-hot) of two coaxial jets of sodium at high Peclet number. The complete model is shown to be the most suitable for the investigations presented [fr]

  11. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows one to easily compute risk-neutral predictive price densities which take into account parameter uncertainty....... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors...... measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different

  12. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
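A nonparametric check of the normality hypothesis can be sketched as a Kolmogorov-Smirnov-type sup distance between the empirical CDF of the standardized values (e.g. predicted random effects or residuals) and the standard normal CDF. This is a generic illustration using only the standard library, not the specific tests proposed in the paper:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normality_distance(sample):
    """Sup distance between the empirical CDF of the standardized sample
    and the N(0, 1) CDF (a Lilliefors-style statistic, without its null table)."""
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / n)
    zs = sorted((x - m) / s for x in sample)
    d = 0.0
    for i, zv in enumerate(zs):
        f = normal_cdf(zv)
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d
```

Because the mean and variance are estimated from the same sample, the usual Kolmogorov-Smirnov critical values do not apply; in practice the null distribution of such a statistic is obtained by simulation, much as the paper's tests require their own asymptotics.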

  13. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high-resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse-resolution satellite data. The reflective portion extracted from the middle-IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the constrained least squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of the unmixing techniques for application to coarse-resolution data for global studies.
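Constrained least squares unmixing solves for endmember fractions that best reproduce a pixel's spectrum subject to the fractions summing to one. A sketch that enforces the sum-to-one constraint with a heavily weighted extra row (the endmember spectra in the check are invented; a full implementation would also enforce non-negativity):

```python
import numpy as np

def unmix(endmembers, pixel, weight=1e4):
    """Sum-to-one constrained least-squares unmixing.

    endmembers: (n_bands, n_classes) matrix of pure-class spectra.
    The constraint sum(f) = 1 is imposed by appending a heavily
    weighted extra 'band' whose value is 1 for every class."""
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_classes))])
    b = np.append(np.asarray(pixel, dtype=float), weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

Applied per pixel across an image, the recovered fractions form one fraction image per endmember, as in the AVHRR application described above.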

  14. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
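    The anti-conservativeness of the t-as-z approach can be seen directly from the tail of the t distribution: with few denominator degrees of freedom, the probability that |t| exceeds the z critical value 1.96 is well above the nominal .05. A small illustration (degrees-of-freedom values chosen arbitrarily):

```python
from scipy import stats

# Nominal two-sided .05 test using the z critical value 1.96, while the
# statistic actually follows a t distribution with df degrees of freedom
for df in (5, 10, 20, 100):
    rate = 2 * stats.t.sf(1.96, df)
    print(df, round(rate, 4))
```

    The actual Type 1 error rate shrinks toward .05 only as the degrees of freedom grow, which is why the Kenward-Roger and Satterthwaite approximations matter most for small samples.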

  15. Estimating preferential flow in karstic aquifers using statistical mixed models.

    Science.gov (United States)

    Anaya, Angel A; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J; Meeker, John D; Alshawabkeh, Akram N

    2014-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models (SMMs) are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of both combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the SMMs used in the study. © 2013, National Ground Water Association.

  16. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  17. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  18. Patient choice modelling: how do patients choose their hospitals?

    Science.gov (United States)

    Smith, Honora; Currie, Christine; Chaiwuttisak, Pornpimol; Kyprianou, Andreas

    2018-06-01

    As an aid to predicting future hospital admissions, we compare use of the Multinomial Logit and the Utility Maximising Nested Logit models to describe how patients choose their hospitals. The models are fitted to real data from Derbyshire, United Kingdom, which lists the postcodes of more than 200,000 admissions to six different local hospitals. Both elective and emergency admissions are analysed for this mixed urban/rural area. For characteristics that may affect a patient's choice of hospital, we consider the distance of the patient from the hospital, the number of beds at the hospital and the number of car parking spaces available at the hospital, as well as several statistics publicly available on National Health Service (NHS) websites: an average waiting time, the patient survey score for ward cleanliness, the patient safety score and the inpatient survey score for overall care. The Multinomial Logit model is successfully fitted to the data. Results obtained with the Utility Maximising Nested Logit model show that nesting according to city or town may be invalid for these data; in other words, the choice of hospital does not appear to be preceded by choice of city. In all of the analysis carried out, distance appears to be one of the main influences on a patient's choice of hospital rather than statistics available on the Internet.
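    The core of a Multinomial Logit fit of this kind can be sketched with simulated data. The snippet below estimates a single distance coefficient by maximum likelihood for a three-hospital choice set; all numbers (sample size, distances, the true coefficient of -0.15) are invented for illustration and are not the Derbyshire data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n, J = 500, 3                       # patients, hospitals (hypothetical)
dist = rng.uniform(1, 30, (n, J))   # patient-hospital distances in km
beta_true = -0.15                   # invented disutility of distance

# Additive Gumbel noise yields multinomial-logit choice probabilities
u = beta_true * dist + rng.gumbel(size=(n, J))
choice = u.argmax(axis=1)

def nll(beta):
    # negative log-likelihood of the multinomial logit model
    v = beta[0] * dist
    return -(v[np.arange(n), choice] - logsumexp(v, axis=1)).sum()

beta_hat = minimize(nll, x0=[0.0]).x[0]
print(beta_hat)  # close to -0.15
```

    A negative estimate recovers the dominant role of distance reported in the abstract; real analyses add the other hospital attributes as further columns of the utility specification.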

  19. Linear models for sound from supersonic reacting mixing layers

    Science.gov (United States)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, on which we focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to what occurs in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  20. Modeling of Cd(II) sorption on mixed oxide

    International Nuclear Information System (INIS)

    Waseem, M.; Mustafa, S.; Naeem, A.; Shah, K.H.; Hussain, S.Y.; Safdar, M.

    2011-01-01

    Mixed oxide of iron and silicon (0.75 M Fe(OH)3:0.25 M SiO2) was synthesized and characterized by various techniques such as surface area analysis, point of zero charge (PZC), energy dispersive X-ray (EDX) spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) analysis. The uptake of Cd2+ ions on the mixed oxide increased with pH, temperature and metal ion concentration. Sorption data have been interpreted in terms of both the Langmuir and Freundlich models. The Xm values at pH 7 are found to be almost twice those at pH 5. The values of both ΔH and ΔS were found to be positive, indicating that the sorption process was endothermic and accompanied by the dehydration of Cd2+. Further, the negative value of ΔG confirms the spontaneity of the reaction. An ion exchange mechanism was suggested for Cd2+ ions at pH 5, whereas at pH 7 ion exchange was found to be coupled with non-specific adsorption of metal cations. (author)

  1. Modeling Intercity Mode Choice and Airport Choice in the United States

    OpenAIRE

    Ashiabor, Senanu Y.

    2007-01-01

    The aim of this study was to develop a framework to model travel choice behavior in order to estimate intercity travel demand at the national level in the United States. Nested and mixed logit models were developed to study national-level intercity transportation in the United States. A separate General Aviation airport choice model that estimates General Aviation person-trips and the number of aircraft operations through more than 3000 airports was also developed. The combination of the General Aviati...

  2. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  3. Subgrid models for mass and thermal diffusion in turbulent mixing

    International Nuclear Information System (INIS)

    Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H

    2010-01-01

    We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.

  4. Adaptability and stability of maize varieties using mixed model methodology

    Directory of Open Access Journals (Sweden)

    Walter Fernandes Meirelles

    2012-01-01

    The objective of this study was to evaluate the performance, adaptability and stability of corn cultivars simultaneously in unbalanced experiments, using the method of harmonic means of the relative performance of genetic values. The grain yield of 45 cultivars, including hybrids and varieties, was evaluated in 49 environments in two growing seasons. In the 2007/2008 growing season, 36 cultivars were evaluated, and in 2008/2009, 25 cultivars, of which 16 were used in both seasons. Statistical analyses were performed based on mixed models, considering genotypes as random and replications within environments as fixed factors. The experimental precision in the combined analyses was high (accuracy estimates > 92%). Despite the existence of genotype x environment interaction, hybrids and varieties with high adaptability and stability were identified. Results showed that the method of harmonic means of the relative performance of genetic values is a suitable method for maize breeding programs.
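    The harmonic mean of the relative performance of genetic values can be sketched numerically: predicted genotypic values are scaled by each environment's mean and then harmonic-averaged across environments, so genotypes that perform erratically are penalized. The values below are made up for illustration.

```python
import numpy as np

# Rows: genotypes; columns: environments (hypothetical predicted values)
gv = np.array([[5.2, 6.1, 4.8],
               [4.9, 5.0, 5.5],
               [6.0, 5.8, 5.1]])

rel = gv / gv.mean(axis=0)                       # performance relative to each environment
hmrpgv = rel.shape[1] / (1.0 / rel).sum(axis=1)  # harmonic mean across environments
print(hmrpgv)
```

    Because a harmonic mean never exceeds the arithmetic mean, unstable genotypes rank lower than their average performance alone would suggest, which is the sense in which the method rewards both adaptability and stability.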

  5. Latent Fundamentals Arbitrage with a Mixed Effects Factor Model

    Directory of Open Access Journals (Sweden)

    Andrei Salem Gonçalves

    2012-09-01

    We propose a single-factor mixed effects panel data model to create an arbitrage portfolio that identifies differences in firm-level latent fundamentals. Furthermore, we show that even though the characteristics that affect returns are unknown variables, it is possible to identify the strength of the combination of these latent fundamentals for each stock by following a simple approach using historical data. As a result, a trading strategy that bought the stocks with the best fundamentals (strong fundamentals portfolio) and sold the stocks with the worst ones (weak fundamentals portfolio) realized significant risk-adjusted returns in the U.S. market for the period between July 1986 and June 2008. To ensure robustness, we performed subperiod and seasonal analyses and adjusted for trading costs, and we found further empirical evidence that using a simple investment rule that identified these latent fundamentals from the structure of past returns can lead to profit.

  6. Modeling of mixing in stirred bioreactors 4. mixing time for aerated bacteria, yeasts and fungus broths

    Directory of Open Access Journals (Sweden)

    Cascaval Dan

    2004-01-01

    The mixing time for bioreactors depends mainly on the rheological properties of the broths, the biomass concentration and morphology, the mixing system characteristics and the fermentation conditions. For quantifying the influence of these factors on the mixing efficiency of stirred bioreactors, aerated broths of bacteria (P. shermanii), yeasts (S. cerevisiae) and fungi (P. chrysogenum, as free mycelia and mycelial aggregates) of different concentrations have been investigated using a laboratory bioreactor with a double turbine impeller. The experimental data indicated that the influence of the rotation speed, aeration rate and stirrer positions on the mixing intensity strongly differs from one system to another and must be correlated with the microorganism characteristics, namely the biomass concentration and morphology. Moreover, compared with non-aerated broths, the variations of the mixing time with the considered parameters are very different, due to the complex flow mechanism of gas-liquid dispersions. By means of the experimental data and using a multiregression analysis method, mathematical correlations for the mixing time of the general form tm = (a1·Cx² + a2·Cx + a3·lg Va + a4·N² + a5·N + a6)/(a7·L² + a8·L + a9) were established. The proposed equations offer good agreement with the experiments, the average deviation being ±6.7% to ±9.4%, and are adequate for the flow regime Re < 25,000.

  7. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.

  8. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  9. Differential expression analysis for RNAseq using Poisson mixed models.

    Science.gov (United States)

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
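    The over-dispersion that such a Poisson mixed model captures is easy to demonstrate: adding a normally distributed random effect on the log scale makes the count variance exceed the Poisson mean. The parameter values below are arbitrary illustration values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 2.0, 0.5   # log-scale mean and random-effect variance (arbitrary)

# Poisson counts with a log-normal random effect per observation
lam = np.exp(mu + rng.normal(0.0, np.sqrt(sigma2), size=100_000))
y = rng.poisson(lam)

# A plain Poisson would have variance ~= mean; here the variance is far larger
print(y.mean(), y.var())
```

    The extra, relatedness-structured random effect in the paper's model plays the same variance-inflating role, but with correlations across samples rather than independent noise.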

  10. Research on mixed network architecture collaborative application model

    Science.gov (United States)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative application is one good resolution. Collaborative applications must resolve four main problems: consistency and co-edit conflict, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data. It reduces the network load and improves the access and handling of spatial data, especially when editing the spatial data. With agent technology, we make full use of its intelligent characteristics for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  11. The transition model test for serial dependence in mixed-effects models for binary data

    DEFF Research Database (Denmark)

    Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders

    2017-01-01

    Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...

  12. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    Science.gov (United States)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
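    For concreteness, the classic interaction-by-exchange-with-the-mean (IEM) closure, the usual baseline against which improved mixing models like the one above are measured, can be sketched as follows: particle concentrations relax toward the ensemble mean, and the concentration variance decays at a rate set by the mixing frequency. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.uniform(0.0, 1.0, 10_000)   # notional particle concentrations
C_phi, omega, dt = 2.0, 1.0, 0.01   # mixing constant, mixing frequency, time step

var0 = c.var()
for _ in range(100):                 # integrate the IEM relaxation to t = 1
    c += -0.5 * C_phi * omega * (c - c.mean()) * dt

# IEM predicts var(t) = var(0) * exp(-C_phi * omega * t), mean unchanged
print(var0, c.var())
```

    The mean is preserved exactly while the variance decays by the factor exp(-C_phi·omega·t), which is precisely the link between the mixing model and the concentration variance equation exploited in the abstract.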

  13. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    Science.gov (United States)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
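    The heuristic rules themselves are simple to state in code. A minimal sketch of the conjunctive rule with thresholds (attribute scores and thresholds invented for illustration): an alternative survives only if every attribute clears its threshold, in contrast to a logit model, where a strong attribute can compensate for a weak one.

```python
import numpy as np

# Rows: alternatives (e.g. stores); columns: attribute scores in [0, 1]
attributes = np.array([[0.8, 0.6, 0.9],
                       [0.9, 0.4, 0.7],
                       [0.5, 0.9, 0.8]])
thresholds = np.array([0.6, 0.5, 0.6])   # one threshold per attribute

# Conjunctive rule: acceptable only if ALL attributes meet their thresholds
acceptable = (attributes >= thresholds).all(axis=1)
print(acceptable)  # [ True False False]
```

    The threshold heterogeneity the paper introduces amounts to letting the threshold vector vary across decision makers rather than being fixed.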

  14. Mixed dark matter in left-right symmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Berlin, Asher [Department of Physics, University of Chicago,Chicago, Illinois 60637 (United States); Fox, Patrick J. [Theoretical Physics Department, Fermilab,Batavia, Illinois 60510 (United States); Hooper, Dan [Center for Particle Astrophysics, Fermi National Accelerator Laboratory,Batavia, Illinois 60510 (United States); Department of Astronomy and Astrophysics, University of Chicago,Chicago, Illinois 60637 (United States); Mohlabeng, Gopolang [Center for Particle Astrophysics, Fermi National Accelerator Laboratory,Batavia, Illinois 60510 (United States); Department of Physics and Astronomy, University of Kansas,Lawrence, Kansas 66045 (United States)

    2016-06-08

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W{sup ′} boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, g{sub R}=g{sub L}. This region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  15. Extended Mixed-Efects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  16. Numerical modelling of the atmospheric mixing-layer diurnal evolution

    International Nuclear Information System (INIS)

    Molnary, L. de.

    1990-03-01

    This paper introduces a numerical procedure to determine the temporal evolution of the height, potential temperature and mixing ratio in the atmospheric mixing layer. The time and spatial derivatives were evaluated via a forward-in-time scheme to predict the local evolution of the mixing-layer parameters, and a forward-in-time, upstream-in-space scheme to predict the evolution of the mixing layer over a flat region with a one-dimensional advection component. The surface turbulent fluxes of sensible and latent heat were expressed using a simple sine wave that is a function of the hour of day and the kind of surface (water or country). (author) [pt
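    The advective part of such a scheme can be sketched in a few lines. Below, a warm anomaly in a mixed-layer property is advected with a forward-in-time, upstream-in-space update; the grid, wind speed and anomaly are invented, and the scheme is stable because the Courant number u·Δt/Δx is well below one.

```python
import numpy as np

nx, dx, dt, u = 50, 1000.0, 10.0, 5.0   # cells, spacing (m), step (s), wind (m/s)
theta = np.full(nx, 300.0)              # potential temperature (K)
theta[20:30] = 302.0                    # initial warm anomaly

cfl = u * dt / dx                       # Courant number = 0.05 (stable: <= 1)
for _ in range(200):                    # advect for 2000 s, i.e. 10 cells
    theta[1:] = theta[1:] - cfl * (theta[1:] - theta[:-1])

# Upstream differencing is monotone: no new extrema; the anomaly moves downwind
print(theta.max(), int(np.argmax(theta)))
```

    The monotonicity (no overshoots) comes at the price of numerical diffusion, which smears the anomaly as it travels.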

  17. Oxygen reduction kinetics on mixed conducting SOFC model cathodes

    Energy Technology Data Exchange (ETDEWEB)

    Baumann, F.S.

    2006-07-01

    The kinetics of the oxygen reduction reaction at the surface of mixed conducting solid oxide fuel cell (SOFC) cathodes is one of the main limiting factors to the performance of these promising systems. For "realistic" porous electrodes, however, it is usually very difficult to separate the influence of different resistive processes. Therefore, a suitable, geometrically well-defined model system was used in this work to enable an unambiguous distinction of individual electrochemical processes by means of impedance spectroscopy. The electrochemical measurements were performed on dense thin film microelectrodes, prepared by PLD and photolithography, of mixed conducting perovskite-type materials. The first part of the thesis consists of an extensive impedance spectroscopic investigation of La0.6Sr0.4Co0.8Fe0.2O3 (LSCF) microelectrodes. An equivalent circuit was identified that describes the electrochemical properties of the model electrodes appropriately and enables an unambiguous interpretation of the measured impedance spectra. Hence, the dependencies of individual electrochemical processes such as the surface exchange reaction on a wide range of experimental parameters, including temperature, dc bias and oxygen partial pressure, could be studied. As a result, a comprehensive set of experimental data has been obtained, which was previously not available for a mixed conducting model system. In the course of the experiments on the dc bias dependence of the electrochemical processes, a new and surprising effect was discovered: it could be shown that a short but strong dc polarisation of an LSCF microelectrode at high temperature drastically improves its electrochemical performance with respect to the oxygen reduction reaction. The electrochemical resistance associated with the oxygen surface exchange reaction, initially the dominant contribution to the total electrode resistance, can be reduced by two orders of magnitude.

  18. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects.
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
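The intraclass correlation mentioned in this record can be illustrated with a minimal sketch: for a one-way random-effects layout (subjects as the random factor), ICC(1) is estimated from the between- and within-subject mean squares. The simulation below is illustrative only; all parameter values and variable names are assumptions, and this is not the AFNI/LME implementation the abstract describes.

```python
import random
from statistics import mean

random.seed(42)

# Simulated one-way random-effects data: y_ij = mu + b_i + e_ij
n, k = 200, 4                     # subjects, measurements per subject
sigma_b, sigma_e = 2.0, 1.0       # assumed standard deviations of components
data = []
for _ in range(n):
    b = random.gauss(0.0, sigma_b)
    data.append([10.0 + b + random.gauss(0.0, sigma_e) for _ in range(k)])

# Between- and within-subject mean squares
grand = mean(v for row in data for v in row)
msb = k * sum((mean(row) - grand) ** 2 for row in data) / (n - 1)
msw = sum((v - mean(row)) ** 2 for row in data for v in row) / (n * (k - 1))

# ICC(1): proportion of total variance attributable to subjects;
# the theoretical value here is sigma_b^2 / (sigma_b^2 + sigma_e^2) = 0.8
icc = (msb - msw) / (msb + (k - 1) * msw)
```

With the assumed variance components, the estimate should land near the theoretical value of 0.8.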

  19. Prediction of stock markets by the evolutionary mix-game model

    Science.gov (United States)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

This paper presents efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We apply three methods to improve the original mix-game model by giving agents the ability to evolve their strategies, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve prediction accuracy when proper parameters are chosen.

  20. Metabolic modelling of polyhydroxyalkanoate copolymers production by mixed microbial cultures

    Directory of Open Access Journals (Sweden)

    Reis Maria AM

    2008-07-01

Abstract Background This paper presents a metabolic model describing the production of polyhydroxyalkanoate (PHA) copolymers in mixed microbial cultures, using mixtures of acetic and propionic acid as carbon source material. Material and energy balances were established on the basis of previously elucidated metabolic pathways. Equations were derived for the theoretical yields for cell growth and PHA production on mixtures of acetic and propionic acid as functions of the oxidative phosphorylation efficiency (P/O ratio). The oxidative phosphorylation efficiency was estimated from rate measurements, which in turn allowed the estimation of the theoretical yield coefficients. Results The model was validated with experimental data collected in a sequencing batch reactor (SBR) operated under varying feeding conditions: feeding of acetic and propionic acid separately (control experiments), and feeding of acetic and propionic acid simultaneously. Two different feast-and-famine culture enrichment strategies were studied: (i) with acetate or (ii) with propionate as carbon source material. Metabolic flux analysis (MFA) was performed for the different feeding conditions and culture enrichment strategies. Flux balance analysis (FBA) was used to calculate optimal feeding scenarios for the production of high-quality PHA polymers, where it was found that a suitable polymer would be obtained when acetate is fed in excess and the feeding rate of propionate is limited to ~0.17 C-mol/(C-mol.h). The results were compared with published pure-culture metabolic studies. Conclusion Acetate was more conducive toward the enrichment of a microbial culture with higher PHA storage fluxes and yields as compared to propionate. The P/O ratio was influenced not only by the selected microbial culture but also by the carbon substrate fed to each culture, with higher P/O ratio values consistently observed for acetate than for propionate. MFA studies suggest that when mixtures of

  1. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data: more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters.
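The "explode" step described in this record can be sketched as follows; the survival records, cut points, and column names are invented for illustration. Each subject contributes one pseudo-Poisson observation per baseline-hazard piece it enters, with the event indicator as response and log exposure time as offset.

```python
import math

# Hypothetical clustered survival records: (id, time, event, cluster)
records = [(1, 2.3, 1, "A"), (2, 5.0, 0, "A"), (3, 7.1, 1, "B")]
cuts = [0.0, 2.0, 4.0, 6.0, 8.0]  # piece boundaries, fixed in advance

# "Explode" each subject into one row per piece it survives into:
# response d is 1 only in the piece where the event occurs, and the
# offset log(exposure) carries the time spent in that piece.
rows = []
for sid, time, event, cluster in records:
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if time <= lo:
            break
        exposure = min(time, hi) - lo
        d = int(event == 1 and lo < time <= hi)
        rows.append({"id": sid, "piece": j, "d": d,
                     "offset": math.log(exposure), "cluster": cluster})
```

The exploded table would then be handed to any Poisson GLMM fitter with a random effect per cluster; subject 3 (event at 7.1) contributes four rows, with its event falling in the fourth piece.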

  2. Microergodicity effects on ebullition of methane modelled by Mixed Poisson process with Pareto mixing variable

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : environmental chemistry * ebullition of methane * mixed poisson processes * renewal process * pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  3. Chlorophyll modulation of mixed layer thermodynamics in a mixed-layer isopycnal general circulation model - An example from Arabian Sea and Equatorial Pacific

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.

    and supported by quasi-steady upwelling. Remotely sensed chlorophyll pigment concentrations from the Coastal Zone Color Scanner (CZCS) are used to investigate the chlorophyll modulation of ocean mixed layer thermodynamics in a bulk mixed-layer model, embedded...

  4. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

For the calculation of nonpremixed turbulent flames with thin reaction zones, the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, involving different initial scalar-field lengthscales. (author)

  5. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...
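The ΔΔc(t) quantity named in this record is, in its simplest textbook form, plain arithmetic on cycle thresholds. The Ct values below are invented for illustration, and the calculation assumes 100% amplification efficiency (one doubling per cycle); it is not the nonlinear mixed-model estimator the abstract proposes.

```python
# Hypothetical cycle-threshold (Ct) values; lower Ct means more template
ct_target_treated, ct_ref_treated = 22.0, 18.0
ct_target_control, ct_ref_control = 25.0, 18.5

# Normalize the target gene to the reference gene within each condition
dct_treated = ct_target_treated - ct_ref_treated   # 4.0
dct_control = ct_target_control - ct_ref_control   # 6.5

# Delta-delta-Ct and fold change, assuming perfect doubling per cycle
ddct = dct_treated - dct_control                   # -2.5
fold_change = 2.0 ** (-ddct)                       # up-regulation in treated
```

A negative ΔΔCt corresponds to higher expression in the treated condition.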

  6. PREDICTION OF THE MIXING ENTHALPIES OF BINARY LIQUID ALLOYS BY MOLECULAR INTERACTION VOLUME MODEL

    Institute of Scientific and Technical Information of China (English)

    H.W.Yang; D.P.Tao; Z.H.Zhou

    2008-01-01

The mixing enthalpies of 23 binary liquid alloys are calculated with the molecular interaction volume model (MIVM), a two-parameter model whose parameters are the partial molar infinite-dilution mixing enthalpies. The predicted values agree with the experimental data, indicating that the model is reliable and convenient.

  7. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  8. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Science.gov (United States)

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  9. Pricing and lot sizing optimization in a two-echelon supply chain with a constrained Logit demand function

    Directory of Open Access Journals (Sweden)

    Yeison Díaz-Mateus

    2017-07-01

Decision making in supply chains is influenced by demand variations, which affect sales, purchase orders and inventory levels. This paper presents a non-linear optimization model for a two-echelon supply chain with a single product. The model also includes the consumers' maximum willingness to pay, taking socioeconomic differences into account. To this end, the constrained multinomial logit model for discrete choices is used to estimate demand levels. A metaheuristic approach based on particle swarm optimization is then proposed to determine the optimal product sales price and the inventory coordination variables. To validate the proposed model, a supply chain for a technological product was chosen and three scenarios are analyzed: discounts, demand segmentation and demand overestimation. Results are analyzed in terms of profits, lot sizing, inventory turnover and market share. It can be concluded that the maximum willingness to pay must be taken into consideration, otherwise fictitious profits may mislead decision making; although the market share would seem to improve, overall profits are not necessarily better.
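As a rough sketch of how a constrained multinomial logit can cap demand at a maximum willingness to pay, consider a binary buy/no-buy choice against an outside option. The function name and every parameter value below are assumptions for illustration, not the paper's calibration.

```python
import math

def mnl_demand(price, market_size=1000.0, wtp_max=120.0,
               alpha=4.0, beta=0.05):
    """Expected demand under a WTP-constrained binary logit choice."""
    if price >= wtp_max:          # constraint: nobody buys above max WTP
        return 0.0
    v_buy = alpha - beta * price  # deterministic utility of purchasing
    p_buy = math.exp(v_buy) / (math.exp(v_buy) + 1.0)  # outside option v = 0
    return market_size * p_buy

demand_low = mnl_demand(40.0)    # demand at a low price
demand_high = mnl_demand(100.0)  # demand shrinks as price rises
```

The pricing-and-lot-sizing search (e.g. by particle swarm) would then evaluate profit over candidate prices through a demand function of this shape.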

  10. UN MODELO LOGIT PARA LA FRAGILIDAD DEL SISTEMA FINANCIERO VENEZOLANO DENTRO DEL CONTEXTO DE LOS PROCESOS DE FUSIÓN E INTERVENCIÓN | A LOGIT APPROACH OF THE VENEZUELAN FINANCIAL SYSTEM WITHIN THE CONTEXT OF MERGER AND INTERVENTION

    Directory of Open Access Journals (Sweden)

    César Rubicundo

    2016-08-01

In Venezuela there have been more than 30 mergers since the approval of the Banking Act in 1999: from 103 institutions, the financial system closed 2013 with 35, a decrease of 66% resulting from 20 mergers, 30 transformations and 18 liquidations. An analysis is therefore proposed of the current economic situation of the financial system in the context of mergers and interventions, considering internal and external factors according to the constituted capital. The study was based on information from 37 private-capital institutions and 4 public institutions, between January 2009 and December 2013. The analysis showed that privately held banks have a 78.40% probability of not incurring situations of fragility, while state-capital banks have an 83.30% probability of surviving in the market. As for the estimated logit models, it was found that the liquidity ratio, ROE, the management index and inflation are components that push privately held banks toward a fragile situation, with the forecast giving a probability of fragility. For state-capital banks, this situation is explained by the equity index (62.50%), ROE and inflation, and a probability of stability is expected for these banks. The joint model forecast a probability of a stable financial system for the coming months.

  11. Computer modeling of forced mixing in waste storage tanks

    International Nuclear Information System (INIS)

    Eyler, L.L.; Michener, T.E.

    1992-01-01

In this paper, numerical simulation results of fluid dynamic and physical processes in radioactive waste storage tanks are presented. Investigations include simulation of jet mixing pump induced flows intended to mix and maintain particulate material uniformly distributed throughout the liquid volume. Physical effects of solids are included in the code. These are particle size, through a settling velocity, and mixture properties, through density and viscosity. Calculations have been accomplished for a centrally located, rotationally-oscillating, horizontally-directed jet mixing pump for two cases. One case is with low jet velocity and high settling velocity, resulting in nonuniform distribution; the other is with high jet velocity and low settling velocity, resulting in uniform conditions. Results are being used to aid in experiment design and to understand mixing in the waste tanks. These results are to be used in conjunction with scaled experiments to define limits of pump operation to maintain uniformity of the mixture in the storage tanks during waste retrieval operations

  12. Computer modeling of forced mixing in waste storage tanks

    International Nuclear Information System (INIS)

    Eyler, L.L.; Michener, T.E.

    1992-04-01

    Numerical simulation results of fluid dynamic and physical processes in radioactive waste storage tanks are presented. Investigations include simulation of jet mixing pump induced flows intended to mix and maintain particulate material uniformly distributed throughout the liquid volume. Physical effects of solids are included in the code. These are particle size through a settling velocity and mixture properties through density and viscosity. Calculations have been accomplished for a centrally located, rotationally-oscillating, horizontally-directed jet mixing pump for two cases. One case is with low jet velocity and high settling velocity. It results in nonuniform distribution. The other case is with high jet velocity and low settling velocity. It results in uniform conditions. Results are being used to aid in experiment design and to understand mixing in the waste tanks. These results are to be used in conjunction with scaled experiments to define limits of pump operation to maintain uniformity of the mixture in the storage tanks during waste retrieval operations

  13. Modeling of mixing processes: Fluids, particulates, and powders

    Energy Technology Data Exchange (ETDEWEB)

    Ottino, J.M.; Hansen, S. [Northwestern Univ., Evanston, IL (United States)

    1995-12-31

Work under this grant involves two main areas: (1) mixing of viscous liquids, comprising aggregation, fragmentation and dispersion, and (2) mixing of powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant for long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters depends upon the mixing.

  14. Mixed models approaches for joint modeling of different types of responses.

    Science.gov (United States)

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association, with further correction for overdispersion, can improve the model's fit considerably, and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
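The proportional odds model this record builds on rests on the cumulative-logit link: P(Y ≤ j) = logistic(θ_j − η). A fixed-effects-only sketch (thresholds and linear predictor invented, no random effects or overdispersion) shows how the ordinal category probabilities arise:

```python
import math

def pom_probs(eta, thresholds):
    """Category probabilities in a proportional-odds model:
    P(Y <= j) = logistic(theta_j - eta), differenced to get P(Y = j)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Three increasing thresholds give four ordinal categories
probs = pom_probs(eta=0.5, thresholds=[-1.0, 0.0, 1.5])
```

In the mixed-model version, η would additionally contain a subject-specific random effect, shifting all cumulative logits for that subject by the same amount.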

  15. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    Science.gov (United States)

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions that help protect people from the infection. This study employs zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, and this model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
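The zero-inflated Poisson distribution used in this record mixes a point mass at zero with an ordinary Poisson count; a minimal probability-mass sketch (parameter values invented) shows where the excess zeros enter:

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) for a zero-inflated Poisson: with probability pi the
    observation is a structural zero, otherwise it is Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# The zero probability sits well above the plain Poisson value exp(-lam)
p0 = zip_pmf(0, lam=3.0, pi=0.4)
```

The mixed-model version of the abstract adds random effects to the Poisson mean (and possibly to the zero-inflation probability) to handle the longitudinal dependence.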

  16. Extending existing structural identifiability analysis methods to mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice.

  17. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slope (p < 0.001) across children, and serial correlation was adequately modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
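A cubic regression spline of the kind described can be written in the truncated power basis, where each knot contributes one extra column to the design matrix. The knot placement and child age below are illustrative choices, not those of the Peruvian cohort model.

```python
def cubic_spline_basis(x, knots):
    """Truncated power basis for a cubic regression spline:
    [1, x, x^2, x^3, (x - k1)_+^3, ..., (x - kK)_+^3]."""
    row = [1.0, x, x ** 2, x ** 3]
    row += [max(0.0, x - k) ** 3 for k in knots]
    return row

# Design row for a (hypothetical) child aged 2.5 years, knots at 1, 2, 3 years
row = cubic_spline_basis(2.5, knots=[1.0, 2.0, 3.0])
```

A linear mixed-effect model would pair these fixed-effect columns with subject-specific random intercepts and slopes, exactly as the stepwise build-up in the abstract describes.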

  18. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    Science.gov (United States)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  19. Best practices for use of stable isotope mixing models in food-web studies

    Science.gov (United States)

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  20. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    Science.gov (United States)

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  1. Three novel approaches to structural identifiability analysis in mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models previously not

  2. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    Science.gov (United States)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.
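A back-of-the-envelope sketch of why a larger ARedi damps variability: under lateral diffusion, a tracer anomaly of wavelength L decays at rate ARedi·(2π/L)², so the surviving anomaly fraction after time t falls exponentially with the mixing coefficient. The wavelength and time scale below are arbitrary illustrative choices, not values from the GFDL ESM2Mc experiments.

```python
import math

def anomaly_fraction(kappa, L=1.0e6, t=3.15e7):
    """Fraction of a sinusoidal tracer anomaly (wavelength L, meters)
    surviving after time t (seconds) under lateral diffusivity kappa."""
    k = 2.0 * math.pi / L          # wavenumber of the anomaly
    return math.exp(-kappa * k * k * t)

low = anomaly_fraction(400.0)      # weak-mixing end of the CMIP5 range
high = anomaly_fraction(2400.0)    # strong-mixing end
```

For a ~1000 km anomaly over ~1 year, the strong-mixing case retains only a small fraction of what the weak-mixing case keeps, qualitatively mirroring the reduced variability amplitude reported above.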

  3. Experiments and CFD Modelling of Turbulent Mass Transfer in a Mixing Channel

    DEFF Research Database (Denmark)

    Hjertager Osenbroch, Lene Kristin; Hjertager, Bjørn H.; Solberg, Tron

    2006-01-01

Experiments are carried out for passive mixing in order to obtain local mean and turbulent velocities and concentrations. The mixing takes place in a square channel with two inlets separated by a block. A combined PIV/PLIF technique is used to obtain instantaneous velocity and concentration fields. Three different flow cases are studied. The 2D numerical predictions of the mixing channel show that none of the k-ε turbulence models tested is suitable for the flow cases studied here. The turbulent Schmidt number is reduced to obtain a better agreement between measured and predicted mean and fluctuating concentrations. The multi-peak presumed PDF mixing model is tested.

  4. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlmeODE package.
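The kind of model nlmeODE handles can be illustrated, for a single subject, by a one-compartment oral-absorption system. The parameter values below are invented (theophylline-like), and for clarity the ODEs are integrated here with a simple explicit Euler step rather than odesolve; a mixed-effects analysis would additionally let ka, ke, and V vary across subjects.

```python
import math

# dA_gut/dt = -ka*A_gut ;  dA_c/dt = ka*A_gut - ke*A_c ;  C = A_c / V
ka, ke, V, dose = 1.5, 0.08, 32.0, 320.0   # assumed parameter values

def conc_euler(t_end, dt=1e-3):
    """Plasma concentration at t_end via explicit Euler integration."""
    a_gut, a_c = dose, 0.0
    for _ in range(round(t_end / dt)):
        a_gut, a_c = (a_gut - ka * a_gut * dt,
                      a_c + (ka * a_gut - ke * a_c) * dt)
    return a_c / V

def conc_analytic(t):
    """Closed-form solution of the same absorption-elimination cascade."""
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

c_num, c_exact = conc_euler(4.0), conc_analytic(4.0)  # 4 time units post-dose
```

The numerical and closed-form solutions should agree closely, which is the consistency check one would also apply before handing the ODE system to a mixed-effects fitter.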

  5. CFD modeling of thermal mixing in a T-junction geometry using LES model

    Energy Technology Data Exchange (ETDEWEB)

    Ayhan, Hueseyin, E-mail: huseyinayhan@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey); Soekmen, Cemal Niyazi, E-mail: cemalniyazi.sokmen@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey)

    2012-12-15

Highlights: ► CFD simulations of temperature and velocity fluctuations for thermal mixing cases in a T-junction are performed. ► It is found that the frequency range of 2-5 Hz contains most of the energy and therefore may cause thermal fatigue. ► This study shows that RANS-based calculations fail to predict realistic mixing between the fluids. ► The LES model can predict instantaneous turbulence behavior. - Abstract: Turbulent mixing of fluids at different temperatures can lead to temperature fluctuations in the pipe material. These fluctuations, or thermal striping, induce cyclical thermal stresses, and the resulting thermal fatigue may cause unexpected failure of the pipe material. Therefore, an accurate characterization of temperature fluctuations is important in order to estimate the lifetime of the pipe material. Thermal fatigue of the coolant circuits of nuclear power plants is one of the major issues in nuclear safety. To investigate thermal fatigue damage, the OECD/NEA recently organized a blind benchmark study, including some results of the present work, for the prediction of temperature and velocity fluctuations in a thermal mixing experiment in a T-junction. This paper aims to estimate the frequency of velocity and temperature fluctuations in the mixing region using Computational Fluid Dynamics (CFD). Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) models were used to simulate turbulence. CFD results were compared with the available experimental results. Predicted LES results, even on a coarse mesh, were found to be in good agreement with the experimental results in terms of amplitude and frequency of temperature and velocity fluctuations. Analysis of the temperature fluctuations and the power spectral densities (PSD) at the locations having the strongest temperature fluctuations in the tee junction shows that the frequency range of 2-5 Hz

  6. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model to (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
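One building block of modeling behavioral transition patterns is a role-transition probability matrix estimated from observed per-node role sequences. The following is a hypothetical counting-based sketch of that single ingredient only, not the DBMM itself:

```python
from collections import Counter

def transition_matrix(sequences, n_roles):
    """Maximum-likelihood role-transition probabilities from role sequences.
    sequences: list of per-node role-index lists, e.g. [[0, 1, 1], [2, 0, 0]]."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):     # consecutive (from, to) role pairs
            counts[(a, b)] += 1
    matrix = []
    for i in range(n_roles):
        row_total = sum(counts[(i, j)] for j in range(n_roles))
        matrix.append([counts[(i, j)] / row_total if row_total else 0.0
                       for j in range(n_roles)])
    return matrix

# Two toy node histories over 3 roles (entirely invented).
seqs = [[0, 0, 1, 1, 2], [0, 1, 2, 2, 2]]
P = transition_matrix(seqs, 3)
```

Each row of `P` is a probability distribution over next roles; unusual transitions could then be flagged as low-probability cells.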

  7. Analyzing the Mixing Dynamics of an Industrial Batch Bin Blender via Discrete Element Modeling Method

    Directory of Open Access Journals (Sweden)

    Maitraye Sen

    2017-04-01

    Full Text Available A discrete element model (DEM) has been developed for an industrial batch bin blender in which three different types of materials are mixed. The mixing dynamics have been evaluated from a model-based study with respect to the blend critical quality attributes (CQAs), namely the relative standard deviation (RSD) and the segregation intensity. In the actual industrial setup, a sensor mounted on the blender lid is used to determine the blend composition in this region. A model-based analysis has been used to understand the mixing efficiency in the other zones inside the blender and to determine whether the data obtained near the blender-lid region provide a good representation of the overall blend quality. Sub-optimal mixing zones have been identified and other potential sampling locations have been investigated in order to obtain a good approximation of the blend variability. The model has been used to study how the mixing efficiency can be improved by varying the key processing parameters, i.e., blender RPM/speed, fill level/volume and loading order. Both segregation intensity and RSD decrease at a lower fill level and higher blender RPM, and are a function of the mixing time. This work demonstrates the use of a model-based approach to improve process knowledge regarding a pharmaceutical mixing process. The model can be used to acquire qualitative information about the influence of different critical process parameters and equipment geometry on the mixing dynamics.
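The RSD quality attribute discussed above is simply the standard deviation of sampled blend concentrations divided by their mean (lower means better mixed). A minimal sketch with invented sample values, not output of the DEM model:

```python
import math

def rsd(samples):
    """Relative standard deviation of sampled concentrations."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)   # sample variance
    return math.sqrt(var) / mean

# Hypothetical concentration samples of one component (target 0.5).
well_mixed = [0.49, 0.51, 0.50, 0.50, 0.48, 0.52]
segregated = [0.10, 0.90, 0.15, 0.85, 0.20, 0.80]
```

`rsd(well_mixed)` is a few percent while `rsd(segregated)` is large, mirroring how the CQA separates good from sub-optimal mixing zones.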

  8. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

    Full Text Available A novel mixed higher order regularizer involving the first- and second-degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to the equivalent formulation of the proposed regularizer, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme by experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second-degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
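PSNR, the quality metric used in the comparisons, has a standard definition that can be sketched in a few lines; the toy pixel values below are invented and unrelated to the paper's experiments:

```python
import math

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
noisy = [11, 19, 31, 39]   # every pixel off by 1, so MSE = 1
quality_db = psnr(ref, noisy)
```

Higher PSNR means the restored image is closer to the reference, which is the sense in which the proposed method "achieves better results."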

  9. Advective Mixing in a Nondivergent Barotropic Hurricane Model

    Science.gov (United States)

    2010-01-20

    devoted to the mixing of fluid from different regions of a hurricane, which is considered as a fundamental mechanism that is intimately related to ... range is governed by the Cauchy-Green deformation tensor, Δ(x0, t0) = (d_{x0} φ_{t0}^{t0+T}(x0))* (d_{x0} φ_{t0}^{t0+T}(x0)), and becomes maximal when ξ0 is

  10. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus, improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the information necessary to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterizations. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Because of the different dynamics in the two cloud types, the temperature dependencies of their liquid mass partitions differ significantly, with much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations

  11. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  12. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random-intercept linear mixed models with the mean measure as outcome, and (c) random-intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
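A minimal illustration of the variance-components idea behind such random-intercept models: for balanced clustered data, a method-of-moments (one-way ANOVA) estimator recovers the between- and within-cluster variances and the intra-class correlation. All numbers below are synthetic; a real GRAMMAR-style analysis would use REML and a kinship matrix rather than this sketch:

```python
import random

random.seed(11)

# Balanced one-way random-effects data: 500 clusters (e.g. families),
# 8 correlated measurements each; true between and within SDs are both 1.
groups, size = 500, 8
sigma_b, sigma_w = 1.0, 1.0
data = []
for _ in range(groups):
    u = random.gauss(0, sigma_b)                 # shared cluster effect
    data.append([10.0 + u + random.gauss(0, sigma_w) for _ in range(size)])

# Method-of-moments (ANOVA) variance components: a simple stand-in for
# the REML machinery a full linear mixed model would use.
group_means = [sum(g) / size for g in data]
grand = sum(group_means) / groups
msb = size * sum((m - grand) ** 2 for m in group_means) / (groups - 1)
msw = sum(sum((x - gm) ** 2 for x in g)
          for g, gm in zip(data, group_means)) / (groups * (size - 1))
var_between = max(0.0, (msb - msw) / size)
icc = var_between / (var_between + msw)          # truth here is 0.5
```

The estimated intra-class correlation is what the random intercept captures; ignoring it is what invalidates naive per-observation analyses.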

  13. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
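The core statistical point, that treating multiple neurons per animal as independent understates uncertainty, can be illustrated with a small simulation comparing a naive standard error against a cluster-aware one. The numbers are synthetic, not Sholl data:

```python
import math
import random

random.seed(42)

# Synthetic Sholl-like data: 30 animals, 8 neurons each, strong animal effect.
animals, per_animal = 30, 8
data = []
for _ in range(animals):
    animal_effect = random.gauss(0, 2.0)          # between-animal SD = 2
    data.append([10.0 + animal_effect + random.gauss(0, 1.0)  # within SD = 1
                 for _ in range(per_animal)])

flat = [x for cluster in data for x in cluster]
n = len(flat)
grand_mean = sum(flat) / n

# Naive SE: pretends all neurons are independent observations.
naive_var = sum((x - grand_mean) ** 2 for x in flat) / (n - 1)
naive_se = math.sqrt(naive_var / n)

# Cluster-aware SE: one summary value (the mean) per animal.
means = [sum(c) / len(c) for c in data]
mean_of_means = sum(means) / animals
cluster_se = math.sqrt(
    sum((m - mean_of_means) ** 2 for m in means) / (animals - 1) / animals)
```

The cluster-aware standard error comes out substantially larger, which is why the naive analysis produces downward-biased p-values, as the abstract reports.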

  14. The consumer’s choice among television displays: A multinomial logit approach

    Directory of Open Access Journals (Sweden)

    Carlos Giovanni González Espitia

    2013-07-01

    Full Text Available The consumer’s choice over a bundle of products depends on observable and unobservable characteristics of goods and consumers. This choice is made in order to maximize utility subject to a budget constraint. At the same time, firms make product differentiation decisions to maximize profit. Quality is a form of differentiation. An example of this occurs in the TV market, where several displays are developed. Our objective is to determine the probability for a consumer of choosing a type of display from among five kinds: standard tube, LCD, plasma, projection and LED. Using a multinomial logit approach, we find that electronic appliances like DVDs and audio systems, as well as socioeconomic status, increase the probability of choosing a high-tech television display. Our empirical approximation contributes to further understanding rational consumer behavior through the theory of utility maximization and highlights the importance of studying market structure and analyzing changes in welfare and efficiency.
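Under a multinomial logit model, the probability of choosing each display type is a softmax over the alternatives' systematic utilities. A sketch of that mapping; the utility values below are hypothetical and not estimated from the paper's data:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial-logit choice probabilities:
    P_j = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)                     # subtract max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for (tube, LCD, plasma, projection, LED)
# for one household profile; in practice V_j = x'beta_j from estimation.
V = [0.2, 1.1, 0.8, -0.5, 1.4]
probs = mnl_probabilities(V)
```

Covariates such as owning a DVD player enter through the utilities, so a positive coefficient raises the choice probability of the corresponding display, which is the sense of the paper's finding.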

  15. Using continuation-ratio logits to analyze the variation of the age composition of fish catches

    DEFF Research Database (Denmark)

    Kvist, Trine; Gislason, Henrik; Thyregod, Poul

    2000-01-01

    Major sources of information for the estimation of the size of fish stocks and the rate of their exploitation are samples from which the age composition of catches may be determined. However, the age composition of the catches often varies as a result of several factors. Stratification of the sampling is desirable, because it leads to better estimates of the age composition, and the corresponding variances and covariances. The analysis is impeded by the fact that the response is ordered categorical. This paper introduces an easily applicable method to analyze such data. The method combines...... be applied separately to each level of the logits. The method is illustrated by the analysis of age-composition data collected from the Danish sandeel fishery in the North Sea in 1993. The significance of possible sources of variation is evaluated, and formulae for estimating the proportions of each age
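Continuation-ratio logits for ordered categories such as ages are the log odds of "age k" against "older than k", so an ordered response decomposes into a sequence of conditional binary comparisons. A small sketch computing them from hypothetical catch-at-age counts (not the 1993 sandeel data):

```python
import math

def continuation_ratio_logits(counts):
    """For ordered category counts n_1..n_K, return the K-1 logits of
    P(Y = k | Y >= k), i.e. log(n_k / (n_{k+1} + ... + n_K))."""
    logits = []
    remaining = sum(counts)
    for n_k in counts[:-1]:
        rest = remaining - n_k         # fish older than the current age
        logits.append(math.log(n_k / rest))
        remaining = rest
    return logits

# Hypothetical catch-at-age counts for ages 0, 1, 2+.
age_counts = [120, 60, 20]
logits = continuation_ratio_logits(age_counts)
```

Because each logit conditions on "still at risk", standard binomial models can be fitted separately at each level, which is the property the paper exploits.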

  16. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
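For context, the classical normal-approximation sample size for simple logistic regression, a Hsieh-style formula and not necessarily the authors' proposed method, can be coded with the standard library alone. The parameter values in the example are illustrative:

```python
import math
from statistics import NormalDist

def logistic_n(p1, odds_ratio, alpha=0.05, power=0.8):
    """Approximate n for simple logistic regression with one standard-normal
    covariate (Hsieh-style): p1 is the event rate at the covariate mean,
    odds_ratio the effect of a one-SD increase in the covariate."""
    z = NormalDist().inv_cdf
    beta = math.log(odds_ratio)                    # log odds ratio per SD
    n = (z(1 - alpha / 2) + z(power)) ** 2 / (p1 * (1 - p1) * beta ** 2)
    return math.ceil(n)

# Detecting OR = 1.5 per SD at a 50% baseline rate, 80% power, two-sided 5%.
n_needed = logistic_n(p1=0.5, odds_ratio=1.5)
```

Larger effects need fewer subjects; for multiple regression the classical approach additionally inflates n by 1/(1-R²), the quantity the abstract notes is often unavailable.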

  17. A Nonlinear Mixed Effects Model for the Prediction of Natural Gas Consumption by Individual Customers

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek

    2008-01-01

    Roč. 24, č. 4 (2008), s. 659-678 ISSN 0169-2070 R&D Projects: GA AV ČR 1ET400300513 Institutional research plan: CEZ:AV0Z10300504 Keywords : individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling Subject RIV: JE - Non-nuclear Energetics, Energy Consumption ; Use Impact factor: 1.685, year: 2008

  18. Mathematical, physical and numerical principles essential for models of turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David Howland [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Yu, Yan [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV

    2009-01-01

    We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well-studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration-driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.

  19. The Simulation of Financial Markets by Agent-Based Mix-Game Models

    OpenAIRE

    Chengling Gou

    2006-01-01

    This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...

  20. The Simulation of Financial Markets by an Agent-Based Mix-Game Model

    OpenAIRE

    Chengling Gou

    2006-01-01

    This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...

  1. Bayes factor between Student t and Gaussian mixed models within an animal breeding context

    Directory of Open Access Journals (Sweden)

    García-Cortés Luis

    2008-07-01

    Full Text Available The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared and we do not know whether a Student t mixed model is required or if a standard, and less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that only differ in a bounded variable in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (the Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends along the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for data sets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, with there being a reduced incidence of outlier individuals in this population.

  2. Swell impact on wind stress and atmospheric mixing in a regional coupled atmosphere-wave model

    DEFF Research Database (Denmark)

    Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik

    2016-01-01

    Over the ocean, atmospheric turbulence can be significantly affected by swell waves. Changes in the atmospheric turbulence affect the wind stress and atmospheric mixing over swell waves. In this study, the influence of swell on atmospheric mixing and wind stress is introduced into an atmosphere-wave-coupled regional climate model, separately and combined. The swell influence on atmospheric mixing is introduced into the atmospheric mixing length formula by adding a swell-induced contribution to the mixing. The swell influence on the wind stress under wind-following swell, moderate-range wind, and near-neutral and unstable stratification conditions is introduced by changing the roughness length. Five-year simulation results indicate that adding the swell influence on atmospheric mixing has limited influence, only slightly increasing the near-surface wind speed; in contrast, adding the swell influence on wind stress...

  3. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    Science.gov (United States)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  4. Alpha-modeling strategy for LES of turbulent mixing

    NARCIS (Netherlands)

    Geurts, Bernard J.; Holm, Darryl D.; Drikakis, D.; Geurts, B.J.

    2002-01-01

    The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which

  5. Eliciting mixed emotions: A meta-analysis comparing models, types and measures.

    Directory of Open Access Journals (Sweden)

    Raul eBerrios

    2015-04-01

    Full Text Available The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model – dimensional or discrete – as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = .77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
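The random-effects pooling used in such a meta-analysis is commonly done with the DerSimonian-Laird estimator, which adds an estimated between-study variance to each study's weight. A sketch with invented study-level effects; the paper's actual dIG+ data are not reproduced here:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect via the DerSimonian-Laird estimator.
    effects: per-study effect sizes; variances: their within-study variances."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se

# Hypothetical standardized mean differences and within-study variances.
d = [0.9, 0.6, 1.1, 0.4, 0.8]
v = [0.05, 0.08, 0.04, 0.10, 0.06]
pooled, tau2, se = dersimonian_laird(d, v)
```

A nonzero tau² spreads the weights more evenly across studies, which is why random-effects pooling is the appropriate choice when elicitation paradigms differ, as they do across the 63 studies.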

  6. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  7. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    Science.gov (United States)

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
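A common building block of mixed effects SDE models is a subject-specific stochastic process. As a purely hypothetical sketch, an Ornstein-Uhlenbeck process simulated by Euler-Maruyama, with each "patient" drawing an individual long-run level (all parameter values invented, not from the CPR study):

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, steps, rng):
    """Euler-Maruyama path of an Ornstein-Uhlenbeck SDE
    dX = theta * (mu - X) dt + sigma dW, a common SDE building block."""
    x, path = x0, [x0]
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path

rng = random.Random(3)
# "Mixed effects": each subject draws its own long-run level mu_i.
population_mu, between_sd = 5.0, 1.0
paths = []
for _ in range(5):
    mu_i = rng.gauss(population_mu, between_sd)
    paths.append(simulate_ou(theta=2.0, mu=mu_i, sigma=0.5,
                             x0=0.0, dt=0.01, steps=500, rng=rng))
```

Fitting such a model means estimating both the population parameters and the between-subject variance, which is the combination the abstract describes as solving the challenges of the CPR data.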

  8. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  9. Review and comparison of bi-fluid interpenetration mixing models

    International Nuclear Information System (INIS)

    Enaux, C.

    2006-01-01

    Today there are many two-speed bi-fluid models: Baer-Nunziato models, Godunov-Romensky models, coupled Euler equations, and so on. In this report, the models most used in physics and mathematics are compared, based on the literature. From the physical point of view, for each model, one reviews: -) the type of mixture considered and the modeling assumptions, -) the technique of construction, -) properties such as compliance with the principles of thermodynamics, compliance with the principle of Galilean invariance, and conservation at equilibrium. From the mathematical point of view, for each model, one looks at: -) the possibility of writing the equations in conservative form, -) hyperbolicity, -) the existence of a mathematical entropy. Finally, a unified review of the models is proposed. It is shown that under certain closure assumptions, or for certain flow types, some of the models become equivalent. (author)

  10. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
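The linear dependency driving the identification problem is exact: cohort = period - age, so the three linear regressors satisfy age - period + cohort = 0 for every observation. A short demonstration on a hypothetical person-level grid:

```python
# Hypothetical age-period grid; cohort (birth year) is determined exactly.
ages = [20, 30, 40, 20, 30, 40]
periods = [2000, 2000, 2000, 2010, 2010, 2010]
cohorts = [p - a for p, a in zip(periods, ages)]

# The exact linear dependency 1*age - 1*period + 1*cohort = 0 holds row by
# row, so the fixed-effects design matrix with all three terms is singular.
dependency = [a - p + c for a, p, c in zip(ages, periods, cohorts)]
```

Every entry of `dependency` is zero, which is why a fixed-effects fit needs a just-identifying constraint while, as the abstract notes, treating some effects as random sidesteps the singularity.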

  11. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    International Nuclear Information System (INIS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For the evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For the evaluation of the model inversion, the RMSE between the retrieved and observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of the retrieved component temperatures to fractional cover, the linear mixing model retrieves both soil and vegetation temperatures more accurately under intermediate fractional cover conditions.

  12. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally create great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
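The 1-1 relationship between R² and an F statistic mirrors the classical fixed-effects identity R² = (ν1·F) / (ν1·F + ν2); the mixed-model version in the paper substitutes appropriate (approximate) denominator degrees of freedom. A sketch of the mapping and its inverse, with illustrative numbers:

```python
def r2_from_f(f_stat, df_num, df_den):
    """Classical 1-1 mapping from an overall F statistic to R^2:
    R^2 = (df_num * F) / (df_num * F + df_den)."""
    return (df_num * f_stat) / (df_num * f_stat + df_den)

def f_from_r2(r2, df_num, df_den):
    """Inverse mapping, recovering F from R^2."""
    return (r2 / df_num) / ((1.0 - r2) / df_den)

# Illustrative values: F = 5 on 3 and 96 degrees of freedom.
f = 5.0
r2 = r2_from_f(f, df_num=3, df_den=96)
```

With many denominator degrees of freedom, even a highly significant F can map to a small R², which is exactly the ethnicity/blood-pressure contrast the abstract highlights.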

  13. The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bouchard, Christopher Michael [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2011-01-01

    Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.

  14. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant C_φ = 12 value. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice for C_φ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale which is based on a linear blending between a model for flamelet combustion and a model for distributed combustion is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)

  15. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.

  16. Consequences of observed Bd-anti Bd mixing in standard and nonstandard models

    International Nuclear Information System (INIS)

    Datta, A.; Paschos, E.A.; Tuerke, U.

    1987-01-01

    Implications of the B_d-anti B_d mixing reported by the ARGUS group are investigated. We show that in order for the standard model to accommodate the result, the B → anti B hadronic matrix element must satisfy lower bounds as a function of the top quark mass. In this case B_s-anti B_s mixing is necessarily large (r_s ≳ 0.74) irrespective of m_t. This conclusion remains valid in several popular extensions of the standard model with three generations. In contrast to these models, four-generation models can simultaneously accommodate the observed B_d-anti B_d mixing and a relatively small B_s-anti B_s mixing. (orig.)

  17. On the use of the Prandtl mixing length model in the cutting torch modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mancinelli, B [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina); Minotti, F O; Kelly, H, E-mail: bmancinelli@arnet.com.ar [Instituto de Fisica del Plasma (CONICET), Departamento de Fisica, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2011-05-01

    The Prandtl mixing length model has been used to take into account turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties it was also found that the value c = 0.08 allows both the experimental velocity and temperature data to be reproduced.

  18. On the use of the Prandtl mixing length model in the cutting torch modeling

    International Nuclear Information System (INIS)

    Mancinelli, B; Minotti, F O; Kelly, H

    2011-01-01

    The Prandtl mixing length model has been used to take into account turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties it was also found that the value c = 0.08 allows both the experimental velocity and temperature data to be reproduced.

  19. Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA

    Directory of Open Access Journals (Sweden)

    Lara Dutra Silva

    2017-12-01

    Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world's ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species in face of their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (invasive and native trees, respectively), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presence data set. The addition of the GRF increased model adjustment (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.

  20. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    Science.gov (United States)

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.

  1. The Mixed Quark-Gluon Condensate from the Global Color Symmetry Model

    Institute of Scientific and Technical Information of China (English)

    ZONG Hong-Shi; PING Jia-Lun; LU Xiao-Fu; WANG Fan; ZHAO En-Guang

    2002-01-01

    The mixed quark-gluon condensate from the global color symmetry model is derived. It is shown that the mixed quark-gluon condensate depends explicitly on the gluon propagator. This interesting feature may be regarded as an additional constraint on the model of the gluon propagator. The values of the mixed quark-gluon condensate from some ansatz for the gluon propagator are compared with those determined from QCD sum rules.

  2. Development of two phase turbulent mixing model for subchannel analysis relevant to BWR

    International Nuclear Information System (INIS)

    Sharma, M.P.; Nayak, A.K.; Kannan, Umasankari

    2014-01-01

    A two phase flow model is presented which predicts both liquid and gas phase turbulent mixing rates between adjacent subchannels of reactor rod bundles. The model presented here is for the slug-churn flow regime, which is dominant compared to the other regimes such as bubbly flow and annular flow, since the turbulent mixing rate is highest in slug-churn flow. In this paper, we define new dimensionless parameters, i.e. the liquid mixing number and the gas mixing number, for two phase turbulent mixing. The liquid mixing number is a function of the mixture Reynolds number, whereas the gas mixing number is a function of both the mixture Reynolds number and the volumetric fraction of gas. The effects of pressure and subchannel geometry are also included in this model. The present model has been tested against low pressure and temperature air-water data and high pressure and temperature steam-water data, and shows good agreement with the available experimental data. (author)

  3. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical / molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  4. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    Science.gov (United States)

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  5. Data on copula modeling of mixed discrete and continuous neural time series.

    Science.gov (United States)

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
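    As a hedged illustration of the general idea (the authors provide Matlab code; this is not it), a Gaussian copula can couple a discrete spike-count margin to a continuous LFP-like margin through a shared latent correlation. All parameter values and function names below are invented for the sketch.

```python
import math
import numpy as np

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_ppf(u, lam):
    """Smallest k with P(X <= k) >= u for X ~ Poisson(lam)."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def sample_mixed_pair(rho, lam, scale, n, rng):
    """Gaussian-copula draw of (spike count, LFP-like amplitude):
    a Poisson(lam) margin coupled to an exponential(scale) margin
    with latent correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = np.vectorize(norm_cdf)(z)          # uniform margins
    spikes = np.array([poisson_ppf(ui, lam) for ui in u[:, 0]])
    lfp = -scale * np.log1p(-u[:, 1])      # exponential quantile
    return spikes, lfp

rng = np.random.default_rng(0)
spikes, lfp = sample_mixed_pair(rho=0.8, lam=5.0, scale=1.0, n=2000, rng=rng)
```

    The copula separates the dependence (latent rho) from the margins, which is exactly what makes joint modeling of mixed discrete and continuous responses tractable.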

  6. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    Science.gov (United States)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as

  7. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, have attained a mature stage in their development, and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  8. Mathematical modeling of the mixing zone for getting bimetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Stanislav L. [Institute of Applied Mechanics, Ural Branch, Izhevsk (Russian Federation)

    2011-07-01

    A mathematical model is presented of the formation of atomic bonds in metals and alloys, based on the electrostatic interaction between the outer electron shells of atoms of chemical elements. Key words: mathematical model, interatomic bonds, electron shell of atoms, potential, electron density, bimetallic compound.

  9. A Thermodynamic Mixed-Solid Asphaltene Precipitation Model

    DEFF Research Database (Denmark)

    Lindeloff, Niels; Heidemann, R.A.; Andersen, Simon Ivar

    1998-01-01

    A simple model for the prediction of asphaltene precipitation is proposed. The model is based on an equation of state and uses standard thermodynamics, thus assuming that the precipitation phenomenon is a reversible process. The solid phase is treated as an ideal multicomponent mixture. An activity...

  10. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within the MIDAS regression framework put forward by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of application of MIDAS regression.
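    The package itself is in R; purely as an illustration of the MIDAS building blocks it describes (exponential Almon lag weights and aggregation of a high-frequency regressor), here is a Python sketch with invented function names:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights, a common MIDAS
    weighting scheme: w_j proportional to exp(theta1*j + theta2*j^2)."""
    j = np.arange(1, n_lags + 1, dtype=float)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def midas_aggregate(x_high, n_lags, theta1, theta2, m):
    """Collapse a high-frequency series (m observations per
    low-frequency period) into one regressor per low-frequency
    period: a weighted sum of the last n_lags high-frequency values."""
    w = exp_almon_weights(theta1, theta2, n_lags)
    out = []
    for t in range(len(x_high) // m):
        end = (t + 1) * m          # index just past period t's last obs
        start = end - n_lags
        if start < 0:
            out.append(np.nan)     # not enough high-frequency history
        else:
            lags = x_high[start:end][::-1]  # most recent value first
            out.append(float(w @ lags))
    return np.array(out)

# Example: 12 monthly observations aggregated to 4 quarters
x = np.arange(1.0, 13.0)
agg = midas_aggregate(x, n_lags=3, theta1=0.0, theta2=-0.1, m=3)
```

    The aggregated series can then enter an ordinary regression on the low-frequency response; the parsimony of MIDAS comes from estimating only the two theta parameters rather than one coefficient per lag.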

  11. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
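    For readers unfamiliar with the hidden Markov backbone of this approach, a generic scaled forward algorithm for a Gaussian-emission HMM is sketched below. This is textbook machinery, not the paper's Bayesian MCMC method with correlated random effects; all names and parameter values are illustrative.

```python
import math

def hmm_loglik(obs, pi, A, means, sd):
    """Log-likelihood of a univariate Gaussian-emission HMM,
    computed with the scaled forward algorithm.

    pi: initial state probabilities; A: transition matrix (rows sum to 1)
    means: per-state emission means; sd: shared emission std deviation
    """
    def emit(x, k):
        z = (x - means[k]) / sd
        return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

    n_states = len(pi)
    alpha = [pi[k] * emit(obs[0], k) for k in range(n_states)]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]          # rescale to avoid underflow
    for x in obs[1:]:
        alpha = [emit(x, k) * sum(alpha[j] * A[j][k] for j in range(n_states))
                 for k in range(n_states)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

ll = hmm_loglik(
    obs=[0.1, 1.9, 2.1, -0.2],
    pi=[0.5, 0.5],
    A=[[0.9, 0.1], [0.1, 0.9]],
    means=[0.0, 2.0],
    sd=0.5,
)
```

    The mixed effects extension in the abstract would add subject-specific random effects to the emission (and possibly transition) parameters, which is what couples the multiple longitudinal responses.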

  12. Thermal hydraulic model validation for HOR mixed core fuel management

    International Nuclear Information System (INIS)

    Gibcus, H.P.M.; Vries, J.W. de; Leege, P.F.A. de

    1997-01-01

    A thermal-hydraulic core management model has been developed for the Hoger Onderwijsreactor (HOR), a 2 MW pool-type university research reactor. The model was adopted for safety analysis purposes in the framework of HEU/LEU core conversion studies. It is applied in the thermal-hydraulic computer code SHORT (Steady-state HOR Thermal-hydraulics) which is presently in use in designing core configurations and for in-core fuel management. An elaborate measurement program was performed for establishing the core hydraulic characteristics for a variety of conditions. The hydraulic data were obtained with a dummy fuel element with special equipment allowing a.o. direct measurement of the true core flow rate. Using these data the thermal-hydraulic model was validated experimentally. The model, experimental tests, and model validation are discussed. (author)

  13. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  14. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    Science.gov (United States)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  15. A comparison of generalized multinomial logit, random parameters logit, wtp-space and latent class models to studying consumers' preferences for animal welfare

    OpenAIRE

    Kallas, Zein; Borrisser-Pairó, Francesc; Martínez, Beatriz; Vieira, Ceferina; Panella-Riera, Nuria; Olivar, Maria Angels; Gil Roig, José María

    2016-01-01

    European societies are demanding that animals be raised as closely as possible to their natural conditions. Growing concern about animal welfare is resulting in continuous modifications of regulations and policies that have led to the banning of a number of intensive farming methods. The European authorities consider pig welfare a priority issue. They are considering a ban on surgical pig castration by 2018, which may seriously affect markets and consumers due to boar-tainted meat. This stud...

  16. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
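    A minimal Python sketch of the kind of two-sample Monte Carlo experiment described, using the mean/precision parameterization of the beta distribution (shape parameters a = μφ, b = (1-μ)φ). All settings below are illustrative choices, not the study's actual design values.

```python
import numpy as np

def simulate_two_sample_beta(mu0, mu1, phi, n, rng):
    """Two independent samples from beta distributions with means
    mu0, mu1 and common precision phi (constant dispersion case)."""
    y0 = rng.beta(mu0 * phi, (1.0 - mu0) * phi, size=n)
    y1 = rng.beta(mu1 * phi, (1.0 - mu1) * phi, size=n)
    return y0, y1

rng = np.random.default_rng(1)
true_diff = 0.15  # pre-specified average proportion difference
diffs = []
for _ in range(2000):
    y0, y1 = simulate_two_sample_beta(0.40, 0.55, phi=20.0, n=25, rng=rng)
    # difference in sample means = the two-sample "linear regression"
    # estimator of the average proportion difference
    diffs.append(y1.mean() - y0.mean())

bias = float(np.mean(diffs)) - true_diff
mc_error = float(np.std(diffs, ddof=1)) / np.sqrt(len(diffs))
```

    With constant dispersion across the two samples the difference-in-means estimator is unbiased, consistent with the abstract's finding that all models behave well in that scenario; the interesting comparisons arise when φ differs between groups.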

  17. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko; Steyn, Douw G.

    2011-01-01

    The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches.

  18. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management model using the grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
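As a rough illustration of the grey relational ranking step, the sketch below computes grey relational grades for a handful of suppliers; the criteria values and the distinguishing coefficient are hypothetical, not the data of Talluri and Baker:

```python
import numpy as np

def grey_relational_grades(X, zeta=0.5):
    # X: alternatives x criteria matrix; all criteria assumed benefit-type
    # (larger is better) and assumed to vary across alternatives.
    X = np.asarray(X, dtype=float)
    # Normalize each criterion to [0, 1]
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Deviation from the ideal (all-ones) reference series
    delta = np.abs(1.0 - norm)
    dmin, dmax = delta.min(), delta.max()
    # Grey relational coefficients with distinguishing coefficient zeta
    xi = (dmin + zeta * dmax) / (delta + zeta * dmax)
    # Grade = mean coefficient per alternative (equal criterion weights)
    return xi.mean(axis=1)

# Hypothetical suppliers scored on quality, delivery reliability, capacity
scores = np.array([[0.8, 0.7, 90.0],
                   [0.6, 0.9, 80.0],
                   [0.9, 0.6, 85.0]])
grades = grey_relational_grades(scores)
ranking = np.argsort(-grades)  # best supplier first
print(grades, ranking)
```

The ranked suppliers would then feed the inventory-optimization stage described in the abstract.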

  19. A Situative Space Model for Mobile Mixed-Reality Computing

    DEFF Research Database (Denmark)

    Pederson, Thomas; Janlert, Lars-Erik; Surie, Dipak

    2011-01-01

    This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch, at any given point in time.

  20. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulations are performed to reproduce the large-scale impulsion phenomenon for a tight-lattice bundle. • A mixing model for the large-scale impulsion phenomenon is proposed by fitting the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs because they increase the conversion ratio and improve heat transfer between fuel bundles and coolant. It has been noticed that a large-scale impulsion of cross-velocity exists in the gap region, which plays an important role in transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study this impulsion of velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This work uses the CFD method to simulate the experiment of Krauss and compares the experimental data with the simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Based on this verified method and model, several simulations are then performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model for the large-scale impulsion phenomenon is proposed and adopted in the current subchannel code. When the new mixing model is applied to fuel assembly analysis by subchannel calculation, it reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.

  1. Mixed Emotions: An Incentive Motivational Model of Sexual Deviance.

    Science.gov (United States)

    Smid, Wineke J; Wever, Edwin C

    2018-05-01

    Sexual offending behavior is a complex and multifaceted phenomenon. Most existing etiological models describe sexual offending behavior as a variant of offending behavior and mostly include factors referring to disinhibition and sexual deviance. In this article, we argue that there is additional value in describing sexual offending behavior as sexual behavior in terms of an incentive model of sexual motivation. The model describes sexual arousal as an emotion, triggered by a competent stimulus signaling potential reward, and comparable to other emotions coupled with strong bodily reactions. Consequently, we describe sexual offending behavior in terms of this new model with emphasis on the development of deviant sexual interests and preferences. In summary, the model states that because sexual arousal itself is an emotion, there is a bidirectional relationship between sexual self-regulation and emotional self-regulation. Not only can sex be used to regulate emotional states (i.e., sexual coping), emotions can also be used, consciously or automatically, to regulate sexual arousal (i.e., sexual deviance). Preliminary support for the model is drawn from studies in the field of sex offender research as well as sexology and motivation research.

  2. Ocean bio-geophysical modeling using mixed layer-isopycnal general circulation model coupled with photosynthesis process

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.

    -chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....

  3. Criticality in the configuration-mixed interacting boson model (1) $U(5)-\\hat{Q}(\\chi)\\cdot\\hat{Q}(\\chi)$ mixing

    CERN Document Server

    Hellemans, V; De Baerdemacker, S; Heyde, K

    2008-01-01

    The case of U(5)--$\\hat{Q}(\\chi)\\cdot\\hat{Q}(\\chi)$ mixing in the configuration-mixed Interacting Boson Model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively.

  4. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.

    2017-04-11

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.

  5. An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames

    Science.gov (United States)

    Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.

    2016-11-01

    The transported probability density function (TPDF) method offers generality across all combustion regimes, which makes it attractive for turbulent combustion simulations. However, modeling micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparing the results against both direct numerical simulation (DNS) and the conventional constant mechanical-to-scalar mixing rate model. This work is supported by NSFC 51476087 and 91441202.

  6. Modelling Field Bus Communications in Mixed-Signal Embedded Systems

    Directory of Open Access Journals (Sweden)

    Alassir Mohamad

    2008-01-01

    Full Text Available We present a modelling platform using the SystemC-AMS language to simulate field bus communications for embedded systems. Our platform includes the model of an I/O controller IP (in this specific case an I2C controller) that interfaces a master microprocessor with its peripherals on the field bus. Our platform shows the execution of the embedded software and its analog response on the lines of the bus. Moreover, it also takes into account the influence of the circuits' I/O by including their IBIS models in the SystemC-AMS description, as well as the bus line imperfections. Finally, we present simulation results to validate our platform and measure the overhead introduced by SystemC-AMS over a pure digital SystemC simulation.

  7. Modelling Field Bus Communications in Mixed-Signal Embedded Systems

    Directory of Open Access Journals (Sweden)

    Patrick Garda

    2008-08-01

    Full Text Available We present a modelling platform using the SystemC-AMS language to simulate field bus communications for embedded systems. Our platform includes the model of an I/O controller IP (in this specific case an I2C controller) that interfaces a master microprocessor with its peripherals on the field bus. Our platform shows the execution of the embedded software and its analog response on the lines of the bus. Moreover, it also takes into account the influence of the circuits' I/O by including their IBIS models in the SystemC-AMS description, as well as the bus line imperfections. Finally, we present simulation results to validate our platform and measure the overhead introduced by SystemC-AMS over a pure digital SystemC simulation.

  8. Mathematical modeling of a mixed flow spray dryer

    International Nuclear Information System (INIS)

    Kasiri, N.; Delkhan, F.

    2001-01-01

    In this paper a mathematical model has been developed to simulate the behavior of spray dryers with an up-flowing spray. The model is based on mass, energy and momentum balances on a single droplet, and mass and energy balances on the drying gas. The system of nonlinear differential equations thus obtained is solved to predict the changes in temperature, humidity, diameter, velocity components and density of the droplets, as well as the temperature and humidity changes of the drying gas. The predicted results were then compared with an industrially available set of results, and good agreement between the two is reported.

  9. Constraints on the mixing angle between ordinary and heavy leptons in a (V - A) model

    International Nuclear Information System (INIS)

    Hioki, Zenro

    1977-01-01

    The possibility of the mixing between ordinary and heavy leptons in a pure (V-A) model with SU(2) x U(1) gauge group is investigated. It is shown that to be consistent with the present experimental data on various neutral current reactions, this mixing must be small for any choice of the Weinberg angle in the case M sub(W)=M sub(Z) cos theta sub(W). The tri-muon production from the leptonic vertex through this mixing is also discussed. (auth.)

  10. Right-handed quark mixings in minimal left-right symmetric model with general CP violation

    International Nuclear Information System (INIS)

    Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.

    2007-01-01

    We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model which generally has both explicit and spontaneous CP violations. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W R and the CP phase of the Higgs vacuum expectation values

  11. Influence of social networks on latent choice of electric cars : a mixed logit specification using experimental design data

    NARCIS (Netherlands)

    Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    Electric cars can potentially make a substantial contribution to the reduction of pollution and noise. The size of this contribution depends on the acceptance of this new technology in the market. This paper reports on the design and results of an elaborate stated choice experiment to investigate

  12. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.

  13. Safety of Mixed Model Access Control in a Multilevel System

    Science.gov (United States)

    2014-06-01


  14. Conflicts Management Model in School: A Mixed Design Study

    Science.gov (United States)

    Dogan, Soner

    2016-01-01

    The aim of this study is to evaluate the causes of conflicts occurring in schools, according to the perceptions and views of teachers, together with the resolution strategies used, and to build a model based on the results obtained. The research uses an explanatory design that combines quantitative and qualitative methods. The quantitative part…

  15. Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models

    Science.gov (United States)

    Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.

    2016-02-01

    Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focuses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient ARedi, which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m2/s. Observations, however, suggest much larger values. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying ARedi on the amplitude of El Nino. We examine the effect of varying a spatially constant ARedi over a range of values similar to that seen in the IPCC AR5 models, as well as two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of ARedi should damp anomalies is borne out in the model, this damping is more than compensated by weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the warm pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.

  16. Decision-case mix model for analyzing variation in cesarean rates.

    Science.gov (United States)

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  17. Sneutrino mixing

    International Nuclear Information System (INIS)

    Grossman, Y.

    1997-10-01

    In supersymmetric models with nonvanishing Majorana neutrino masses, the sneutrino and antisneutrino mix. The conditions under which this mixing is experimentally observable are studied, and mass-splitting of the sneutrino mass eigenstates and sneutrino oscillation phenomena are analyzed

  18. Modelling of Wheat-Flour Dough Mixing as an Open-Loop Hysteretic Process

    Czech Academy of Sciences Publication Activity Database

    Anderssen, R.; Kružík, Martin

    2013-01-01

    Roč. 18, č. 2 (2013), s. 283-293 ISSN 1531-3492 R&D Projects: GA AV ČR IAA100750802 Keywords : Dissipation * Dough mixing * Rate-independent systems Subject RIV: BA - General Mathematics Impact factor: 0.628, year: 2013 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-modelling of wheat-flour dough mixing as an open-loop hysteretic process.pdf

  19. Color Mixing Correction for Post-printed Patterns on Colored Background Using Modified Particle Density Model

    OpenAIRE

    Suwa, Misako; Fujimoto, Katsuhito

    2006-01-01

    Color mixing occurs between background and foreground colors when a pattern is post-printed on a colored area because ink is not completely opaque. This paper proposes a new method for the correction of color mixing in line pattern such as characters and stamps, by using a modified particle density model. Parameters of the color correction can be calculated from two sets of foreground and background colors. By employing this method, the colors of foreground patterns o...

  20. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available A mixed traffic flow feature is present on urban arterials in China due to a large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow point of view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
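A minimal sketch of drawing platoon speeds from a two-component truncated Gaussian mixture (slow buses, faster cars) and propagating them to downstream arrival times; all speed parameters, truncation bounds, and the bus share are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_truncated_normal(mu, sigma, low, high, size):
    # Simple rejection sampler for one truncated Gaussian speed component
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mu, sigma, size)
        draw = draw[(draw >= low) & (draw <= high)]
        out = np.concatenate([out, draw])
    return out[:size]

def platoon_arrival_times(length_m=500.0, n=1000, w_bus=0.3):
    # Two-component speed mixture in m/s: buses (slow) and cars (fast)
    n_bus = int(w_bus * n)
    v_bus = sample_truncated_normal(8.0, 1.5, 3.0, 12.0, n_bus)
    v_car = sample_truncated_normal(14.0, 2.5, 6.0, 22.0, n - n_bus)
    v = np.concatenate([v_bus, v_car])
    # Arrival times at the downstream intersection; their spread is the
    # platoon dispersion the MPFDM describes at the macroscopic level
    return length_m / v

t = platoon_arrival_times()
print(t.mean(), t.std())
```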

  1. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))

  2. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for determining the initial number of Gaussian components and the choice of initial values for the algorithm using the kernel method. We show via simulation that this technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for a comprehensive method for detecting fraud calls in future work.
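The kernel-based initialization idea can be sketched in one dimension: seed EM with the local maxima of a Gaussian kernel density estimate, so both the number of components and their initial means come from the data. The bandwidth and the simulated data below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def kde_peaks(x, bandwidth, grid_size=200):
    # Gaussian kernel density on a grid; local maxima seed the EM means
    grid = np.linspace(x.min(), x.max(), grid_size)
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2).sum(axis=1)
    peaks = [grid[i] for i in range(1, grid_size - 1)
             if dens[i] > dens[i - 1] and dens[i] > dens[i + 1]]
    return np.array(peaks)

def em_gmm_1d(x, mu, n_iter=100):
    # Standard EM for a 1-D Gaussian mixture, initialized at the KDE peaks
    k = len(mu)
    w = np.full(k, 1.0 / k)
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

x = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 600)])
mu0 = kde_peaks(x, bandwidth=0.8)   # data-driven component count and means
w, mu, var = em_gmm_1d(x, mu0)
print(np.sort(mu))
```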

  3. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE

  4. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    Energy Technology Data Exchange (ETDEWEB)

    Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)

    2015-10-28

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  5. A model of radiative neutrino masses. Mixing and a possible fourth generation

    International Nuclear Information System (INIS)

    Babu, K.S.; Ma, E.; Pantaleone, J.

    1989-01-01

    We consider the phenomenological consequences of a recently proposed model with four lepton generations such that the three known neutrinos have radiatively induced Majorana masses. Mixing among generations in the presence of a heavy fourth neutrino necessitates a reevaluation of the usual experimental tests of the standard model. One interesting possibility is to have a τ lifetime longer than predicted by the standard three-generation model. Another is to have neutrino masses and mixing angles in the range needed for a natural explanation of the solar-neutrino puzzle in terms of the Mikheyev-Smirnov-Wolfenstein effect. (orig.)

  6. An applied model for the height of the daytime mixed layer and the entrainment zone

    DEFF Research Database (Denmark)

    Batchvarova, E.; Gryning, Sven-Erik

    1994-01-01

    A model is presented for the height of the mixed layer and the depth of the entrainment zone under near-neutral and unstable atmospheric conditions. It is based on the zero-order mixed layer height model of Batchvarova and Gryning (1991) and the parameterization of the entrainment zone depth … mixed-layer height: friction velocity, kinematic heat flux near the ground and potential temperature gradient in the free atmosphere above the entrainment zone. When information is available on the horizontal divergence of the large-scale flow field, the model also takes into account the effect of subsidence

  7. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
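The key property exploited here, that mixing weights on the unit circle preserve the covariance structure of the mixed Gaussian fields (cos²θ + sin²θ = 1), can be checked numerically; the "fields" below are plain i.i.d. standard normals standing in for conditional random fields:

```python
import numpy as np

rng = np.random.default_rng(3)

def mix_on_circle(z1, z2, theta):
    # Unit-circle weights: the mixture of two independent fields with the
    # same covariance model again has that covariance model
    return np.cos(theta) * z1 + np.sin(theta) * z2

# Two independent standard-normal "fields"
z1 = rng.standard_normal(10000)
z2 = rng.standard_normal(10000)

# n equally spaced weights on the unit circle, as in the modified algorithm
thetas = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
variances = [mix_on_circle(z1, z2, t).var() for t in thetas]
print(variances)  # all close to 1 up to sampling noise
```

Because the forward response at each conditioning location varies smoothly with θ, it can be interpolated around the circle instead of re-running the model, which is the source of the computational saving described above.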

  8. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimate is more accurate; the precision of the mixed distribution reliability model is thereby greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
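A mixed Weibull reliability function is a weighted sum of component survival functions, R(t) = Σ w_i · exp(-(t/η_i)^β_i) with Σ w_i = 1, one component per failure mode. The sketch below uses two hypothetical failure modes (early infant-mortality with β < 1 and wear-out with β > 1); the weights, shapes and scales are illustrative, not fitted engine data:

```python
import numpy as np

def mixed_weibull_reliability(t, weights, shapes, scales):
    # R(t) = sum_i w_i * exp(-(t/eta_i)^beta_i), with sum(w_i) = 1
    t = np.atleast_1d(t)[:, None]
    r = weights * np.exp(-(t / scales) ** shapes)
    return r.sum(axis=1)

# Two hypothetical failure modes: infant mortality and wear-out
w = np.array([0.3, 0.7])        # weight coefficients, sum to 1
beta = np.array([0.8, 3.0])     # shape parameters
eta = np.array([500.0, 2000.0])  # scale parameters (hours)

t = np.array([0.0, 100.0, 1000.0, 3000.0])
R = mixed_weibull_reliability(t, w, beta, eta)
print(R)
```

In the paper's approach the weights are dynamic and the parameters come from the correlation-coefficient optimization; here they are fixed purely for illustration.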

  9. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    Science.gov (United States)

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification in population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) of the form dx/dt = A(t)x + B(t), where B(t) is a nonparametric function estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty, and provides flexibility for accommodating possible structural model deficiencies. The resulting model is implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
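    The ODE structure dx/dt = A(t)x + B(t) with a flexible B(t) can be sketched numerically. This is a one-compartment toy, not the paper's model: a piecewise-linear function stands in for the penalized-spline B(t), the coefficients are invented, and forward Euler replaces the mixed-effects machinery.

```python
import numpy as np

# Toy scalar case of dx/dt = a*x + B(t); 'a' plays the role of A(t).
a = -0.5
knots = np.linspace(0.0, 10.0, 6)
coefs = np.array([1.0, 0.8, 0.4, 0.2, 0.1, 0.0])  # assumed basis weights

def B(t):
    # Piecewise-linear stand-in for the penalized-spline input function.
    return np.interp(t, knots, coefs)

# Forward Euler integration of the ODE from x(0) = 0.
dt, T = 0.01, 10.0
ts = np.arange(0.0, T + dt, dt)
x = np.zeros_like(ts)
for i in range(1, len(ts)):
    x[i] = x[i - 1] + dt * (a * x[i - 1] + B(ts[i - 1]))
```

    In the actual method the spline coefficients of B(t) are estimated jointly with the fixed and random effects, with a penalty controlling the smoothness of B(t).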

  10. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

    Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models under mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.

  11. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NARCIS (Netherlands)

    Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.

    2014-01-01

    A simple linear mixing model of heterogeneous soil-vegetation system and retrieval of component temperatures from directional remote sensing measurements by inverting this model is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR
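    The linear mixing model described in this record can be sketched as an inversion problem: the radiance observed at each view angle is a linear mixture of the soil and vegetation component emissions, weighted by the vegetation fraction in the footprint, so a least-squares fit over several angles recovers the two component temperatures. The fractions and temperatures below are invented, and sigma*T^4 is used as a simple emission proxy.

```python
import numpy as np

# Multi-angular observations: at view angle k the vegetation fraction is f[k];
# the observed radiance mixes the two component emissions linearly.
sigma = 5.67e-8
f = np.array([0.2, 0.4, 0.6, 0.8])           # vegetation fractions per angle
T_veg_true, T_soil_true = 300.0, 315.0
obs = f * sigma * T_veg_true**4 + (1 - f) * sigma * T_soil_true**4

# The model is linear in the component emissions, so a least-squares fit
# over the angles recovers the vegetation and soil emission terms.
Amat = np.column_stack([f, 1.0 - f])
(B_veg, B_soil), *_ = np.linalg.lstsq(Amat, obs, rcond=None)
T_veg = (B_veg / sigma) ** 0.25
T_soil = (B_soil / sigma) ** 0.25
```

    With noiseless synthetic data the two component temperatures are recovered exactly; with real thermal camera data, measurement noise and uncertain fractions limit the retrieval accuracy, which is what the evaluation in the record addresses.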

  12. Testing the family replication-model through B⁰-B̄⁰ mixing

    International Nuclear Information System (INIS)

    Datta, A.; Pati, J.C.

    1985-07-01

    It is observed that the family-replication idea, proposed in the context of a minimal preon model, necessarily implies maximal mixing (i.e. ΔM >> Γ) either in the B_s^0-B̄_s^0 or the B_d^0-B̄_d^0 system, in contrast to the standard model. (author)

  13. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  14. Examples of mixed-effects modeling with crossed random effects and with binomial data

    NARCIS (Netherlands)

    Quené, H.; van den Bergh, H.

    2008-01-01

    Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not

  15. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  16. Mixed-effects height–diameter models for ten conifers in the inland ...

    African Journals Online (AJOL)

    To demonstrate the utility of mixed-effects height–diameter models when conducting forest inventories, mixed-effects height–diameter models are presented for several commercially and ecologically important conifers in the inland Northwest of the USA. After obtaining height–diameter measurements from a plot/stand of ...

  17. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  18. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    Science.gov (United States)

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
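    The mixed-order BOD decay model has a simple closed form: for dC/dt = -k*C**n, the remaining BOD is C(t) = [C0**(1-n) - (1-n)*k*t]**(1/(1-n)) when n ≠ 1, reducing to the classical exponential when n = 1. A small sketch, with illustrative parameter values:

```python
import numpy as np

def bod_remaining(t, C0, k, n):
    """BOD remaining under dC/dt = -k*C**n (mixed-order; n=1 is classical)."""
    if n == 1.0:
        return C0 * np.exp(-k * t)
    base = C0 ** (1.0 - n) - (1.0 - n) * k * t
    return np.maximum(base, 0.0) ** (1.0 / (1.0 - n))

t = np.linspace(0.0, 10.0, 11)
first_order = bod_remaining(t, C0=10.0, k=0.23, n=1.0)
mixed_order = bod_remaining(t, C0=10.0, k=0.23, n=1.5)
```

    In the record's Bayesian setting, n is treated as a free parameter and estimated from the data alongside k and C0 rather than fixed in advance.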

  19. Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model

    Science.gov (United States)

    Pan, Cheng-Chang

    2012-01-01

    The marketing mix model was applied with a focus on Web media to re-strategize a Web-based Master's program at a southern state university in the U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes of repackaging the program (product) for prospective students…

  20. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    Science.gov (United States)

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  1. Computational model for turbulent flow around a grid spacer with mixing vane

    International Nuclear Information System (INIS)

    Tsutomu Ikeno; Takeo Kajishima

    2005-01-01

    Turbulent mixing coefficient and pressure drop are important factors in subchannel analysis for predicting the onset of DNB. However, universal correlations are difficult to obtain, since these factors are significantly affected by the geometry of the subchannel and of a grid spacer with mixing vane. Therefore, we propose a computational model to estimate these factors. Computational model: To represent the effect of the grid spacer geometry in the computational model, we applied a large eddy simulation (LES) technique coupled with an improved immersed-boundary method. In our previous work (Ikeno, et al., NURETH-10), detailed properties of turbulence in a subchannel were successfully investigated by developing the immersed-boundary method in LES. In this study, additional improvements are made: a new one-equation dynamic sub-grid scale (SGS) model is introduced to account for the complex geometry without any artificial modification, and higher-order accuracy is maintained by consistent treatment of the boundary conditions for velocity and pressure. NUMERICAL TEST AND DISCUSSION: Turbulent mixing coefficient and pressure drop are affected strongly by the arrangement and inclination of the mixing vane. Therefore, computations are carried out for both convolute and periodic arrangements, and for both 30-degree and 20-degree inclinations. The difference in turbulent mixing coefficient due to these factors is reasonably predicted by our method. (An example of this numerical test is shown in Fig. 1.) The turbulent flow in this problem includes unsteady separation behind the mixing vane and vortex shedding downstream. An anisotropic distribution of turbulent stress also appears in the rod gap. Therefore, our computational model is well suited to assessing the influence of the arrangement and inclination of the mixing vane. With a coarser computational mesh, one can screen several candidate spacer designs; then, with a finer mesh, a more quantitative analysis is possible. By such a scheme, we believe this method is useful

  2. Sensitivity of surface temperature to radiative forcing by contrail cirrus in a radiative-mixing model

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2017-11-01

    Full Text Available Earth's surface temperature sensitivity to radiative forcing (RF by contrail cirrus and the related RF efficacy relative to CO2 are investigated in a one-dimensional idealized model of the atmosphere. The model includes energy transport by shortwave (SW and longwave (LW radiation and by mixing in an otherwise fixed reference atmosphere (no other feedbacks. Mixing includes convective adjustment and turbulent diffusion, where the latter is related to the vertical component of mixing by large-scale eddies. The conceptual study shows that the surface temperature sensitivity to given contrail RF depends strongly on the timescales of energy transport by mixing and radiation. The timescales are derived for steady layered heating (ghost forcing and for a transient contrail cirrus case. The radiative timescales are shortest at the surface and shorter in the troposphere than in the mid-stratosphere. Without mixing, a large part of the energy induced into the upper troposphere by radiation due to contrails or similar disturbances gets lost to space before it can contribute to surface warming. Because of the different radiative forcing at the surface and at top of atmosphere (TOA and different radiative heating rate profiles in the troposphere, the local surface temperature sensitivity to stratosphere-adjusted RF is larger for SW than for LW contrail forcing. Without mixing, the surface energy budget is more important for surface warming than the TOA budget. Hence, surface warming by contrails is smaller than suggested by the net RF at TOA. For zero mixing, cooling by contrails cannot be excluded. This may in part explain low efficacy values for contrails found in previous global circulation model studies. Possible implications of this study are discussed. Since the results of this study are model dependent, they should be tested with a comprehensive climate model in the future.

  3. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia

    2009-01-01

    , hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version is mainly based on available primary data from the extended data freeze L2.3 at Laxemar (November 30, 2007). The data interpretation was carried out from November 2007 to September 2008. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling including transport modelling. During regular ChemNet meetings the results have been presented and discussed. The original works by the ChemNet modellers are presented in four level III reports containing complementary information for the bedrock hydrogeochemistry Laxemar Site Descriptive Model (SDM-Site Laxemar, R-08-93) level II report. There is also a fifth level III report: Fracture mineralogy of the Laxemar area (R-08-99). This report presents the modelling work performed by the UZ (Univ. of Zaragoza) group as part of the work plan for Laxemar-Simpevarp 2.2 and 2.3. The main processes determining the global geochemical evolution of the Laxemar-Simpevarp groundwater system are mixing and reaction processes. Mixing has taken place between different types of waters (end members) over time, making the discrimination of the main influences not always straightforward. Several lines of evidence suggest the input of dilute waters (cold or warm), at different stages, into a bedrock with pre-existing very saline groundwaters. Subsequently, marine water entered the system over the Littorina period (when the topography and the distance to the coast allowed it) and mixed with pre-existing groundwaters of variable salinity. In the Laxemar subarea mainland, the Littorina input occurred only locally and it has mostly been flushed out by the subsequent input of warm meteoric waters with a distinctive modern isotopic signature. 
In addition to mixing processes, and superimposed on their effects, different

  4. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia (Univ. of Zaragoza, Zaragoza (Spain))

    2009-01-15

    , hydrochemistry, hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version is mainly based on available primary data from the extended data freeze L2.3 at Laxemar (November 30, 2007). The data interpretation was carried out from November 2007 to September 2008. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling including transport modelling. During regular ChemNet meetings the results have been presented and discussed. The original works by the ChemNet modellers are presented in four level III reports containing complementary information for the bedrock hydrogeochemistry Laxemar Site Descriptive Model (SDM-Site Laxemar, R-08-93) level II report. There is also a fifth level III report: Fracture mineralogy of the Laxemar area (R-08-99). This report presents the modelling work performed by the UZ (Univ. of Zaragoza) group as part of the work plan for Laxemar-Simpevarp 2.2 and 2.3. The main processes determining the global geochemical evolution of the Laxemar-Simpevarp groundwater system are mixing and reaction processes. Mixing has taken place between different types of waters (end members) over time, making the discrimination of the main influences not always straightforward. Several lines of evidence suggest the input of dilute waters (cold or warm), at different stages, into a bedrock with pre-existing very saline groundwaters. Subsequently, marine water entered the system over the Littorina period (when the topography and the distance to the coast allowed it) and mixed with pre-existing groundwaters of variable salinity. In the Laxemar subarea mainland, the Littorina input occurred only locally and it has mostly been flushed out by the subsequent input of warm meteoric waters with a distinctive modern isotopic signature. 
In addition to mixing processes, and superimposed on their

  5. Mixing Phenomena in a Bottom Blown Copper Smelter: A Water Model Study

    Science.gov (United States)

    Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Akbar Rhamdhani, M.; Nguyen, Anh; Zhao, Baojun

    2015-03-01

    The first commercial bottom blown oxygen copper smelting furnace has been installed and operated at Dongying Fangyuan Nonferrous Metals since 2008. Significant advantages have been demonstrated in this technology, mainly due to its bottom blown oxygen-enriched gas. In this study, a 1:12 scaled-down model was set up to simulate the flow behavior and understand the mixing phenomena in the furnace. A single lance was used for gas blowing to establish a reliable research technique and a quantitative characterisation of the mixing behavior. Operating parameters such as the horizontal distance from the blowing lance, detector depth, bath height, and gas flow rate were adjusted to investigate the mixing time under different conditions. It was found that when the horizontal distance between the lance and the detector is within an effective stirring range, the mixing time decreases slightly with increasing horizontal distance. Outside this range, the mixing time increases with increasing horizontal distance, and the effect is more significant at the surface. The mixing time always decreases with increasing gas flow rate and bath height. An empirical relationship giving the mixing time as a function of gas flow rate and bath height has been established for the first time for a horizontal bottom blowing furnace.
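    Empirical mixing-time relationships of this kind are commonly fitted as power laws, tau = a * Q**b * H**c, which become linear after taking logarithms. The sketch below fits such a form to synthetic data; the coefficients and exponents are invented for illustration and are not the paper's values.

```python
import numpy as np

# Fit a generic power law tau = a * Q**b * H**c to synthetic mixing-time data.
rng = np.random.default_rng(1)
Q = rng.uniform(1.0, 5.0, 30)      # gas flow rate
H = rng.uniform(0.1, 0.4, 30)      # bath height
tau = 80.0 * Q**-0.4 * H**-0.6     # synthetic "measurements" (no noise)

# log(tau) = log(a) + b*log(Q) + c*log(H) -> ordinary least squares.
X = np.column_stack([np.ones_like(Q), np.log(Q), np.log(H)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
```

    Negative exponents on both Q and H are consistent with the record's finding that mixing time decreases with increasing gas flow rate and bath height.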

  6. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    Science.gov (United States)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
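    The decomposition in this record can be written down directly: AoA is the sum of the residual circulation transit time (RCTT) and the additional aging by mixing, and the mixing efficiency is read here as the relative increase in AoA due to mixing. The numbers below are illustrative, not model output.

```python
# AoA = RCTT + aging_by_mixing; mixing efficiency taken as the
# relative AoA increase by mixing (one simple reading of the definition).
aoa = 4.5    # stratospheric age of air, years (illustrative)
rctt = 2.8   # residual circulation transit time, years (illustrative)

aging_by_mixing = aoa - rctt
mixing_efficiency = aging_by_mixing / rctt
```

    With these numbers the efficiency is about 0.61, which falls inside the 0.24 to 1.02 spread reported across the models.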

  7. Translational mixed-effects PKPD modelling of recombinant human growth hormone - from hypophysectomized rat to patients

    DEFF Research Database (Denmark)

    Thorsted, A; Thygesen, P; Agersø, H

    2016-01-01

    BACKGROUND AND PURPOSE: We aimed to develop a mechanistic mixed-effects pharmacokinetic (PK)-pharmacodynamic (PD) (PKPD) model for recombinant human growth hormone (rhGH) in hypophysectomized rats and to predict the human PKPD relationship. EXPERIMENTAL APPROACH: A non-linear mixed-effects model...... was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs...... s.c. administration was over predicted. After correction of the human s.c. absorption model, the induction model for IGF-1 well described the human PKPD data. CONCLUSIONS: A translational mechanistic PKPD model for rhGH was successfully developed from experimental rat data. The model links...

  8. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    Science.gov (United States)

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  9. A New Model for Inclusive Sports? An Evaluation of Participants’ Experiences of Mixed Ability Rugby

    Directory of Open Access Journals (Sweden)

    Martino Corazza

    2017-06-01

    Full Text Available Sport has been recognised as a potential catalyst for social inclusion. The Mixed Ability Model represents an innovative approach to inclusive sport by encouraging disabled and non-disabled players to interact in a mainstream club environment. However, research on the impacts of the Model is currently lacking. This paper aims to contribute empirical data to this gap by evaluating participants’ experiences of Mixed Ability Rugby and highlighting implications for future initiatives. Primary qualitative data were collected within two Mixed Ability Rugby teams in the UK and Italy through online questionnaires and focus groups. Data were analysed using Simplican et al.’s (2015) model of social inclusion. The data show that Mixed Ability Rugby has significant potential for achieving inclusionary outcomes. Positive social impacts, reported by all participants regardless of (dis)ability, include enhanced social networks, an increase in social capital, personal development and fundamental perception shifts. Factors relevant to the Mixed Ability Model are identified that enhance these impacts and inclusionary outcomes. The mainstream setting was reportedly the most important, with further aspects including a supportive club environment and the promotion of self-advocacy. A ‘Wheel of Inclusion’ is developed that provides a useful basis for evaluating current inclusive sport initiatives and for designing new ones.

  10. A model for quasi parity-doublet spectra with strong coriolis mixing

    International Nuclear Information System (INIS)

    Minkov, N.; Drenska, S.; Strecker, M.

    2013-01-01

    The model of coherent quadrupole and octupole motion (CQOM) is combined with the reflection-asymmetric deformed shell model (DSM) in a way that allows a fully microscopic description of the Coriolis decoupling and K-mixing effects in the quasi parity-doublet spectra of odd-mass nuclei. In this approach the even-even core is considered within the CQOM model, while the odd nucleon is described within the DSM with pairing interaction. The Coriolis decoupling/mixing factors are calculated through a parity projection of the single-particle wave function. Expressions for the Coriolis-mixed quasi parity-doublet levels are obtained in the second order of perturbation theory, while the K-mixed core-plus-particle wave function is obtained in the first order. Expressions for the B(E1), B(E2) and B(E3) reduced probabilities for transitions within and between different quasi-doublets are obtained by using the total K-mixed wave function. The model scheme is elaborated in a form capable of describing the yrast and non-yrast quasi parity-doublet spectra in odd-mass nuclei. (author)

  11. Validation of mixing heights derived from the operational NWP models at the German weather service

    Energy Technology Data Exchange (ETDEWEB)

    Fay, B.; Schrodin, R.; Jacobsen, I. [Deutscher Wetterdienst, Offenbach (Germany); Engelbart, D. [Deutscher Wetterdienst, Meteorol. Observ. Lindenberg (Germany)

    1997-10-01

    NWP models incorporate an ever-increasing number of observations via four-dimensional data assimilation and are capable of providing comprehensive information about the atmosphere in both space and time. They describe not only near-surface parameters but also the vertical structure of the atmosphere. They operate daily, are well verified and are successfully used as meteorological pre-processors in large-scale dispersion modelling. Applications such as ozone forecasts, emission or power plant control calculations require highly resolved, reliable, and routine values of the temporal evolution of the mixing height (MH), which is a critical parameter in determining the mixing and transformation of substances and the resulting pollution levels near the ground. The aim of development at the German Weather Service is a straightforward mixing height scheme that uses only parameters derived from NWP model variables and thus automatically provides spatial and temporal fields of mixing heights on an operational basis. A universal parameter to describe stability is the Richardson number Ri. Compared to the usual diagnostic or rate equations, the Ri-number approach to determining mixing heights has the advantage of using not only surface-layer parameters but also the vertical structure of the boundary layer resolved in the NWP models. (au)
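    A common Richardson-number mixing-height diagnostic of the kind described here computes the bulk Richardson number from model levels and takes the first level where it exceeds a critical value. The sounding below is an invented illustration, not NWP output; 0.25 is a commonly used critical value.

```python
import numpy as np

# Bulk Richardson number from an idealised sounding:
# Ri_b(z) = g * z * (theta(z) - theta_sfc) / (theta_sfc * U(z)**2)
g = 9.81
z = np.array([10.0, 100.0, 300.0, 600.0, 1000.0, 1500.0])    # height, m
theta = np.array([300.0, 300.1, 300.1, 300.2, 302.5, 305.0])  # pot. temp., K
u = np.array([2.0, 5.0, 7.0, 8.0, 9.0, 10.0])                 # wind speed, m/s

ri_b = g * (theta - theta[0]) * z / (theta[0] * u**2)

# Mixing height: first level where Ri_b exceeds the critical value.
ri_crit = 0.25
above = np.nonzero(ri_b > ri_crit)[0]
mixing_height = z[above[0]] if above.size else z[-1]
```

    For this profile the stable layer above 600 m pushes Ri_b past the critical value at the 1000 m level, which the scheme reports as the mixing height.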

  12. Comparison of measured and modelled mixing heights during the Borex'95 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, T.; Astrup, P.; Joergensen, H.E.; Ott, S. [Risoe National Lab., Roskilde (Denmark); Soerensen, J.H. [Danish Meteorological Inst., Copenhagen (Denmark); Loefstroem, P. [National Environmental Research Inst., Roskilde (Denmark)

    1997-10-01

    A real-time modelling system designed for 'on-the-fly' assessment of atmospheric dispersion during accidental releases is being established within the framework of the European Union. It integrates real-time dispersion models for both local-scale and long-range transport with wind, turbulence and deposition models. As meteorological input, the system uses both in-situ measurements and on-line available meteorology. The resulting real-time dispersion system is called MET-RODOS. This paper focuses on the evaluation of the MET-RODOS system's built-in local-scale pre-processing software for real-time determination of the mixing height, an important parameter for local-scale dispersion assessments. The paper discusses the system's local-scale mixing height algorithms as well as its in-line mixing height acquisition from the DMI-HIRLAM model. Comparisons of the diurnal mixing height evolution are made with mixing heights measured from in-situ radiosonde data during the Borex'95 field trials, and recently also with remote-sensed (LIDAR) aerosol profiles measured at Risoe. (LN)

  13. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative energy policy goals. ► Uses logical inference to formulate equations for linear optimisation. ► Proposes an electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations, built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated with engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
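    The linear-optimisation step that such models end with can be sketched as a small LP. This toy (requires scipy) is not the paper's ontology-driven formulation: the technologies, costs, emission factors, demand and CO2 cap are all invented, and only the "solve for a generation mix under policy constraints" step is shown.

```python
import numpy as np
from scipy.optimize import linprog

# Toy generation-mix LP: minimise cost subject to meeting demand
# and a CO2 emissions cap. All numbers are illustrative.
techs = ["coal", "gas", "wind"]
cost = np.array([30.0, 50.0, 70.0])   # $/MWh
co2 = np.array([0.9, 0.4, 0.0])       # tCO2/MWh
demand = 100.0                        # MWh
co2_cap = 50.0                        # tCO2

res = linprog(
    c=cost,
    A_ub=[co2],          # total emissions <= cap
    b_ub=[co2_cap],
    A_eq=[np.ones(3)],   # generation meets demand exactly
    b_eq=[demand],
    bounds=[(0, None)] * 3,
)
mix = res.x
```

    In the prototype described by the record, the logic queries over the ontology would generate the objective and constraint set; here they are written out by hand.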

  14. Proposed model for fuel-coolant mixing during a core-melt accident

    International Nuclear Information System (INIS)

    Corradini, M.L.

    1983-01-01

    If complete failure of normal and emergency coolant flow occurs in a light water reactor, fission product decay heat would eventually cause melting of the reactor fuel and cladding. The core melt may then slump into the lower plenum and later into the reactor cavity and contact residual liquid water. A model is proposed to describe the fuel-coolant mixing process upon contact. The model is compared to intermediate scale experiments being conducted at Sandia. The modelling of this mixing process will aid in understanding three important processes: (1) fuel debris sizes upon quenching in water, (2) the hydrogen source term during fuel quench, and (3) the rate of steam production. Additional observations of Sandia data indicate that the steam explosion is affected by this mixing process

  15. Two-level mixed modeling of longitudinal pedigree data for genetic association analysis

    DEFF Research Database (Denmark)

    Tan, Q.

    2013-01-01

    of follow-up. Approaches have been proposed to integrate kinship correlation into the mixed effect models to explicitly model the genetic relationship which have been proven as an efficient way for dealing with sample clustering in pedigree data. Although useful for adjusting relatedness in the mixed...... assess the genetic associations with the mean level and the rate of change in a phenotype both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees......Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees which could affect statistical assessment of the genetic effects on both the mean level of the phenotype and its rate of change over the time...

  16. Laminar/transition sweeping flow-mixing model for wire-wrapped LMFBR assemblies

    International Nuclear Information System (INIS)

    Burns, K.F.; Rohsenow, W.M.; Todreas, N.E.

    1980-07-01

    Recent interest in analyzing the thermal hydraulic characteristics of LMFBR assemblies operating in the mixed convection regime motivates the extension of the aforementioned turbulent sweeping flow model to low Reynolds number flows. The accuracy to which knowledge of the mixing parameters is required has not been well determined, due to the increased influence of conduction and buoyancy effects with respect to energy transport at low Reynolds numbers. This study represents a best-estimate attempt to correlate the existing low Reynolds number sweeping flow data. The laminar/transition model which is presented is expected to be useful in analyzing mixed convection conditions. However, the justification for making additional improvements is contingent upon two factors. First, the ability of the proposed laminar/transition model to predict additional low Reynolds number sweeping flow data for other geometries needs to be investigated. Secondly, the sensitivity of temperature predictions to uncertainties in the values of the sweeping flow parameters should be quantified

  17. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
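The record above ranks candidate lactation curves by information criteria after fitting them with PROC NLMIXED. As an illustrative sketch only (not the authors' SAS code, and without the random-effects part of their model), the Wood curve can be fitted to hypothetical monthly FPR records in Python and scored with a Gaussian AIC:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Hypothetical monthly fat-to-protein-ratio (FPR) test-day records.
t = np.arange(1, 11, dtype=float)            # months in milk
rng = np.random.default_rng(0)
y = wood(t, 1.4, -0.15, 0.02) + rng.normal(0, 0.01, t.size)

params, _ = curve_fit(wood, t, y, p0=[1.0, -0.1, 0.01])
resid = y - wood(t, *params)
n, k = t.size, 3
sigma2 = np.mean(resid**2)
# Gaussian log-likelihood-based AIC; lower values indicate a better fit,
# which is how competing curves (Brody, Wood, Dhanoa, ...) would be ranked.
aic = n * np.log(2 * np.pi * sigma2) + n + 2 * (k + 1)
print(round(aic, 1))
```

The same fit-then-score loop would be repeated for each candidate function, keeping the curve with the smallest AIC/BIC.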

  18. Simulation of annual cycles of phytoplankton, zooplankton and nutrients using a mixed layer model coupled with a biological model

    OpenAIRE

    Troupin, Charles

    2006-01-01

    In oceanography, the mixed layer refers to the near surface part of the water column where physical and biological variables are distributed quasi homogeneously. Its depth depends on conditions at the air-sea interface (heat and freshwater fluxes, wind stress) and on the characteristics of the flow (stratification, shear), and has a strong influence on biological dynamics. The aim of this work is to model the behaviour of the mixed layer in waters situated to the south of Gr...

  19. Identifying response styles: A latent-class bilinear multinomial logit model

    NARCIS (Netherlands)

    van Rosmalen, J.; van Herk, H.; Groenen, P.J.F.

    2010-01-01

    Respondents can vary strongly in the way they use rating scales. Specifically, respondents can exhibit a variety of response styles, which threatens the validity of the responses. The purpose of this article is to investigate how response style and content of the items affect rating scale responses.

  20. Generalized Partial Least Squares Approach for Nominal Multinomial Logit Regression Models with a Functional Covariate

    Science.gov (United States)

    Albaqshi, Amani Mohammed H.

    2017-01-01

    Functional Data Analysis (FDA) has attracted substantial attention for the last two decades. Within FDA, classifying curves into two or more categories is consistently of interest to scientists, but multi-class prediction within FDA is challenged in that most classification tools have been limited to binary response applications. The functional…

  1. INCLUSION OF THE LATENT PERSONALITY VARIABLE IN MULTINOMIAL LOGIT MODELS USING THE 16PF PSYCHOMETRIC TEST

    Directory of Open Access Journals (Sweden)

    JORGE E. CÓRDOBA MAQUILÓN

    2012-01-01

    Full Text Available Travel demand models mainly use modal attributes and socioeconomic characteristics as explanatory variables. It has also been established that attitudes and perceptions influence user behaviour. However, the individual's psychological variables condition user conduct. In this study, the latent variable personality was included in the estimation of a hybrid discrete choice model, which is a good alternative for incorporating the effects of subjective factors. The latent variable personality was assessed with the internationally validated 16PF psychometric test. The article analyses the results of applying this model to a population of university employees and lecturers, and also proposes a path for the use of psychometric tests in hybrid discrete choice models. Our results show that hybrid models that include latent psychological variables outperform traditional models that ignore the effects of user behaviour.

  2. Financial gradualism and banking crises in North Africa region: an investigation by a panel logit model

    OpenAIRE

    KHATTAB, Ahmed; IHADIYAN, Abid

    2017-01-01

    Abstract. In order to overcome the troubles of the crisis in the seventies, North African countries adopted financial liberalization policies to enhance their economic growth. Moreover, these policies have affected the stability of their banking systems. The purpose of this study is to test the impact of financial liberalization on the probability of appearance of banking crises, covering a sample of four countries of the North Africa region during the period 1970-2003, by using a pane...

  3. Reasons for not buying a car : a probit-selection multinomial logit choice model

    NARCIS (Netherlands)

    Gao, Y.; Rasouli, S.; Timmermans, H.J.P.

    2014-01-01

    Generating and maintaining gradients of cell density and extracellular matrix (ECM) components is a prerequisite for the development of functionality of healthy tissue. Therefore, gaining insights into the drivers of spatial organization of cells and the role of ECM during tissue morphogenesis is

  4. Minimum Wages and Teenagers' Enrollment--Employment Outcomes: A Multinominal Logit Model.

    Science.gov (United States)

    Ehrenberg, Ronald G.; Marcus, Alan J.

    1982-01-01

    This paper tests the hypothesis that the effect of minimum wage legislation on teenagers' education decisions is asymmetrical across family income classes, with the legislation inducing children from low-income families to reduce their levels of schooling and children from higher-income families to increase their educational attainment. (Author)

  5. Identifying Unknown Response Styles: A Latent-Class Bilinear Multinomial Logit Model

    NARCIS (Netherlands)

    J.M. van Rosmalen (Joost); H. van Herk (Hester); P.J.F. Groenen (Patrick)

    2007-01-01

    Respondents can vary significantly in the way they use rating scales. Specifically, respondents can exhibit varying degrees of response style, which threatens the validity of the responses. The purpose of this article is to investigate to what extent rating scale responses show response

  6. Constraints from stellar models on mixing as a viable explanation of abundance anomalies in globular clusters

    International Nuclear Information System (INIS)

    Vandenberg, D.A.; Smith, G.H.

    1988-01-01

    Published observational data on changes in the surface abundances of evolving stars in globular clusters are compiled and compared with the predictions of theoretical evolutionary sequences (for stars of mass 0.8 solar mass and metallicity Z = 0.0001 or mass 0.9 solar mass and Z = 0.006) and of models incorporating enhanced envelope-interior mixing at various evolutionary phases. The results are presented in graphs and characterized in detail. It is found that mixing models of CN bimodality in globular-cluster stars can encounter difficulties when abundance anomalies appear early in the evolution of the star. 63 references

  7. A multilevel nonlinear mixed-effects approach to model growth in pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Danfær, Allan Christian; Sørensen, H.

    2010-01-01

    Growth functions have been used to predict market weight of pigs and maximize return over feed costs. This study was undertaken to compare 4 growth functions and methods of analyzing data, particularly one that considers nonlinear repeated measures. Data were collected from an experiment with 40...... pigs maintained from birth to maturity and their BW measured weekly or every 2 wk up to 1,007 d. Gompertz, logistic, Bridges, and Lopez functions were fitted to the data and compared using information criteria. For each function, a multilevel nonlinear mixed effects model was employed because....... Furthermore, studies should consider adding continuous autoregressive process when analyzing nonlinear mixed models with repeated measures....
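As a minimal Python sketch of the idea behind the record above (with hypothetical data, not the authors' pigs): fit a Gompertz function to each animal separately and summarize the between-animal spread of the mature-weight parameter, which the full nonlinear mixed-effects model would instead treat as a random effect estimated jointly across animals:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    """Gompertz growth: W(t) = A * exp(-b * exp(-k * t))."""
    return A * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(1)
t = np.linspace(0, 1000, 50)                 # days of age
asymptotes = []
for _ in range(5):                           # five hypothetical pigs
    A_true = rng.normal(250, 20)             # animal-specific mature BW (kg)
    w = gompertz(t, A_true, 4.0, 0.006) + rng.normal(0, 2.0, t.size)
    (A, b, k), _ = curve_fit(gompertz, t, w, p0=[240, 4, 0.005], maxfev=10000)
    asymptotes.append(A)

# Between-animal mean and spread of the fitted asymptote.
print(np.mean(asymptotes), np.std(asymptotes))
```

The two-stage approach shown here ignores the shared information across animals; the mixed-effects formulation in the study pools it, which is why it also supports adding a continuous autoregressive process for the repeated measures.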

  8. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  9. Mixed-order phase transition in a one-dimensional model.

    Science.gov (United States)

    Bar, Amir; Mukamel, David

    2014-01-10

    We introduce and analyze an exactly soluble one-dimensional Ising model with long range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous as in first order transitions while the correlation length diverges as in second order transitions. Such transitions are known to appear in diverse classes of models that are seemingly unrelated. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.

  10. Modeling Photodetachment from HO2- Using the pd Case of the Generalized Mixed Character Molecular Orbital Model

    Science.gov (United States)

    Blackstone, Christopher C.; Sanov, Andrei

    2016-06-01

    Using the generalized model for photodetachment of electrons from mixed-character molecular orbitals, we gain insight into the nature of the HOMO of HO2- by treating it as a coherent superposition of one p- and one d-type atomic orbital. Fitting the pd model function to the ab initio calculated HOMO of HO2- yields a fractional d-character, γp, of 0.979. The modeled curve of the anisotropy parameter, β, as a function of electron kinetic energy for a pd-type mixed character orbital is matched to the experimental data.

  11. Modeling the oxygen uptake kinetics during exercise testing of patients with chronic obstructive pulmonary diseases using nonlinear mixed models

    DEFF Research Database (Denmark)

    Baty, Florent; Ritz, Christian; van Gestel, Arnoldus

    2016-01-01

    describe functionality of the R package medrc that extends the framework of the commonly used packages drc and nlme and allows fitting nonlinear mixed effects models for automated nonlinear regression modeling. The methodology was applied to a data set including 6MWT VO2 kinetics from 61...... patients with chronic obstructive pulmonary disease (disease severity stage II to IV). The mixed effects approach was compared to a traditional curve-by-curve approach. RESULTS: A six-parameter nonlinear regression model was jointly fitted to the set of VO2 kinetics. Significant...

  12. Mixed butanols addition to gasoline surrogates: Shock tube ignition delay time measurements and chemical kinetic modeling

    KAUST Repository

    AlRamadan, Abdullah S.

    2015-10-01

    The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to the conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of mixed butanols stoichiometric mixtures were measured at 20 and 40 bar over a temperature range of 800-1200 K. Next, 10 vol% and 20 vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated in the shock tube conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850 K), the model shows that the mixed butanols addition to TPRFs causes the ignition delay times to increase and hence behaves like an octane booster at engine-like conditions. © 2015 The Combustion Institute.
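High-temperature ignition delay times of the kind measured above are commonly summarized with an Arrhenius-type correlation, tau = A * exp(Ta / T). The sketch below is illustrative only (the numbers are hypothetical, not from this study) and shows how the correlation parameters fall out of a linear fit of ln(tau) against 1/T:

```python
import numpy as np

# Hypothetical high-temperature ignition delay data (tau in microseconds),
# assumed to follow tau = A * exp(Ta / T) exactly for this demonstration.
T = np.array([1000.0, 1050.0, 1100.0, 1150.0, 1200.0])   # temperature (K)
tau = 0.01 * np.exp(15000.0 / T)

# Linearise: ln(tau) = ln(A) + Ta * (1/T), then fit a straight line.
coef = np.polyfit(1.0 / T, np.log(tau), 1)
Ta, lnA = coef[0], coef[1]
print(Ta, np.exp(lnA))
```

On an Arrhenius plot (ln tau versus 1000/T) the slope gives the global activation temperature Ta, the standard way shock-tube data at 20 and 40 bar would each be condensed to two parameters.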

  13. BWR MARK I pressure suppression pool mixing and stratification analysis using GOTHIC lumped parameter modeling methodology

    International Nuclear Information System (INIS)

    Ozdemir, Ozkan Emre; George, Thomas L.

    2015-01-01

    As a part of the GOTHIC (GOTHIC incorporates technology developed for the electric power industry under the sponsorship of EPRI.) Fukushima Technical Evaluation project (EPRI, 2014a, b, 2015), GOTHIC (EPRI, 2014c) has been benchmarked against test data for pool stratification (EPRI, 2014a, b, Ozdemir and George, 2013). These tests confirmed GOTHIC’s ability to simulate pool mixing and stratification under a variety of anticipated suppression pool operating conditions. The multidimensional modeling requires long simulation times for events that may occur over a period of hours or days. For these scenarios a lumped model of the pressure suppression chamber is desirable to maintain reasonable simulation times. However, a lumped model for the pool is not able to predict the effects of pool stratification that can influence the overall containment response. The main objective of this work is on the development of a correlation that can be used to estimate pool mixing and stratification effects in a lumped modeling approach. A simplified lumped GOTHIC model that includes a two zone model for the suppression pool with controlled circulation between the upper and lower zones was constructed. A pump and associated flow connections are included to provide mixing between the upper and lower pool volumes. Using numerically generated data from a multidimensional GOTHIC model for the suppression pool, a correlation was developed for the mixing rate between the upper and lower pool volumes in a two-zone, lumped model. The mixing rate depends on the pool subcooling, the steam injection rate and the injection depth

  14. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
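The published BMC model handles multiple isotopes and full Bayesian uncertainty; the sketch below is a deliberately reduced Monte Carlo version with hypothetical numbers, showing the core idea the record describes: propagating end-member variability into the source-fraction estimate instead of treating end-members as fixed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical two-end-member d18O system (all values illustrative only).
snow   = rng.normal(-22.0, 0.5, n)   # end-member A, with end-member spread
ice    = rng.normal(-18.0, 0.5, n)   # end-member B
sample = rng.normal(-19.0, 0.2, n)   # measured mixture, analytical error

# Fraction of end-member A in each Monte Carlo realisation.
f = (sample - ice) / (snow - ice)
f = f[(f >= 0) & (f <= 1)]           # keep physically meaningful draws

print(round(np.median(f), 2),
      round(np.percentile(f, 97.5) - np.percentile(f, 2.5), 2))
```

The width of the resulting distribution of f is exactly the "error induced by variability in the end-member compositions" that a deterministic two-point mixing calculation would miss.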

  15. Individual taper models for natural cedar and Taurus fir mixed stands of Bucak Region, Turkey

    Directory of Open Access Journals (Sweden)

    Ramazan Özçelik

    2017-11-01

    Full Text Available In this study, we assessed the performance of different types of taper equations for predicting tree diameters at specific heights and total stem volumes for mixed stands of Taurus cedar (Cedrus libani A. Rich.) and Taurus fir (Abies cilicica Carr.). We used data from mixed stands containing a total of 131 cedar and 124 Taurus fir trees. We evaluated six commonly used and well-known forestry taper functions developed by a variety of researchers (Biging (1984), Zakrzewski (1999), Muhairwe (1999), Fang et al. (2000), Kozak (2004), and Sharma and Zhang (2004)). To address problems related to autocorrelation and multicollinearity in the hierarchical data associated with the construction of taper models, we used appropriate statistical procedures for the model fitting. We compared model performances based on the analysis of three goodness-of-fit statistics and found the compatible segmented model of Fang et al. (2000) to be superior in describing the stem profile and stem volume of both tree species in mixed stands. The equation used by Zakrzewski (1999) exhibited the poorest fitting results of the three taper equations. In general, we found segmented taper equations to provide more accurate predictions than variable-form models for both tree species. Results from the non-linear extra sum of squares method indicate that stem tapers differ among tree species in mixed stands. Therefore, a different taper function should be used for each tree species in mixed stands in the Bucak district. Using individual-specific taper equations yields more robust estimations and, therefore, will enhance the prediction accuracy of diameters at different heights and volumes in mixed stands.

  16. Scattering of long folded strings and mixed correlators in the two-matrix model

    International Nuclear Information System (INIS)

    Bourgine, J.-E.; Hosomichi, K.; Kostov, I.; Matsuo, Y.

    2008-01-01

    We study the interactions of Maldacena's long folded strings in two-dimensional string theory. We find the amplitude for a state containing two long folded strings to come and go back to infinity. We calculate this amplitude both in the worldsheet theory and in the dual matrix model, the matrix quantum mechanics. The matrix model description allows to evaluate the amplitudes involving any number of long strings, which are given by the mixed trace correlators in an effective two-matrix model

  17. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, commonly assumed symmetric distributions for model errors are substituted by asymmetric distribution to account for skewness. Further, informative missing data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  18. Modeling Bimolecular Reactive Transport With Mixing-Limitation: Theory and Application to Column Experiments

    Science.gov (United States)

    Ginn, T. R.

    2018-01-01

    The challenge of determining mixing extent of solutions undergoing advective-dispersive-diffusive transport is well known. In particular, reaction extent between displacing and displaced solutes depends on mixing at the pore scale, which is generally finer than the continuum-scale quantification that relies on dispersive fluxes. Here a novel mobile-mobile mass transfer approach is developed to distinguish diffusive mixing from dispersive spreading in one-dimensional transport involving small-scale velocity variations with some correlation, such as occurs in hydrodynamic dispersion, in which short-range ballistic transports give rise to dispersed but not mixed segregation zones, termed here ballisticules. When considering transport of a single solution, this approach distinguishes self-diffusive mixing from spreading, and in the case of displacement of one solution by another, each containing a participant reactant of an irreversible bimolecular reaction, this results in time-delayed diffusive mixing of reactants. The approach generates models for both kinetically controlled and equilibrium irreversible reaction cases, while honoring independently measured reaction rates and dispersivities. The mathematical solution for the equilibrium case is a simple analytical expression. The approach is applied to published experimental data on bimolecular reactions for homogeneous porous media under postasymptotic dispersive conditions with good results.

  19. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    Science.gov (United States)

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
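The article above fits multilevel models of consumers nested within health plans; the reduced numpy sketch below (hypothetical data, single-level regression adjustment rather than the full multilevel model) only illustrates why adjusting for a consumer characteristic such as age can change how plans compare:

```python
import numpy as np

rng = np.random.default_rng(3)
plans, per = 6, 200
plan_effect = rng.normal(0, 0.3, plans)          # true plan quality
rows = []
for p in range(plans):
    age = rng.normal(40 + 3 * p, 10, per)        # plans differ in age mix
    # Older respondents rate more positively: a classic case-mix effect.
    rating = 7 + plan_effect[p] + 0.03 * (age - 40) + rng.normal(0, 1, per)
    rows.append((p, age, rating))

# Case-mix adjustment: regress out age, then compare plan means of residuals.
age_all = np.concatenate([r[1] for r in rows])
y_all = np.concatenate([r[2] for r in rows])
X = np.column_stack([np.ones(age_all.size), age_all - 40])
beta, *_ = np.linalg.lstsq(X, y_all, rcond=None)
resid = y_all - X @ beta
adj_means = [resid[i * per:(i + 1) * per].mean() for i in range(plans)]
raw_means = [r[2].mean() for r in rows]
print(np.argmax(raw_means), np.argmax(adj_means))
```

In the multilevel formulation the plan differences would instead be modeled as random intercepts, which also quantifies how much of the between-plan variance each case-mix adjuster explains.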

  20. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
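A small fixed-effects-only sketch of the constrained-spline idea (hypothetical data, not the authors' SAS/S-plus implementation): a truncated power basis makes the continuity constraint implicit, because each (t - knot)_+^degree term is zero and continuous at its knot, so any coefficient vector yields a continuous piecewise polynomial:

```python
import numpy as np

def tpb(t, knots, degree=1):
    """Truncated power basis: polynomial terms plus (t - k)_+^degree terms.
    The '+' truncation enforces continuity (and smoothness up to degree - 1)
    at each knot automatically -- the implicitly constrained formulation."""
    cols = [t**d for d in range(degree + 1)]
    cols += [np.clip(t - k, 0, None)**degree for k in knots]
    return np.column_stack(cols)

t = np.linspace(0, 10, 101)
y = np.where(t < 5, 1.0 * t, 5 + 0.2 * (t - 5))   # slope change at t = 5

X = tpb(t, knots=[5.0], degree=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta
print(beta, np.max(np.abs(yhat - y)))
```

Here the fitted coefficient on the truncated term is the change in slope at the knot; in the mixed-model setting the same basis columns can enter the fixed and/or random effects design matrices.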

  1. Numerical Modeling of Mixing of Chemically Reacting, Non-Newtonian Slurry for Tank Waste Retrieval

    International Nuclear Information System (INIS)

    Yuen, David A.; Onishi, Yasuo; Rustad, James R.; Michener, Thomas E.; Felmy, Andrew R.; Ten, Arkady A.; Hier, Catherine A.

    2000-01-01

    Many highly radioactive wastes will be retrieved by installing mixer pumps that inject high-speed jets to stir up the sludge, saltcake, and supernatant liquid in the tank, blending them into a slurry. This slurry will then be pumped out of the tank into a waste treatment facility. Our objectives are to investigate interactions-chemical reactions, waste rheology, and slurry mixing-occurring during the retrieval operation and to provide a scientific basis for the waste retrieval decision-making process. Specific objectives are to: (1) Evaluate numerical modeling of chemically active, non-Newtonian tank waste mixing, coupled with chemical reactions and realistic rheology; (2) Conduct numerical modeling analysis of local and global mixing of non-Newtonian and Newtonian slurries; and (3) Provide the bases to develop a scientifically justifiable, decision-making support tool for the tank waste retrieval operation

  2. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the five source soils, and fingerprint properties were selected using different procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemical conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of

  3. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    Science.gov (United States)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
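
A minimal sketch of the density at the heart of this model: a point mass pi at zero plus a Beta(a, b) density on (0, 1) for the scaled log10(Pc) values. The scaling range [-10, 0] and all parameter values below are assumptions for illustration, not values from the paper.

```python
import math

def zib_logpdf(y, pi, a, b):
    """Log-density of a zero-inflated Beta distribution on [0, 1)."""
    if y == 0.0:
        return math.log(pi)                       # point mass at zero
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (math.log(1.0 - pi)
            + (a - 1.0) * math.log(y)
            + (b - 1.0) * math.log(1.0 - y)
            - log_beta)

def scale_log10_pc(log10_pc, lo=-10.0, hi=0.0):
    """Map log10(Pc) from an assumed practical range onto (0, 1)."""
    return (log10_pc - lo) / (hi - lo)

y = scale_log10_pc(-4.0)    # Pc = 1e-4  ->  y = 0.6
ll = zib_logpdf(y, pi=0.3, a=2.0, b=2.0)
```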

  4. Chlorophyll modulation of mixed layer thermodynamics in a mixed-layer isopycnal General Circulation Model - An example from Arabian Sea and equatorial Pacific

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.; Frouin, R.

    is influenced not only by local vertical mixing but also by horizontal convergence of mass and heat, a mixed layer model must consider both full dynamics due to the use of primitive equations and a parameterization for the vertical mass transfer and related... is dynamically determined without such a constraint. Instantaneous atmospheric fields are interpolated from the monthly means. Monthly mean climatology of chlorophyll pigment concentrations were obtained from the Coastal Zone Color Scanner (CZCS) from...

  5. Deviations from tribimaximal mixing due to the vacuum expectation value misalignment in A4 models

    International Nuclear Information System (INIS)

    Barry, James; Rodejohann, Werner

    2010-01-01

    The addition of an A4 family symmetry and an extended Higgs sector to the standard model can generate the tribimaximal mixing pattern for leptons, assuming the correct vacuum expectation value alignment of the Higgs scalars. Deviating from this alignment affects the predictions for the neutrino oscillation and neutrino mass observables. An attempt is made to classify the plethora of models in the literature with respect to the chosen A4 particle assignments. Of these models, two particularly popular examples have been analyzed for deviations from tribimaximal mixing by perturbing the vacuum expectation value alignments. The effect of perturbations on the mixing angle observables is studied. However, it is only investigation of the mass-related observables (the effective mass for neutrinoless double beta decay and the sum of masses from cosmology) that can lead to the exclusion of particular models by constraints from future data, which indicates the importance of neutrino mass in disentangling models. The models have also been tested for fine-tuning of the parameters. Furthermore, a well-known seesaw model is generalized to include additional scalars, which transform as representations of A4 not included in the original model.

  6. CP violation for electroweak baryogenesis from mixing of standard model and heavy vector quarks

    International Nuclear Information System (INIS)

    McDonald, J.

    1996-01-01

    It is known that the CP violation in the minimal standard model is insufficient to explain the observed baryon asymmetry of the Universe in the context of electroweak baryogenesis. In this paper we consider the possibility that the additional CP violation required could originate in the mixing of the standard model quarks with heavy vector quark pairs. We consider the baryon asymmetry in the context of the spontaneous baryogenesis scenario. It is shown that, in general, the CP-violating phase entering the mass matrix of the standard model and heavy vector quarks must be space dependent in order to produce a baryon asymmetry, suggesting that the additional CP violation must be spontaneous in nature. This is true for the simplest models which mix the standard model and heavy vector quarks. We derive a charge potential term for the model by diagonalizing the quark mass matrix in the presence of the electroweak bubble wall, which turns out to be quite different from the fermionic hypercharge potentials usually considered in spontaneous baryogenesis models, and obtain the rate of baryon number generation within the wall. We find, for the particular example where the standard model quarks mix with weak-isodoublet heavy vector quarks via the expectation value of a gauge singlet scalar, that we can account for the observed baryon asymmetry with conservative estimates for the uncertain parameters of electroweak baryogenesis, provided that the heavy vector quarks are not heavier than a few hundred GeV and that the coupling of the standard model quarks to the heavy vector quarks and gauge singlet scalars is not much smaller than order 1, corresponding to a mixing angle of the heavy vector quarks and standard model quarks not much smaller than order 10^-1.

  7. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    Science.gov (United States)

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  8. MATRIX (Multiconfiguration Aerosol TRacker of mIXing state): an aerosol microphysical module for global atmospheric models

    OpenAIRE

    Bauer , S. E.; Wright , D.; Koch , D.; Lewis , E. R.; Mcgraw , R.; Chang , L.-S.; Schwartz , S. E.; Ruedy , R.

    2008-01-01

    A new aerosol microphysical module MATRIX, the Multiconfiguration Aerosol TRacker of mIXing state, and its application in the Goddard Institute for Space Studies (GISS) climate model (ModelE) are described. This module, which is based on the quadrature method of moments (QMOM), represents nucleation, condensation, coagulation, internal and external mixing, and cloud-drop activation and provides aerosol particle mass and number concentration and particle size information for up to 16 mixed-mod...

  9. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    Science.gov (United States)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
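
The model structure described above (a periodic fixed-effect mean plus a temporally correlated random component and an iid component) can also be used generatively. A sketch with invented parameters, using an AR(1) process for the exponentially decaying correlation option:

```python
import numpy as np

rng = np.random.default_rng(42)
n_months = 240                                     # 20 years, monthly
t = np.arange(n_months)

# fixed effect: periodic mean with a 12-month period (values invented)
mean = 1.8 + 0.6 * np.cos(2 * np.pi * t / 12.0)

# random effect 1: exponentially correlated component (AR(1))
phi = 0.7
ar = np.zeros(n_months)
eps = rng.normal(0.0, 0.25, n_months)
for i in range(1, n_months):
    ar[i] = phi * ar[i - 1] + eps[i]

# random effect 2: iid component
iid = rng.normal(0.0, 0.15, n_months)

hs = mean + ar + iid        # synthetic significant wave height (m)
```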

  10. Neutrino bilarge mixing and flavor physics in the flipped SU(5) model

    Energy Technology Data Exchange (ETDEWEB)

    Huang Chaoshang; Li Tianjun; Liao Wei E-mail: liaow@ictp.trieste.it

    2003-11-24

    We have constructed a specific supersymmetric flipped SU(5) GUT model in which bilarge neutrino mixing is incorporated. Because the up-type and down-type quarks in the model are flipped in the representations ten and five with respect to the usual SU(5), the radiatively generated flavor mixing in squark mass matrices due to the large neutrino mixing has a pattern different from those in the conventional SU(5) and SO(10) supersymmetric GUTs. This leads to phenomenological consequences quite different from SU(5) or SO(10) supersymmetric GUT models. That is, it has almost no impact on B physics. On the contrary, the model has effects in top and charm physics as well as lepton physics. In particular, it gives promising prediction on the mass difference, {delta}M{sub D}, of the D-D-bar mixing which for some ranges of the parameter space with large tan{beta} can be at the order of 10{sup 9} {Dirac_h} s{sup -1}, one order of magnitude smaller than the experimental upper bound. In some regions of the parameter space {delta}M{sub D} can saturate the present bound. For these ranges of parameter space, t{yields}u,c+h{sup 0} can reach 10{sup -5}-10{sup -6} which would be observed at the LHC and future {gamma}-{gamma} colliders.

  11. Mixing Studies in a 1:60 scale model of a cornerfired boiler with OFA

    DEFF Research Database (Denmark)

    Matlok, Simon; Scheel Larsen, Poul; Gjernes, Erik

    1998-01-01

    In a model of a boiler, concentration distributions of injected gas into a swirling bulk flow are determined from quantitative laser-sheet visualization. Together with LDA-measurements of velocity fields this describes the mixing process and its efficiency expressed by several measures (unmixedness...

  12. Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study

    Science.gov (United States)

    Ogilvie, Emily; McCrudden, Matthew T.

    2017-01-01

    An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…

  13. Development of a model for case-mix adjustment of pressure ulcer prevalence rates.

    NARCIS (Netherlands)

    Bours, G.J.J.W.; Halfens, J.; Berger, M.P.; Abu-Saad, H.H.; Grol, R.P.T.M.

    2003-01-01

    BACKGROUND: Acute care hospitals participating in the Dutch national pressure ulcer prevalence survey use the results of this survey to compare their outcomes and assess their quality of care regarding pressure ulcer prevention. The development of a model for case-mix adjustment is essential for the

  14. Analytical model of asymmetrical Mixed-Mode Bending test of adhesively bonded GFRP joint

    Czech Academy of Sciences Publication Activity Database

    Ševčík, Martin; Hutař, Pavel; Vassilopoulos, Anastasios P.; Shahverdi, M.

    2015-01-01

    Roč. 9, č. 34 (2015), s. 237-246 ISSN 1971-8993 R&D Projects: GA MŠk(CZ) EE2.3.30.0063; GA ČR GA15-09347S Institutional support: RVO:68081723 Keywords : GFRP materials * Mixed-Mode bending * Fiber bridging * Analytical model Subject RIV: JL - Materials Fatigue, Friction Mechanics

  15. A Mixed-Effects Heterogeneous Negative Binomial Model for Postfire Conifer Regeneration in Northeastern California, USA

    Science.gov (United States)

    Justin S. Crotteau; Martin W. Ritchie; J. Morgan. Varner

    2014-01-01

    Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
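
A negative binomial model suits such overdispersed regeneration counts because its variance mu + mu^2/k exceeds the Poisson variance mu. A small illustration with invented values (not the study's estimates):

```python
import math

def nb_pmf(y, mu, k):
    """Negative binomial pmf parameterised by mean mu and dispersion k."""
    return (math.gamma(y + k) / (math.gamma(k) * math.factorial(y))
            * (k / (k + mu)) ** k * (mu / (k + mu)) ** y)

mu, k = 3.0, 1.5
var = mu + mu ** 2 / k       # 9.0 here, versus the Poisson variance of 3.0
p0 = nb_pmf(0, mu, k)        # probability of observing zero seedlings
```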

  16. A Mixed-Layer Model perspective on stratocumulus steady-states in a perturbed climate

    NARCIS (Netherlands)

    Dal Gesso, S.; Siebesma, A.P.; de Roode, S.R.; van Wessem, J.M.

    2013-01-01

    Equilibrium states of stratocumulus are evaluated for a range of free tropospheric conditions in a Mixed-Layer Model framework using a number of different entrainment formulations. The equilibrium states show that a reduced lower tropospheric stability (LTS) and a drier free troposphere support a

  17. Extraction and identification of mixed pesticides’ Raman signal and establishment of their prediction models

    Science.gov (United States)

    A nondestructive and sensitive method was developed to detect the presence of mixed pesticides of acetamiprid, chlorpyrifos and carbendazim on apples by surface-enhanced Raman spectroscopy (SERS). Self-modeling mixture analysis (SMA) was used to extract and identify the Raman spectra of individual p...

  18. A mixed integer program to model spatial wildfire behavior and suppression placement decisions

    Science.gov (United States)

    Erin J. Belval; Yu Wei; Michael. Bevers

    2015-01-01

    Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...

  19. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    Science.gov (United States)

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  20. Model for transversal turbulent mixing in axial flow in rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.

    1990-01-01

    The present work consists of the development of a model for the transversal eddy diffusivity to account for the effect of turbulent thermal mixing in axial flows in rod bundles. The results were compared to existing correlations that are currently being used in reactor thermal-hydraulic analysis and were considered satisfactory. (author)

  1. Multi-environment QTL mixed models for drought stress adaptation in wheat

    NARCIS (Netherlands)

    Mathews, K.L.; Malosetti, M.; Chapman, S.; McIntyre, L.; Reynolds, M.; Shorter, R.; Eeuwijk, van F.A.

    2008-01-01

    Many quantitative trait loci (QTL) detection methods ignore QTL-by-environment interaction (QEI) and are limited in accommodation of error and environment-specific variance. This paper outlines a mixed model approach using a recombinant inbred spring wheat population grown in six drought stress

  2. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-01-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performe...
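
For reference, near the surface such mixing-length profiles reduce to the classic diabatic (Monin-Obukhov) form u(z) = (u*/kappa)[ln(z/z0) - psi_m(z/L)]. A sketch with the Businger-Dyer correction for unstable stratification; all parameter values are invented, not the Høvsøre estimates:

```python
import math

def psi_m_unstable(z_over_l):
    """Businger-Dyer stability correction for momentum (z/L <= 0)."""
    x = (1.0 - 16.0 * z_over_l) ** 0.25
    return (2.0 * math.log((1.0 + x) / 2.0)
            + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def wind_speed(z, u_star=0.3, z0=0.02, l_obukhov=-100.0, kappa=0.4):
    """Diabatic surface-layer wind profile u(z), unstable conditions."""
    return (u_star / kappa) * (math.log(z / z0)
                               - psi_m_unstable(z / l_obukhov))
```

In the neutral limit (z/L -> 0) the correction vanishes and the familiar logarithmic profile is recovered.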

  3. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  4. Error characterization of CO2 vertical mixing in the atmospheric transport model WRF-VPRM

    Directory of Open Access Journals (Sweden)

    U. Karstens

    2012-03-01

    One of the dominant uncertainties in inverse estimates of regional CO2 surface-atmosphere fluxes is related to model errors in vertical transport within the planetary boundary layer (PBL). In this study we present the results of a synthetic experiment using the atmospheric model WRF-VPRM to realistically simulate transport of CO2 for large parts of the European continent at 10 km spatial resolution. To elucidate the impact of vertical mixing errors on modeled CO2 mixing ratios we simulated a month during the growing season (August 2006) with different commonly used parameterizations of the PBL (the Mellor-Yamada-Janjić (MYJ) and Yonsei University (YSU) schemes). To isolate the effect of transport errors we prescribed the same CO2 surface fluxes for both simulations. Differences in simulated CO2 mixing ratios (model bias) were on the order of 3 ppm during daytime, with larger values at night. We present a simple method to reduce this bias by 70-80% when the true height of the mixed layer is known.

  5. Mixed effects modeling of proliferation rates in cell-based models: consequence for pharmacogenomics and cancer.

    Directory of Open Access Journals (Sweden)

    Hae Kyung Im

    2012-02-01

    The International HapMap project has made publicly available extensive genotypic data on a number of lymphoblastoid cell lines (LCLs). Building on this resource, many research groups have generated a large amount of phenotypic data on these cell lines to facilitate genetic studies of disease risk or drug response. However, one problem that may reduce the usefulness of these resources is the biological noise inherent to cellular phenotypes. We developed a novel method, termed Mixed Effects Model Averaging (MEM), which pools data from multiple sources and generates an intrinsic cellular growth rate phenotype. This intrinsic growth rate was estimated for each of over 500 HapMap cell lines. We then examined the association of this intrinsic growth rate with gene expression levels and found that almost 30% (2,967 out of 10,748) of the genes tested were significant with FDR less than 10%. We probed further to demonstrate evidence of a genetic effect on intrinsic growth rate by determining a significant enrichment of growth-associated genes among genes targeted by top growth-associated SNPs (as eQTLs). The estimated intrinsic growth rate as well as the strength of the association with genetic variants and gene expression traits are made publicly available through a cell-based pharmacogenomics database, PACdb. This resource should enable researchers to explore the mediating effects of proliferation rate on other phenotypes.
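
A toy pooling step in the spirit of combining growth-rate estimates from multiple laboratories: inverse-variance weighting, which is a fixed-effects simplification of the mixed-effects averaging described above, not the MEM method itself. All numbers are invented.

```python
# per-source growth-rate estimates and their sampling variances (invented)
estimates = [0.031, 0.028, 0.035]
variances = [1e-4, 4e-4, 2e-4]

# weight each source by the inverse of its variance
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_var = 1.0 / sum(weights)     # smaller than any single-source variance
```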

  6. Groundwater contamination from an inactive uranium mill tailings pile. 2. Application of a dynamic mixing model

    International Nuclear Information System (INIS)

    Narashimhan, T.N.; White, A.F.; Tokunaga, T.

    1986-01-01

    At Riverton, Wyoming, low-pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates and saturated-unsaturated flow, as well as by transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series the authors presented field data as well as an interpretation based on a static mixing model. As an upper bound, the authors estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work they present the results of a numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNamic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages: the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably well with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as enhancing its computational efficiency.

  7. A Modified Cellular Automaton Approach for Mixed Bicycle Traffic Flow Modeling

    Directory of Open Access Journals (Sweden)

    Xiaonian Shan

    2015-01-01

    Several previous studies have used the Cellular Automaton (CA) approach for the modeling of bicycle traffic flow. However, previous CA models have several limitations, resulting in differences between the simulated and the observed traffic flow features. The primary objective of this study is to propose a modified CA model for simulating the characteristics of mixed bicycle traffic flow. Field data were collected on a physically separated bicycle path in Shanghai, China, and were used to calibrate the CA model using a genetic algorithm. Traffic flow features from simulations of several CA models and from field observations were compared. The results showed that our modified CA model produced more accurate simulations of the fundamental diagram and the passing events in mixed bicycle traffic flow. Based on our model, bicycle traffic flow features, including the fundamental diagram, the number of passing events, and the number of lane changes, were analyzed. We also analyzed the traffic flow features under different traffic densities and traffic compositions on different travel lanes. Results of the study can provide important information for understanding and simulating the operations of mixed bicycle traffic flow.
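
The kind of single-lane update rule that CA bicycle-flow models extend can be sketched with a Nagel-Schreckenberg-style step. This is the generic CA, not the authors' modified model; the ring length, maximum speed and slowdown probability are invented.

```python
import numpy as np

def ca_step(pos, vel, v_max, p_slow, length, rng):
    """One parallel update of positions/velocities on a ring of cells."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % length   # empty cells ahead
    vel = np.minimum(vel + 1, v_max)               # accelerate
    vel = np.minimum(vel, gaps)                    # avoid collision
    slow = rng.random(len(vel)) < p_slow           # random slowdown
    vel = np.maximum(vel - slow.astype(int), 0)
    pos = (pos + vel) % length                     # move
    return pos, vel

rng = np.random.default_rng(7)
length, n = 100, 20
pos = np.sort(rng.choice(length, n, replace=False))
vel = np.zeros(n, dtype=int)
for _ in range(50):
    pos, vel = ca_step(pos, vel, v_max=3, p_slow=0.2, length=length, rng=rng)
flow = vel.mean()      # mean speed after relaxation
```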

  8. Incorporating vehicle mix in stimulus-response car-following models

    Directory of Open Access Journals (Sweden)

    Saidi Siuhi

    2016-06-01

    The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles: "automobile following automobile," "automobile following truck," and "truck following automobile." The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles. Also, the magnitude of the estimated parameters depends on the type of vehicle being driven and/or followed. These results demonstrated the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters, such as stopping sight distance on horizontal and vertical curves, for both automobiles and trucks.
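
The stimulus-response family referred to above is the classic Gazis-Herman-Rothery (GM) form, in which the follower's acceleration after a response time lag is proportional to the leader-follower speed difference. A sketch with illustrative parameters, not the paper's estimates:

```python
def gm_acceleration(alpha, m, l, v_follow, dv, dx):
    """Classic GM stimulus-response rule: a(t+T) = alpha * v^m / dx^l * dv.

    v_follow : follower speed (m/s)
    dv       : leader speed minus follower speed (m/s), the stimulus
    dx       : spacing to the leader (m)
    """
    return alpha * (v_follow ** m) / (dx ** l) * dv

# follower at 20 m/s, leader 2 m/s faster, 30 m spacing (values invented)
a = gm_acceleration(alpha=0.5, m=0.0, l=1.0, v_follow=20.0, dv=2.0, dx=30.0)
```

A negative speed difference (closing in on the leader) yields a deceleration, which is why the paper estimates separate parameter sets for the two regimes.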

  9. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models.
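
The homogeneous-correlation special case mentioned above can be made concrete: a linear mixed model with a random intercept implies a compound-symmetry correlation among one subject's repeated measurements, and this is the structure the D-vine approach generalises. A tiny numpy illustration with invented variance components:

```python
import numpy as np

sigma_b2, sigma_e2 = 2.0, 1.0    # random-intercept and error variances
n_rep = 4                        # repeated measurements per subject

# implied covariance: sigma_b2 everywhere plus sigma_e2 on the diagonal
cov = sigma_b2 * np.ones((n_rep, n_rep)) + sigma_e2 * np.eye(n_rep)
corr = cov / (sigma_b2 + sigma_e2)    # all off-diagonals equal 2/3 here
```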

  10. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    Science.gov (United States)

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

    An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for the simulation of vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in a layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of this nonlocal convective mixing scheme in the atmospheric boundary layer and its impact on the pollutant concentrations calculated with chemical and air-quality models. The scheme was additionally compared against a local eddy-diffusivity scheme (KSC). To examine the performance of the scheme, simulated and measured concentrations of NO(2) and nitrate wet deposition were compared for the year 2002. Both NO(2) concentrations and nitrate wet deposition calculated with the CON scheme are in general higher and closer to the observations than those obtained with the KSC scheme (of the order of 15-20%). The comparison was made for the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), into which both schemes were incorporated.
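
The mass-conservation idea behind the downward mixing rates can be sketched with a toy column model (not the published scheme): the surface layer mixes non-locally upward into every layer, and layer-by-layer downward rates are chosen so that total mass is conserved and a uniform profile stays uniform. All rates and concentrations are invented.

```python
import numpy as np

def acm_step(c, mu, dt):
    """One explicit step of an asymmetric convective mixing toy model.

    c  : concentrations in layers 0..n-1 (equal thickness assumed)
    mu : upward mixing rates from the surface layer into layers 1..n-1
    Downward mixing is a local cascade (layer i -> i-1) whose rates are
    cumulative sums of mu, which makes the scheme mass conserving.
    """
    n = len(c)
    dc = np.zeros(n)
    # non-local upward transport out of the surface layer
    dc[0] -= mu.sum() * c[0]
    dc[1:] += mu * c[0]
    # local downward cascade; md[i-1] is the downward rate of layer i
    md = np.cumsum(mu[::-1])[::-1]
    for i in range(n - 1, 0, -1):
        dc[i] -= md[i - 1] * c[i]
        dc[i - 1] += md[i - 1] * c[i]
    return c + dt * dc

c = np.array([10.0, 4.0, 2.0, 1.0])      # surface-heavy pollutant profile
mu = np.array([0.10, 0.06, 0.04])        # upward rates (1/step), invented
c1 = acm_step(c, mu, dt=0.5)
```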

  11. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  12. A novel modeling approach to the mixing process in twin-screw extruders

    Science.gov (United States)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model of the mixing process in a self-wiping co-rotating twin-screw extruder is proposed, combining statistical techniques and mechanistic modelling. The approach was to examine the mixing process in the local zones via the residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increasing the feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. Increasing the screw speed at constant feed rate was found to narrow the residence time distribution curve, decrease the degree of fill in the extruder, and thus increase the time delay. An experimental investigation was also carried out to validate the modeling approach.
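
The residence-time-distribution quantities used above can be sketched directly: given an exit-tracer concentration c(t), the RTD is E(t) = c(t) / ∫ c dt, and the mean residence time is its first moment. The delay and mixing time below are invented, and a simple delayed exponential stands in for the measured tracer curve:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)        # time axis (s), dt = 0.1 s
dt = t[1] - t[0]
delay, tau = 5.0, 10.0                 # pure time delay, mixing time (s)

# idealised exit-tracer concentration: delayed exponential washout
c = np.where(t >= delay, np.exp(-(t - delay) / tau), 0.0)

E = c / (c.sum() * dt)                 # normalised RTD, integrates to 1
mrt = (t * E).sum() * dt               # mean residence time ~ delay + tau
```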

  13. Analysis of the type II robotic mixed-model assembly line balancing problem

    Science.gov (United States)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.
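
A generic beam search of the kind the heuristic builds on can be sketched for a stripped-down version of the problem: assign tasks to a fixed number of stations so that the largest station time (the cycle time) is small. Precedence constraints, multiple models and robot assignment are omitted; this is not the authors' algorithm, and the task times are invented.

```python
def beam_balance(times, n_stations, beam_width=3):
    """Assign tasks (in the given order) to stations, keeping only the
    beam_width partial assignments with the smallest maximum load."""
    beams = [[0.0] * n_stations]
    for t in times:
        candidates = []
        for loads in beams:
            for s in range(n_stations):
                new = loads.copy()
                new[s] += t
                candidates.append(new)
        candidates.sort(key=max)            # best-first by cycle time
        beams = candidates[:beam_width]     # prune to the beam width
    best = beams[0]
    return max(best), best

cycle, loads = beam_balance([4, 3, 3, 2, 2, 2], n_stations=2, beam_width=4)
```

For this tiny instance the beam finds the optimal balance (two stations of 8 time units each).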

  14. A Mixed Prediction Model of Ground Subsidence for Civil Infrastructures on Soft Ground

    Directory of Open Access Journals (Sweden)

    Kiyoshi Kobayashi

    2012-01-01

    Full Text Available The estimation of ground subsidence processes is an important subject for the asset management of civil infrastructures on soft ground, such as airport facilities. In the planning and design stage, there exist many uncertainties in geotechnical conditions, and it is impossible to estimate the ground subsidence process by deterministic methods. In this paper, sets of sample paths describing ground subsidence processes are generated by use of a one-dimensional consolidation model incorporating inhomogeneous ground subsidence. Given the sample paths, a mixed subsidence model is presented to describe the probabilistic structure behind them. The mixed model can be updated by Bayesian methods based upon newly obtained monitoring data. Concretely speaking, the updated models are estimated with the Markov chain Monte Carlo (MCMC) method, a frontier technique in Bayesian statistics. Through a case study, this paper discusses the applicability of the proposed method and illustrates its possible applications and future work.
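    A minimal sketch of the Bayesian updating step, using a random-walk Metropolis sampler on a simple asymptotic settlement curve; the curve, the noise level and all parameter values are illustrative, not those of the paper.

```python
import math, random

random.seed(1)

def settlement(t, s_inf, tau):
    # simple consolidation-type curve: asymptote s_inf, time constant tau
    return s_inf * (1.0 - math.exp(-t / tau))

# synthetic monitoring data from assumed "true" parameters
true_s, true_tau, sigma = 2.0, 5.0, 0.05
data = [(t, settlement(t, true_s, true_tau) + random.gauss(0, sigma))
        for t in range(1, 11)]

def log_post(s_inf, tau):
    if s_inf <= 0 or tau <= 0:
        return -math.inf                 # flat prior on positive parameters
    return -sum((y - settlement(t, s_inf, tau)) ** 2 for t, y in data) / (2 * sigma ** 2)

# random-walk Metropolis, started from a deliberately poor guess
s, tau = 1.0, 1.0
lp = log_post(s, tau)
samples = []
for i in range(20000):
    s2, tau2 = s + random.gauss(0, 0.05), tau + random.gauss(0, 0.2)
    lp2 = log_post(s2, tau2)
    if math.log(random.random()) < lp2 - lp:
        s, tau, lp = s2, tau2, lp2       # accept the proposal
    if i >= 5000:                        # discard burn-in
        samples.append((s, tau))

post_s = sum(x for x, _ in samples) / len(samples)
```

    The posterior mean recovers the asymptotic settlement despite the poor starting guess; in practice the chain would be re-run as each new batch of monitoring data arrives.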

  15. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    Science.gov (United States)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of Probability Density Function (PDF) method still remain challenging due to the deficiency in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly shear turbulent premixed flames and feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.
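    The hybrid model itself is not given in the abstract; the classical IEM (interaction by exchange with the mean) micro-mixing model, a common baseline that such hybrids build on, can be sketched as follows. The particle count, C_φ and ω are illustrative.

```python
import math, random, statistics

random.seed(0)
C_phi, omega, dt, nsteps = 2.0, 1.0, 0.01, 200

# notional particle ensemble carrying a scalar (e.g. mixture fraction)
phi = [random.random() for _ in range(5000)]
mean0 = statistics.fmean(phi)
var0 = statistics.pvariance(phi)

# IEM: each particle relaxes toward the ensemble mean at rate C_phi*omega/2.
# The exact exponential update keeps the decay rate independent of dt.
decay = math.exp(-0.5 * C_phi * omega * dt)
for _ in range(nsteps):
    m = statistics.fmean(phi)
    phi = [m + (p - m) * decay for p in phi]

final_mean = statistics.fmean(phi)
final_var = statistics.pvariance(phi)

# the mean is conserved and the variance decays as exp(-C_phi * omega * t)
t = nsteps * dt
expected_var = var0 * math.exp(-C_phi * omega * t)
```

    IEM's known weakness, that it relaxes all particles toward a single mean regardless of local flame structure, is exactly what motivates hybrid models of the kind the abstract describes.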

  16. ρ-ω mixing and the Nolen-Schiffer anomaly in the Walecka model

    International Nuclear Information System (INIS)

    Barreiro, L.A.; Galeao, A.P.; Krein, G.

    1995-01-01

    The Nolen-Schiffer anomaly is the long standing discrepancy between theory and experiment of binding energy differences of mirror nuclei. It appears that the anomaly is largely explained by the charge symmetry breaking force generated by the ρ⁰-ω mixing. In the present contribution we present the results of a calculation of the effect of the ρ⁰-ω mixing to the binding energy differences for nuclei with A = 15, 17, 39, 41 using the Walecka model for the nuclear structure. (author)

  17. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic faces some important issues regarding limited fossil resources, greater efficiency in the production of electrical energy, and reduced emission levels of pollutants. These problems can be addressed only by formulating and implementing an energy mix that is rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects is applied. (Authors)
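    As a toy illustration of such a linear optimization, the sketch below picks a least-cost generation mix subject to a demand constraint and an emissions cap, solved here by enumerating a discretized feasible set; all costs, capacities and emission factors are invented for illustration.

```python
from itertools import product

# illustrative per-MWh cost and CO2 emission factors for four sources
sources = ["coal", "gas", "nuclear", "renewables"]
cost     = {"coal": 30.0, "gas": 50.0, "nuclear": 60.0, "renewables": 70.0}
emission = {"coal": 1.0,  "gas": 0.45, "nuclear": 0.0,  "renewables": 0.0}
capacity = {"coal": 60.0, "gas": 40.0, "nuclear": 30.0, "renewables": 20.0}  # MW

demand = 100.0        # MW to be covered
co2_cap = 40.0        # max tonnes/h

best = None
step = 5.0            # discretization of each source's output
levels = {s: [k * step for k in range(int(capacity[s] / step) + 1)] for s in sources}
for mix in product(*(levels[s] for s in sources)):
    gen = dict(zip(sources, mix))
    if sum(gen.values()) != demand:                              # meet demand exactly
        continue
    if sum(gen[s] * emission[s] for s in sources) > co2_cap:     # emission cap
        continue
    c = sum(gen[s] * cost[s] for s in sources)
    if best is None or c < best[0]:
        best = (c, gen)
```

    A production model would of course use a proper LP solver instead of enumeration, but the structure (linear cost, demand balance, emission cap, capacity bounds) is the same.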

  18. Stochastic Modelling and Self Tuning Control of a Continuous Cement Raw Material Mixing System

    Directory of Open Access Journals (Sweden)

    Hannu T. Toivonen

    1980-01-01

    Full Text Available The control of a continuously operating system for cement raw material mixing is studied. The purpose of the mixing system is to maintain a constant composition of the cement raw meal for the kiln despite variations of the raw material compositions. Experimental knowledge of the process dynamics and the characteristics of the various disturbances is used for deriving a stochastic model of the system. The optimal control strategy is then obtained as a minimum variance strategy. The control problem is finally solved using a self-tuning minimum variance regulator, and results from a successful implementation of the regulator are given.
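    A minimal sketch of a self-tuning minimum variance regulator for a first-order ARX plant, with recursive least squares identification; the plant parameters, noise level and dither are illustrative, not those of the cement mixing system.

```python
import random, statistics

random.seed(42)
a_true, b_true, sigma = 0.9, 0.5, 0.1    # plant: y[t+1] = a*y[t] + b*u[t] + e

theta = [0.0, 1.0]                       # estimates [a_hat, b_hat]; b_hat != 0 initially
P = [[100.0, 0.0], [0.0, 100.0]]         # RLS covariance

y, u = 0.0, 0.0
outputs = []
for t in range(2000):
    y_next = a_true * y + b_true * u + random.gauss(0, sigma)
    # --- recursive least squares on regressor x = [y, u] ---
    x = [y, u]
    Px = [P[0][0] * x[0] + P[0][1] * x[1], P[1][0] * x[0] + P[1][1] * x[1]]
    denom = 1.0 + x[0] * Px[0] + x[1] * Px[1]
    err = y_next - (theta[0] * x[0] + theta[1] * x[1])
    theta = [theta[0] + Px[0] * err / denom, theta[1] + Px[1] * err / denom]
    P = [[P[i][j] - Px[i] * Px[j] / denom for j in range(2)] for i in range(2)]
    # --- minimum variance control: drive the one-step prediction to zero ---
    y = y_next
    u = -theta[0] * y / theta[1] if abs(theta[1]) > 1e-6 else 0.0
    u += random.gauss(0, 0.01)           # small dither keeps the loop identifiable
    outputs.append(y)

perf = statistics.pstdev(outputs[1000:])  # should approach the noise level sigma
```

    After the adaptation transient, the closed-loop output standard deviation settles near the irreducible noise level, which is the self-tuning property the paper exploits for raw meal composition control.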

  19. A Mixed Method Research for Finding a Model of Administrative Decentralization

    OpenAIRE

    Tahereh Feizy; Alireza Moghali; Masuod Geramipoor; Reza Zare

    2015-01-01

    One of the critical issues of administrative decentralization in translating theory into practice is understanding its meaning. An important method to identify administrative decentralization is to address how it can be planned and implemented, what its implications are, and how challenges would be overcome. The purpose of this study is to find a model for analyzing and evaluating administrative decentralization, so a mixed method research design was used to explore and confirm the model of Admi...

  20. QCD mixing effects in a gauge invariant quark model for photo- and electroproduction of baryon resonances

    International Nuclear Information System (INIS)

    Zhenping Li; Close, F.E.

    1990-03-01

    The photo- and electroproduction of baryon resonances has been calculated using the Constituent Quark Model with chromodynamics consistent with O(v²/c²) for the quarks. We find that the successes of the nonrelativistic quark model are preserved, some problems are removed and that QCD mixing effects may become important with increasing q² in electroproduction. For the first time both spectroscopy and transitions receive a unified treatment with a single set of parameters. (author)

  1. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  2. Mixing methodology, nursing theory and research design for a practice model of district nursing advocacy.

    Science.gov (United States)

    Reed, Frances M; Fitzgerald, Les; Rae, Melanie

    2016-01-01

    To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.

  3. Fermion masses and flavor mixings in a model with S4 flavor symmetry

    International Nuclear Information System (INIS)

    Ding Guijun

    2010-01-01

    We present a supersymmetric model of quark and lepton based on S₄×Z₃×Z₄ flavor symmetry. The S₄ symmetry is broken down to Klein four and Z₃ subgroups in the neutrino and the charged lepton sectors, respectively. Tri-Bimaximal mixing and the charged lepton mass hierarchies are reproduced simultaneously at leading order. Moreover, a realistic pattern of quark masses and mixing angles is generated with the exception of the mixing angle between the first two generations, which requires a small accidental enhancement. It is remarkable that the mass hierarchies are controlled by the spontaneous breaking of flavor symmetry in our model. The next to leading order contributions are studied, all the fermion masses and mixing angles receive corrections of relative order λ_c² with respect to the leading order results. The phenomenological consequences of the model are analyzed, the neutrino mass spectrum can be normal hierarchy or inverted hierarchy, and the combined measurement of the 0ν2β decay effective mass m_ββ and the lightest neutrino mass can distinguish the normal hierarchy from the inverted hierarchy.

  4. Stochastic Mixed-Effects Parameters Bertalanffy Process, with Applications to Tree Crown Width Modeling

    Directory of Open Access Journals (Sweden)

    Petras Rupšys

    2015-01-01

    Full Text Available A stochastic modeling approach based on the Bertalanffy law has gained interest due to its ability to produce more accurate results than deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using the maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data on Scots pine trees provided by the Lithuanian National Forest Inventory to illustrate our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.
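    A Bertalanffy-type SDE with multiplicative noise can be simulated with a simple Euler-Maruyama scheme, as in this sketch; the parameter values are illustrative, and the paper's crown-width model and estimation procedure are more elaborate.

```python
import math, random

random.seed(7)
# illustrative Bertalanffy drift dX = (eta*X^(2/3) - kappa*X) dt + sigma*X dB
eta, kappa, sigma = 1.0, 0.5, 0.05
x0, dt, T, n_paths = 0.5, 0.01, 20.0, 400
steps = int(T / dt)

def simulate_path():
    x = x0
    for _ in range(steps):
        drift = eta * x ** (2.0 / 3.0) - kappa * x
        x += drift * dt + sigma * x * random.gauss(0, math.sqrt(dt))
        x = max(x, 1e-9)              # keep the state positive
    return x

finals = [simulate_path() for _ in range(n_paths)]
mean_final = sum(finals) / n_paths

# closed-form deterministic Bertalanffy solution for comparison:
# y = x^(1/3) obeys dy/dt = (eta - kappa*y)/3
det_final = (eta / kappa
             + (x0 ** (1 / 3) - eta / kappa) * math.exp(-kappa * T / 3)) ** 3
```

    With weak noise the ensemble mean tracks the deterministic Bertalanffy curve closely; mixed-effects parameters would enter by drawing eta (or kappa) per tree from a population distribution.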

  5. A Mixed Land Cover Spatio-temporal Data Model Based on Object-oriented and Snapshot

    Directory of Open Access Journals (Sweden)

    LI Yinchao

    2016-07-01

    Full Text Available The spatio-temporal data model (STDM) is one of the hot topics in the domains of spatio-temporal databases and data analysis. There is a common view that a universal STDM is always highly complex due to the varied nature of spatio-temporal data. In this article, a mixed STDM is proposed based on object-oriented and snapshot models for modelling and analyzing land cover change (LCC). This model uses the object-oriented STDM to describe the spatio-temporal processes of land cover patches and organize their spatial and attributive properties. In the meantime, it uses the snapshot STDM to present the overall spatio-temporal distribution of LCC via snapshot images. The two types of models are spatially and temporally combined into a mixed version. In addition to presenting the spatio-temporal events themselves, this model can express the transformation events between different classes of spatio-temporal objects. It can be used to create databases of historical LCC data and to perform spatio-temporal statistics, simulation and data mining on them. In this article, LCC data from Heilongjiang province are used as a case study to validate the spatio-temporal data management and analysis abilities of the mixed STDM, including database creation, spatio-temporal query, global evolution analysis and the expression of patch spatio-temporal processes.
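    A minimal sketch of the two coupled views, an object-oriented patch history plus a derived snapshot, might look like this; class names and data are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class LandPatch:
    """Object-oriented part: one land cover patch with its class history."""
    patch_id: int
    history: list = field(default_factory=list)   # [(year, land_class), ...]

    def land_class(self, year):
        """Class of the patch valid at `year` (None before the first record)."""
        cls = None
        for y, c in sorted(self.history):
            if y <= year:
                cls = c
        return cls

    def transitions(self):
        """Transformation events as (from_class, to_class, year)."""
        h = sorted(self.history)
        return [(h[i][1], h[i + 1][1], h[i + 1][0])
                for i in range(len(h) - 1) if h[i][1] != h[i + 1][1]]

def snapshot(patches, year):
    """Snapshot part: overall class distribution at a given year."""
    counts = {}
    for p in patches:
        c = p.land_class(year)
        counts[c] = counts.get(c, 0) + 1
    return counts

patches = [
    LandPatch(1, [(2000, "forest"), (2010, "cropland")]),
    LandPatch(2, [(2000, "forest")]),
]
```

    The object view supports per-patch process queries (e.g. forest-to-cropland events), while the snapshot view supports whole-region statistics at any year, which is the combination the mixed STDM is after.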

  6. Modeling policy mix to improve the competitiveness of Indonesian palm oil industry

    Energy Technology Data Exchange (ETDEWEB)

    Silitonga, R. Y.H.; Siswanto, J.; Simatupang, T.; Bahagia, S.N.

    2016-07-01

    The purpose of this research is to develop a model that will explain the impact of government policies to the competitiveness of palm oil industry. The model involves two commodities in this industry, namely crude palm oil (CPO) and refined palm oil (RPO), each has different added value. The model built will define the behavior of government in controlling palm oil industry, and their interactions with macro-environment, in order to improve the competitiveness of the industry. Therefore the first step was to map the main activities in this industry using value chain analysis. After that a conceptual model was built, where the output of the model is competitiveness of the industry based on market share. The third step was model formulation. The model is then utilized to simulate the policy mix given by government in improving the competitiveness of Palm Oil Industry. The model was developed using only some policies which give direct impact to the competitiveness of the industry. For macro environment input, only price is considered in this model. The model can simulate the output of the industry for various government policies mix given to the industry. This research develops a model that can represent the structure and relationship between industry, government and macro environment, using value chain analysis and hierarchical multilevel system approach. (Author)

  7. Modeling policy mix to improve the competitiveness of Indonesian palm oil industry

    Directory of Open Access Journals (Sweden)

    Roland Y H Silitonga

    2016-04-01

    Full Text Available Purpose: The purpose of this research is to develop a model that will explain the impact of government policies to the competitiveness of palm oil industry. The model involves two commodities in this industry, namely crude palm oil (CPO and refined palm oil (RPO, each has different added value. Design/methodology/approach: The model built will define the behavior of government in controlling palm oil industry, and their interactions with macro-environment, in order to improve the competitiveness of the industry. Therefore the first step was to map the main activities in this industry using value chain analysis. After that a conceptual model was built, where the output of the model is competitiveness of the industry based on market share. The third step was model formulation. The model is then utilized to simulate the policy mix given by government in improving the competitiveness of Palm Oil Industry. Research limitations/implications: The model was developed using only some policies which give direct impact to the competitiveness of the industry. For macro environment input, only price is considered in this model. Practical implications: The model can simulate the output of the industry for various government policies mix given to the industry. Originality/value: This research develops a model that can represent the structure and relationship between industry, government and macro environment, using value chain analysis and hierarchical multilevel system approach.

  8. Two-equation and multi-fluid turbulence models for Rayleigh–Taylor mixing

    International Nuclear Information System (INIS)

    Kokkinakis, I.W.; Drikakis, D.; Youngs, D.L.; Williams, R.J.R.

    2015-01-01

    Highlights: • We present a new improved version of the K–L model. • The improved K–L is found in good agreement with the multi-fluid model and ILES. • The study concerns Rayleigh–Taylor flows at initial density ratios 3:1 and 20:1. - Abstract: This paper presents a new, improved version of the K–L model, as well as a detailed investigation of K–L and multi-fluid models with reference to high-resolution implicit large eddy simulations of compressible Rayleigh–Taylor mixing. The accuracy of the models is examined for different interface pressures and specific heat ratios for Rayleigh–Taylor flows at initial density ratios 3:1 and 20:1. It is shown that the original version of the K–L model requires modifications in order to provide comparable results to the multi-fluid model. The modifications concern the addition of an enthalpy diffusion term to the energy equation; the formulation of the turbulent kinetic energy (source) term in the K equation; and the calculation of the local Atwood number. The proposed modifications significantly improve the results of the K–L model, which are found in good agreement with the multi-fluid model and implicit large eddy simulations with respect to the self-similar mixing width; peak turbulent kinetic energy growth rate, as well as volume fraction and turbulent kinetic energy profiles. However, a key advantage of the two-fluid model is that it can represent the degree of molecular mixing in a direct way, by transferring mass between the two phases. The limitations of the single-fluid K–L model as well as the merits of more advanced Reynolds-averaged Navier–Stokes models are also discussed throughout the paper.

  9. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S A; McCoy, R B; Morrison, H; Ackerman, A; Avramov, A; deBoer, G; Chen, M; Cole, J; DelGenio, A; Golaz, J; Hashino, T; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; Luo, Y; McFarquhar, G; Menon, S; Neggers, R; Park, S; Poellot, M; von Salzen, K; Schmidt, J; Sednev, I; Shipway, B; Shupe, M; Spangenberg, D; Sud, Y; Turner, D; Veron, D; Falk, M; Foster, M; Fridlind, A; Walker, G; Wang, Z; Wolf, A; Xie, S; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of −15 °C. The observed liquid water path of around 160 g m⁻² was about two-thirds of the adiabatic value and much greater than the mass of ice crystal precipitation which when integrated from the surface to cloud top was around 15 g m⁻². The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is some evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.

  10. Modeling of hot-mix asphalt compaction : a thermodynamics-based compressible viscoelastic model

    Science.gov (United States)

    2010-12-01

    Compaction is the process of reducing the volume of hot-mix asphalt (HMA) by the application of external forces. As a result of compaction, the volume of air voids decreases, aggregate interlock increases, and interparticle friction increases. The qu...

  11. A general mixed boundary model reduction method for component mode synthesis

    International Nuclear Information System (INIS)

    Voormeeren, S N; Van der Valk, P L C; Rixen, D J

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and of the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures, a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.

  12. Unlearning of Mixed States in the Hopfield Model —Extensive Loading Case—

    Science.gov (United States)

    Hayashi, Kao; Hashimoto, Chinami; Kimoto, Tomoyuki; Uezu, Tatsuya

    2018-05-01

    We study the unlearning of mixed states in the Hopfield model for the extensive loading case. Firstly, we focus on case I, where several embedded patterns are correlated with each other, whereas the rest are uncorrelated. Secondly, we study case II, where patterns are divided into clusters in such a way that patterns in any cluster are correlated but those in two different clusters are not correlated. By using the replica method, we derive the saddle point equations for order parameters under the ansatz of replica symmetry. The same equations are also derived by self-consistent signal-to-noise analysis in case I. In both cases I and II, we find that when the correlation between patterns is large, the network loses its ability to retrieve the embedded patterns and, depending on the parameters, a confused memory, which is a mixed state and/or spin glass state, emerges. By unlearning the mixed state, the network acquires the ability to retrieve the embedded patterns again in some parameter regions. We find that to delete the mixed state and to retrieve the embedded patterns, the coefficient of unlearning should be chosen appropriately. We perform Markov chain Monte Carlo simulations and find that the simulation and theoretical results agree reasonably well, except for the spin glass solution in a parameter region due to the replica symmetry breaking. Furthermore, we find that the existence of many correlated clusters reduces the stabilities of both embedded patterns and mixed states.
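    The effect of unlearning on a mixed state can be illustrated directly: subtracting a Hebbian term built from the spurious mixed state raises that state's energy by ε(N−1)/2, more than it raises the energy of any embedded pattern. A small sketch, with network size, unlearning coefficient and patterns chosen for illustration:

```python
import random

random.seed(3)
N, P = 120, 3

# random binary patterns and Hebbian couplings (zero self-coupling)
xi = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]
W = [[sum(xi[mu][i] * xi[mu][j] for mu in range(P)) / N if i != j else 0.0
      for j in range(N)] for i in range(N)]

def sign(x):
    return 1 if x >= 0 else -1

# symmetric 3-pattern mixed state (site-wise majority of the patterns)
mixed = [sign(xi[0][i] + xi[1][i] + xi[2][i]) for i in range(N)]

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

e_mixed_before = energy(mixed)
e_pattern_before = energy(xi[0])

# unlearning: subtract a Hebbian term built from the mixed state
eps = 0.1
for i in range(N):
    for j in range(N):
        if i != j:
            W[i][j] -= eps * mixed[i] * mixed[j] / N

# the mixed state is destabilized far more than the embedded pattern:
# its energy rises by eps*(N-1)/2, the pattern's only by eps*(o**2 - N)/(2N),
# where o < N is the pattern's overlap with the mixed state
d_mixed = energy(mixed) - e_mixed_before
d_pattern = energy(xi[0]) - e_pattern_before
```

    This is the finite-loading intuition behind the paper's result: a well-chosen unlearning coefficient raises the mixed state out of the retrieval basin while leaving the embedded patterns comparatively stable.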

  13. Mixing of ν_e and ν_μ in SO(10) models

    International Nuclear Information System (INIS)

    Milton, K.; Nandi, S.; Tanaka, K.

    1982-01-01

    We found previously in SO(10) grand unified theories that if the neutrinos have a Dirac mass and a right-handed Majorana mass (≈10¹⁵ GeV) but no left-handed Majorana mass, there is small ν_e mixing but ν_μ-ν_τ mixing can be substantial. We reexamine this problem on the basis of a formalism that assumes that the up, down, lepton, and neutrino mass matrices arise from a single complex 10 and a single 126 Higgs boson. This formalism determines the Majorana mass matrix in terms of quark mass matrices. Adopting three different sets of quark mass matrices that produce acceptable fermion mass ratios and Cabibbo mixing, we obtain results consistent with the above; however, in the optimum case, ν_e-ν_μ mixing can be of the order of the Cabibbo angle. In an extension of this model wherein the Witten mechanism generates the Majorana mass, we illustrate quantitatively how the parameter characterizing the Majorana sector must be tuned in order to achieve large ν_e-ν_μ mixing.

  14. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
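    As a much-simplified illustration of the reduced-basis idea, the sketch below orthonormalizes snapshots drawn from a low-dimensional solution manifold and reconstructs a new solution by projection; the paper's GMsFE construction and sampling strategies are far more involved, and all data here are synthetic.

```python
import random

random.seed(5)
dim, n_snapshots = 40, 12

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(alpha, x, y):
    return [alpha * xi + yi for xi, yi in zip(x, y)]

# snapshots drawn from a hidden 3-dimensional "solution manifold"
modes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(3)]
def solution(params):
    return [sum(p * m[i] for p, m in zip(params, modes)) for i in range(dim)]

snapshots = [solution([random.random() for _ in range(3)]) for _ in range(n_snapshots)]

# greedy-style basis construction: orthonormalize snapshots,
# keeping only directions with significant new content
basis = []
for s in snapshots:
    r = s[:]
    for b in basis:
        r = axpy(-dot(b, r), b, r)       # remove the part already spanned
    norm = dot(r, r) ** 0.5
    if norm > 1e-8:
        basis.append([x / norm for x in r])

# a new solution is reproduced by projection onto the reduced basis
test_sol = solution([0.2, 0.7, 0.4])
approx = [0.0] * dim
for b in basis:
    approx = axpy(dot(b, test_sol), b, approx)
err = max(abs(x - y) for x, y in zip(test_sol, approx))
```

    The basis stops growing once the snapshots' span is captured, so the online cost scales with the (small) reduced dimension rather than with the full model, which is the efficiency argument the paper makes.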

  15. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A multiphase non-linear mixed-effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and the risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation with readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830

  16. An efficient model for predicting mixing lengths in serial pumping of petroleum products

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Renan Martins [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas. Div. de Explotacao]. E-mail: renan@cenpes.petrobras.com.br; Rachid, Felipe Bastos de Freitas [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: rachid@mec.uff.br; Araujo, Jose Henrique Carneiro de [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Ciencia da Computacao]. E-mail: jhca@dcc.ic.uff.br

    2000-07-01

    This paper presents a new model for estimating the mixing volumes that arise in batch transfers in multiproduct pipelines. The novel features of the model are the incorporation of the flow rate variation with time and the use of a more precise effective dispersion coefficient, which is considered to depend on the concentration. The governing equation of the model forms a non-linear initial value problem that is solved using a predictor-corrector finite difference method. A comparison among the theoretical predictions of the proposed model, a field test and other classical procedures shows that the model exhibits the best estimate over the whole range of admissible concentrations investigated. (author)
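    In the classical constant-coefficient limit that such models refine, the interface between two batched products follows the analytic advection-dispersion solution, and the mixing length falls directly out of it. A sketch with illustrative pipeline values (the paper's model, with time-varying flow rate and concentration-dependent dispersion, requires the numerical scheme described above):

```python
import math

# classical 1-D advection-dispersion solution for a step change in product:
# C(x, t) = 0.5 * erfc((x - v*t) / (2*sqrt(D*t)))
v = 1.5        # mean flow velocity, m/s          (illustrative)
D = 0.05       # effective dispersion coeff, m^2/s (here taken constant)
t = 3600.0     # elapsed pumping time, s

def concentration(x):
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

# mixing zone: where the trailing product's concentration is between 1% and 99%
dx = 0.1
xs = [i * dx for i in range(int(2 * v * t / dx))]
zone = [x for x in xs if 0.01 < concentration(x) < 0.99]
mix_len = zone[-1] - zone[0] if zone else 0.0   # grows like sqrt(D * t)
```

    For these numbers the 1-99% mixing zone is roughly 88 m long after one hour; the sqrt(D*t) growth is why even modest dispersion matters over long multiproduct transfers.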

  17. Euler-Lagrange CFD modelling of unconfined gas mixing in anaerobic digestion.

    Science.gov (United States)

    Dapelo, Davide; Alberini, Federico; Bridgeman, John

    2015-11-15

    A novel Euler-Lagrangian (EL) computational fluid dynamics (CFD) finite volume-based model to simulate the gas mixing of sludge for anaerobic digestion is developed and described. Fluid motion is driven by momentum transfer from bubbles to liquid. Model validation is undertaken by assessing the flow field in a lab-scale model with particle image velocimetry (PIV). Conclusions are drawn about the upscaling and applicability of the model to full-scale problems, and recommendations are given for optimum application. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia (Univ. of Zaragoza, Zaragoza (Spain))

    2008-08-15

in geochemistry, hydrochemistry, hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version, mainly based on 2.2 data and complementary 2.3 data, was carried out during September 2006 to December 2007. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling, including transport modelling. During regular ChemNet meetings the results have been presented and discussed. This report presents the modelling work performed by the University of Zaragoza group as part of the work planned for Forsmark during stages 2.2 and 2.3. The chemical characteristics of the groundwaters in the Forsmark and Laxemar areas are the result of a complex mixing process driven by the input of different recharge waters since the last glaciation. The successive penetration at different depths of dilute glacial melt-waters, Littorina Sea waters and dilute meteoric waters has triggered complex density- and hydraulically-driven flows that have mixed them with long-residence-time, highly saline waters present in the fractures and in the rock matrix. A general description of the main characteristics and processes controlling the hydrogeochemical evolution with depth in the Forsmark groundwater system is presented in this report: The hydrochemical characteristics and evolution of the near-surface waters (up to 20 m depth) are mainly determined by weathering reactions and are especially affected by the presence of limestones. The biogenic CO₂ input (derived from decay of organic matter and root respiration) and the associated weathering of carbonates control the pH and the concentrations of Ca and HCO₃⁻ in the near-surface environment. Current seasonal variability of the CO₂ input produces variable but high calcium and bicarbonate contents in the Forsmark near-surface waters: up to 240 mg/L Ca and 150 to 900 mg/L HCO₃⁻.

  19. Water-rock interaction modelling and uncertainties of mixing modelling. SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Gimeno, Maria J.; Auque, Luis F.; Gomez, Javier B.; Acero, Patricia

    2008-08-01

geochemistry, hydrochemistry, hydrogeochemistry, microbiology, geomicrobiology, analytical chemistry etc. The resulting site descriptive model version, mainly based on 2.2 data and complementary 2.3 data, was carried out during September 2006 to December 2007. Several groups within ChemNet were involved and the evaluation was conducted independently using different approaches ranging from expert knowledge to geochemical and mathematical modelling, including transport modelling. During regular ChemNet meetings the results have been presented and discussed. This report presents the modelling work performed by the University of Zaragoza group as part of the work planned for Forsmark during stages 2.2 and 2.3. The chemical characteristics of the groundwaters in the Forsmark and Laxemar areas are the result of a complex mixing process driven by the input of different recharge waters since the last glaciation. The successive penetration at different depths of dilute glacial melt-waters, Littorina Sea waters and dilute meteoric waters has triggered complex density- and hydraulically-driven flows that have mixed them with long-residence-time, highly saline waters present in the fractures and in the rock matrix. A general description of the main characteristics and processes controlling the hydrogeochemical evolution with depth in the Forsmark groundwater system is presented in this report: The hydrochemical characteristics and evolution of the near-surface waters (up to 20 m depth) are mainly determined by weathering reactions and are especially affected by the presence of limestones. The biogenic CO₂ input (derived from decay of organic matter and root respiration) and the associated weathering of carbonates control the pH and the concentrations of Ca and HCO₃⁻ in the near-surface environment. Current seasonal variability of the CO₂ input produces variable but high calcium and bicarbonate contents in the Forsmark near-surface waters: up to 240 mg/L Ca and 150 to 900 mg/L HCO₃⁻.

  20. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  1. On the TAP Free Energy in the Mixed p-Spin Models

    Science.gov (United States)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2018-05-01

Thouless et al. (Phys Mag 35(3):593-601, 1977) derived a representation for the free energy of the Sherrington-Kirkpatrick model, called the TAP free energy, written as the difference of the energy and entropy on the extended configuration space of local magnetizations with an Onsager correction term. In the setting of mixed p-spin models with Ising spins, we prove that the free energy can indeed be written as the supremum of the TAP free energy over the space of local magnetizations whose Edwards-Anderson order parameter (self-overlap) is to the right of the support of the Parisi measure. Furthermore, for generic mixed p-spin models, we prove that the free energy is equal to the TAP free energy evaluated on the local magnetization of any pure state.

  2. Dark matter and electroweak phase transition in the mixed scalar dark matter model

    Science.gov (United States)

    Liu, Xuewen; Bian, Ligong

    2018-03-01

We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs that comprises the CP-even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.

  3. Mixed integer linear programming model for dynamic supplier selection problem considering discounts

    Directory of Open Access Journals (Sweden)

    Adi Wicaksono Purnawan

    2018-01-01

Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, the supplier identified for one period is not necessarily the same one that supplies the same product in the next period. Mixed integer linear programming (MILP) was therefore developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved using Lingo 16.
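The decision structure of such a problem can be illustrated with a toy instance. This is not the paper's MILP (which would be handed to a solver such as Lingo 16); it is a tiny exhaustive enumeration, with invented prices, capacities and an all-units quantity discount above a break quantity, choosing one supplier per period at minimum cost.

```python
from itertools import product

demand = [100, 80, 120]   # units required in each period (illustration values)
suppliers = {
    "S1": {"price": 5.0, "break": 100, "disc_price": 4.5, "capacity": 120, "order_cost": 50},
    "S2": {"price": 4.8, "break": 110, "disc_price": 4.6, "capacity": 110, "order_cost": 70},
}

def period_cost(s, qty):
    """Cost of sourcing qty units from supplier s in one period."""
    info = suppliers[s]
    if qty > info["capacity"]:
        return float("inf")   # infeasible: supplier capacity exceeded
    unit = info["disc_price"] if qty >= info["break"] else info["price"]
    return info["order_cost"] + unit * qty

# Enumerate every assignment of one supplier per period and keep the cheapest.
best = min(
    (sum(period_cost(s, q) for s, q in zip(plan, demand)), plan)
    for plan in product(suppliers, repeat=len(demand))
)
```

A real DSSP instance adds inventory carry-over, multiple products and order splitting, which is exactly why a MILP formulation and solver are needed; enumeration only works at toy scale.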

  4. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  5. Mixed layer modeling in the East Pacific warm pool during 2002

    Science.gov (United States)

    Van Roekel, Luke P.; Maloney, Eric D.

    2012-06-01

Two vertical mixing models (the modified dynamic instability model of Price et al., PWP, and the K-Profile Parameterization, KPP) are used to analyze intraseasonal sea surface temperature (SST) variability in the northeast tropical Pacific near the Costa Rica Dome during the boreal summer of 2002. Anomalies in surface latent heat flux and shortwave radiation are the root cause of the three intraseasonal SST oscillations of order 1°C amplitude that occur during this time, although surface stress variations have a significant impact on the third event. A slab ocean model that uses observed monthly varying mixed layer depths and accounts for penetrating shortwave radiation appears to simulate the first two SST oscillations well, but not the third. The third oscillation is associated with small mixed layer depths, which impact these intraseasonal oscillations. These results suggest that a slab ocean coupled to an atmospheric general circulation model, as used in previous studies of east Pacific intraseasonal variability, may not be entirely adequate to realistically simulate SST variations. Further, while most of the results from the PWP and KPP models are similar, some important differences that emerge are discussed.

  6. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    International Nuclear Information System (INIS)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.

    2017-01-01

Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H], unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  7. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Brett H. [PITT PACC, Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A., E-mail: andrewsb@pitt.edu [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)

    2017-02-01

Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H], unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  8. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  9. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If the coefficient is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value apart from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects: the model can be estimated using the dummy-variable approach. We derive explicit formulas for the coefficient in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
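The random intercept special case can be illustrated numerically. The sketch below is only in the spirit of the abstract, not the authors' exact formula: it estimates the share of the outcome variance attributable to between-cluster (random-intercept) variation, σ²_b / (σ²_b + σ²_e), using the standard one-way ANOVA method-of-moments variance components on simulated data.

```python
import random

random.seed(1)
m, n = 200, 20                      # clusters, observations per cluster
sigma_b, sigma_e = 2.0, 1.0        # true random-effect and residual SDs
data = []
for _ in range(m):
    b = random.gauss(0.0, sigma_b)                     # cluster random intercept
    data.append([10.0 + b + random.gauss(0.0, sigma_e) for _ in range(n)])

grand = sum(sum(row) for row in data) / (m * n)
means = [sum(row) / n for row in data]
msb = n * sum((mi - grand) ** 2 for mi in means) / (m - 1)                # between-cluster MS
msw = sum((x - mi) ** 2 for row, mi in zip(data, means) for x in row) / (m * (n - 1))  # within
var_b = max((msb - msw) / n, 0.0)  # method-of-moments estimate of sigma_b^2
r2_random = var_b / (var_b + msw)  # share of variance due to random effects
```

With the chosen true values the proportion should land near σ²_b / (σ²_b + σ²_e) = 4/5 = 0.8, so a value near 0 would indeed signal that the random effects could be dropped.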

  10. Investigation of coolant mixing in WWER-440/213 RPV with improved turbulence model

    International Nuclear Information System (INIS)

    Kiss, B.; Aszodi, A.

    2011-01-01

A detailed and complex RPV model of the WWER-440/213 type reactor was developed at the Budapest University of Technology and Economics Institute of Nuclear Techniques in previous years. This model contains the main structural elements such as the inlet and outlet nozzles, guide baffles of the hydro-accumulator coolant, alignment drifts, perforated plates, brake- and guide-tube chamber and a simplified core. With the new vessel model a series of parameter studies were performed considering turbulence models, discretization schemes, and modeling methods with ANSYS CFX. In the course of the parameter studies the coolant mixing was investigated in the RPV. The coolant flow was 'traced' with different scalar concentrations at the inlet nozzles and its distribution was calculated at the core bottom. The simulation results were compared with mixing-factor data measured at the Paks NPP (available from the FLOMIX project). Based on the comparison the SST turbulence model, which unifies the advantages of the two-equation k-ω and k-ε models, was chosen for the further simulations. The most widely used turbulence models are Reynolds-averaged Navier-Stokes models that are based on time-averaging of the equations. Time-averaging filters out all turbulent scales from the simulation, and the effect of turbulence on the mean flow is then re-introduced through appropriate modeling assumptions. Because of this characteristic of the SST turbulence model, a decision was made in 2011 to investigate the coolant mixing with an improved turbulence model as well. The hybrid SAS-SST turbulence model was chosen, which is capable of resolving large-scale turbulent structures without the time and grid-scale resolution restrictions of LES, often allowing the use of existing grids created for Reynolds-averaged Navier-Stokes simulations. As a first step the coolant mixing was investigated in the downcomer only, where eddies occur after the loop connections because of the steep change in flow direction.

  11. TRANSP modeling of minority ion sawtooth mixing in ICRF + NBI heated discharges in TFTR

    International Nuclear Information System (INIS)

    Goldfinger, R.C.; Batchelor, D.B.; Murakami, M.; Phillips, C.K.; Budny, R.; Hammett, G.W.; McCune, D.M.; Wilson, J.R.; Zarnstorff, M.C.

    1995-01-01

Time independent code analysis indicates that the sawtooth relaxation phenomenon affects RF power deposition profiles through the mixing of fast ions. Predicted central electron heating rates are substantially above experimental values unless sawtooth relaxation is included. The PPPL time dependent transport analysis code, TRANSP, currently has a model to redistribute thermal electron and ion species, energy densities, plasma current density, and fast ions from neutral beam injection at each sawtooth event using the Kadomtsev (3) prescription. Results are presented here in which the set of models is extended to include sawtooth mixing effects on the hot ion population generated from ICRF heating. The ICRF-generated hot ion distribution function, f(v∥, v⊥), which is strongly peaked at the center before each sawtooth, is replaced throughout the sawtooth mixing volume by its volume-averaged value at each sawtooth. The modified f(v∥, v⊥) is then used to recalculate the collisional transfer of power from the minority species to the background species. Results demonstrate that neglect of sawtooth mixing of ICRF-induced fast ions leads to prediction of faster central electron reheat rates than are measured experimentally.
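The volume-averaging prescription described in the abstract can be sketched in a few lines. This is a schematic illustration only (a 1-D radial profile with a made-up Gaussian peak, not TRANSP's actual data structures): inside the mixing radius the peaked fast-ion profile is replaced by its volume average, conserving the particle content of the mixing volume; in cylindrical geometry the volume element is proportional to r dr.

```python
import math

nr = 100
r = [(i + 0.5) / nr for i in range(nr)]             # normalized minor radius
profile = [math.exp(-(ri / 0.2) ** 2) for ri in r]  # centrally peaked fast-ion density
r_mix = 0.5                                         # sawtooth mixing radius (illustration)

w = list(r)                                         # volume weights ~ r (cylindrical shell)
inside = [i for i in range(nr) if r[i] <= r_mix]
total = sum(profile[i] * w[i] for i in inside)      # particle content inside r_mix
avg = total / sum(w[i] for i in inside)             # volume-averaged value
mixed = [avg if i in inside else profile[i] for i in range(nr)]  # flattened profile
```

The flattened profile keeps the same integral inside the mixing radius while removing the central peak, which is what lowers the post-crash collisional power transfer to the core electrons.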

  12. Modeling and analysis of ORNL horizontal storage tank mobilization and mixing

    International Nuclear Information System (INIS)

    Mahoney, L.A.; Terrones, G.; Eyler, L.L.

    1994-06-01

The retrieval and treatment of radioactive sludges that are stored in tanks constitute a prevalent problem at several US Department of Energy sites. The tanks typically contain a settled sludge layer with non-Newtonian rheological characteristics covered by a layer of supernatant. The first step in retrieval is the mobilization and mixing of the supernatant and sludge in the storage tanks. Submerged jets have been proposed to achieve sludge mobilization in tanks, including the 189 m³ (50,000 gallon) Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL) and the planned 378 m³ (100,000 gallon) tanks being designed as part of the MVST Capacity Increase Project (MVST-CIP). This report focuses on the modeling of mixing and mobilization in horizontal cylindrical tanks like those of the MVST design using submerged, recirculating liquid jets. The computer modeling of the mobilization and mixing processes uses the TEMPEST computational fluid dynamics program (Trend and Eyler 1992). The goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents

  13. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    Science.gov (United States)

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  14. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network, and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories.
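The fixed-vs-mixed contrast can be sketched on synthetic data. This is a schematic, not the paper's pipeline: "fixed" RSA correlates the model's representational dissimilarity matrix (RDM) directly with the response RDM, while "mixed" RSA first fits a linear reweighting of model features to predict each response channel on a training image set (ridge regression here, an assumed choice) and then compares RDMs on held-out images.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 80, 20, 50, 30
W_true = rng.normal(size=(n_feat, n_vox))       # unknown feature-to-voxel mapping
F_train = rng.normal(size=(n_train, n_feat))    # model features, training images
F_test = rng.normal(size=(n_test, n_feat))      # model features, held-out images
B_train = F_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))  # "voxel" responses
B_test = F_test @ W_true + 0.1 * rng.normal(size=(n_test, n_vox))

lam = 1.0   # ridge penalty (assumed value)
W_hat = np.linalg.solve(F_train.T @ F_train + lam * np.eye(n_feat), F_train.T @ B_train)

def rdm(X):
    """Upper triangle of the pairwise Euclidean-distance matrix (one value per image pair)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices_from(d, k=1)]

fixed_r = np.corrcoef(rdm(F_test), rdm(B_test))[0, 1]          # fixed RSA
mixed_r = np.corrcoef(rdm(F_test @ W_hat), rdm(B_test))[0, 1]  # mixed RSA
```

Because the simulated responses really are a linear mixture of the features, the reweighted (mixed) RDM tracks the response RDM almost perfectly, mirroring the case where mixing is essential.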

  15. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  16. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    Science.gov (United States)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  17. Model and measurements of linear mixing in thermal IR ground leaving radiance spectra

    Science.gov (United States)

    Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna

    2007-10-01

    Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
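The linear mixing described in this abstract can be written directly from Planck's law. The sketch below is an illustration with invented emissivities, temperatures and area fractions: the pixel's ground-leaving spectral radiance is the area-weighted sum of each material's emissivity times the blackbody radiance at that material's temperature (reflected downwelling radiance is ignored for simplicity).

```python
import math

C1 = 1.191042e8   # first radiation constant, W um^4 m^-2 sr^-1 (wavelength in microns)
C2 = 1.4387752e4  # second radiation constant, um K

def planck(wavelength_um, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 um^-1."""
    return C1 / (wavelength_um ** 5 * (math.exp(C2 / (wavelength_um * T)) - 1.0))

# (area fraction of pixel, emissivity at 10 um, kinetic temperature in K)
components = [(0.6, 0.96, 300.0), (0.3, 0.92, 310.0), (0.1, 0.75, 295.0)]
wl = 10.0  # microns
pixel_radiance = sum(f * e * planck(wl, T) for f, e, T in components)
```

Because Planck radiance is strongly nonlinear in temperature and wavelength, the mixture is linear in radiance but not in emissivity or temperature, which is why library emissivity spectra relate only indirectly to observed mixed-pixel spectra.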

  18. A dependent stress-strength interference model based on mixed copula function

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Jian Xiong; An, Zong Wen; Liu, Bo [School of Mechatronics Engineering, Lanzhou University of Technology, Lanzhou (China)

    2016-10-15

In the traditional stress-strength interference (SSI) model, stress and strength must satisfy the basic assumption of mutual independence. In practical engineering, however, a complex dependence between stress and strength often exists. To evaluate structural reliability when stress and strength are dependent, a mixed copula function is introduced into a new dependent SSI model. This model can fully characterize the dependence between stress and strength. The residual square sum method and a genetic algorithm are used to estimate the unknown parameters of the model. Finally, the validity of the proposed model is demonstrated via a practical case. The results show that the traditional SSI model, by ignoring the dependence between stress and strength, tends to overestimate product reliability compared with the new dependent SSI model.
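The quantity at the heart of any SSI model is the reliability R = P(strength > stress). As an illustration of how dependence changes that probability, the Monte Carlo sketch below uses a single Gaussian copula with lognormal margins as a simplified stand-in for the paper's mixed copula; all distribution parameters are hypothetical.

```python
import numpy as np

def dependent_ssi_reliability(n=200_000, rho=0.5, seed=0):
    """Monte Carlo estimate of R = P(strength > stress) when stress and
    strength are coupled through a Gaussian copula (a simplified stand-in
    for a mixed copula). The lognormal margins are hypothetical."""
    rng = np.random.default_rng(seed)
    # Correlated standard normals define the Gaussian copula
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    stress = np.exp(4.0 + 0.2 * z1)    # lognormal stress margin
    strength = np.exp(4.5 + 0.2 * z2)  # lognormal strength margin
    return float(np.mean(strength > stress))
```

Comparing `dependent_ssi_reliability(rho=0.0)` (the independence assumption) with `rho=0.5` shows that the reliability estimate shifts once dependence is modeled, which is the effect the abstract warns about when independence is wrongly assumed.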

  19. Canards and mixed-mode oscillations in a forest pest model

    DEFF Research Database (Denmark)

    Brøns, Morten; Kaasen, Rune

    2010-01-01

...of high pest concentration. For small values of the timescale of the young trees, the model can be reduced to a two-dimensional model. By a geometrical analysis we identify a canard explosion in the reduced model, that is, a change over a narrow parameter interval from outbreak dynamics to small oscillations around an endemic state. For larger values of the timescale of the young trees, the two-dimensional approximation breaks down, and a broader parameter interval with mixed-mode oscillations appears, replacing the simple canard explosion. The analysis relies only on simple and generic properties...
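A canard explosion can be reproduced in any generic fast-slow system with a folded nullcline. The sketch below uses the textbook example of a van der Pol oscillator with constant forcing (not the forest pest model itself) and measures the attractor amplitude on either side of the explosion, which occurs near a = 1 - eps/8.

```python
def canard_amplitude(a, eps=0.05, dt=0.002, t_end=400.0, t_tail=100.0):
    """Amplitude of the attractor of the fast-slow system
        eps * dx/dt = y - x**3/3 + x   (fast variable)
              dy/dt = a - x            (slow variable)
    integrated with classic RK4. A stable small cycle is born in a Hopf
    bifurcation at a = 1 and blows up into a relaxation oscillation over
    an exponentially narrow parameter interval near a = 1 - eps/8."""
    def f(x, y):
        return (y - x**3 / 3.0 + x) / eps, a - x

    # Start slightly off the equilibrium (x, y) = (a, a**3/3 - a)
    x, y = a + 0.01, a**3 / 3.0 - a
    n = int(t_end / dt)
    tail_start = int((t_end - t_tail) / dt)
    xmin, xmax = float("inf"), float("-inf")
    for i in range(n):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        if i >= tail_start:  # measure amplitude after transients decay
            xmin, xmax = min(xmin, x), max(xmax, x)
    return 0.5 * (xmax - xmin)
```

On the small-cycle side (e.g. a = 0.999) the amplitude stays small, while just past the canard value (e.g. a = 0.98) it jumps to a full relaxation oscillation, mirroring the "change over a narrow parameter interval" described in the abstract.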

  20. Mixed-order phase transition in a minimal, diffusion-based spin model.

    Science.gov (United States)

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with a hybrid phase transition. We call the model diffusion-based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.