Multiple model adaptive control with mixing
Kuipers, Matthew
Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system, called the supervisor, that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time-invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to those control laws whose solutions can be feasibly computed in real time, such as model-reference and pole-placement type controllers. Because its candidate controllers are computed offline, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple-model adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error, and that when the parameter estimate converges to its true value, which is guaranteed if a persistence-of-excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
A multiple objective mixed integer linear programming model for power generation expansion planning
Energy Technology Data Exchange (ETDEWEB)
Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)
2004-03-01
Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted for in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process. (Author)
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
Sharifi, N; Ozgoli, S; Ramezani, A
2017-06-01
Mixed immunotherapy and chemotherapy of tumours is one of the most efficient ways to improve cancer treatment strategies. However, it is important to design an effective treatment programme that optimizes the combination of immunotherapy and chemotherapy so as to diminish their imminent side effects, and control engineering techniques can be used for this. The method of multiple model predictive control (MMPC) is applied to the modified Stepanova model to induce the best combination of drug scheduling under an improved health-criteria profile. The proposed MMPC is a feedback scheme that can perform global optimization for both tumour volume and immune-competent cell density by enforcing multiple constraints. Although current studies usually assume that immunotherapy has no side effects, this paper presents a new method of mixed drug administration that implements several constraints for chemotherapy and immunotherapy by considering both drug toxicity and autoimmunity. With the designed controller, at most 57% and 28% of the full drug dosage are needed for chemotherapy and immunotherapy, respectively, in some instances. Therefore, the proposed controller requires lower drug dosages, which leads to suitable results with a perceptible reduction in side effects. It is observed that in the presence of MMPC, the amount of required drugs is minimized while the tumour volume is reduced. The efficiency of the presented method is illustrated through simulations, in which the system transfers from an initial condition in the malignant region of the state space (macroscopic tumour volume) into the benign region (microscopic tumour volume), where the immune system can control tumour growth. Copyright © 2017 Elsevier B.V. All rights reserved.
Furlotte, Nicholas A; Eskin, Eleazar
2015-05-01
Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.
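The speedup rests on a standard trick that can be sketched in the single-trait case: after rotating the data by the eigenvectors of the relatedness matrix, the covariance becomes diagonal, so each likelihood evaluation costs O(n) instead of O(n³). The toy below uses simulated data throughout; it is a hedged sketch of the idea, not the mvLMM implementation, and verifies that the rotated and direct log-likelihoods agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical relatedness (kinship) matrix: positive semi-definite by construction.
G = rng.standard_normal((n, 500))
K = G @ G.T / 500

X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.array([1.0, 0.5])
y = X @ beta + rng.multivariate_normal(np.zeros(n), 0.6 * K + 0.4 * np.eye(n))

# One-time O(n^3) spectral decomposition; afterwards every evaluation is O(n).
S, U = np.linalg.eigh(K)
y_rot, X_rot = U.T @ y, U.T @ X

def loglik_direct(sg, se):
    """Gaussian log-likelihood (up to a constant) with V = sg*K + se*I."""
    V = sg * K + se * np.eye(n)
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (logdet + r @ np.linalg.solve(V, r))

def loglik_rotated(sg, se):
    """Same quantity after rotation: V becomes diagonal with entries sg*S + se."""
    d = sg * S + se
    r = y_rot - X_rot @ beta
    return -0.5 * (np.sum(np.log(d)) + np.sum(r * r / d))

print(loglik_direct(0.6, 0.4), loglik_rotated(0.6, 0.4))
```

Because the eigendecomposition is computed once, scanning many variance-component values (or many genetic variants) reuses the rotated data and avoids repeated matrix factorizations.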
Estimation in a multiplicative mixed model involving a genetic relationship matrix
Directory of Open Access Journals (Sweden)
Eccleston John A
2009-04-01
Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A and a complex structure for the genotype-by-environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced-rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2018-01-01
Longitudinal studies with multiple outcomes often pose challenges for the statistical analysis. A joint model including all outcomes has the advantage of incorporating the simultaneous behavior but is often difficult to fit due to computational challenges. We consider 2 alternative approaches … pairwise fitting shows a larger loss in efficiency than the marginal models approach. Using an alternative to the joint modelling strategy will lead to some, but not necessarily a large, loss of efficiency for small sample sizes.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of the repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish Bayesian joint models that account for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
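The core of quantile regression (setting aside the Bayesian, partially linear, and measurement-error machinery of the paper) is minimizing the pinball (check) loss. A minimal sketch on simulated heavy-tailed data, with all names and numbers invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)
y = 1.0 + 0.5 * x + rng.standard_t(3, 400)     # heavy-tailed noise

def pinball(beta, tau):
    """Check loss: residuals above the line weighted tau, below weighted 1 - tau."""
    r = y - (beta[0] + beta[1] * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

fits = {tau: minimize(pinball, x0=[0.0, 0.0], args=(tau,), method="Nelder-Mead").x
        for tau in (0.25, 0.5, 0.75)}
for tau, b in fits.items():
    print(f"tau={tau}: intercept={b[0]:.2f}, slope={b[1]:.2f}")
```

Unlike mean regression, the 0.5-quantile fit is driven by the median of the errors, so the heavy tails of the t-distribution barely perturb the recovered line.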
Dilution refrigeration with multiple mixing chambers
International Nuclear Information System (INIS)
Coops, G.M.
1981-01-01
A dilution refrigerator is an instrument for reaching temperatures in the mK region in a continuous way. The temperature range can be extended and the cooling power enlarged by adding an extra mixing chamber, yielding a double mixing chamber system. In this thesis the theory of the multiple mixing chamber is presented and its validity tested by comparison with measurements. Measurements on a dilution refrigerator with a circulation rate up to 2.5 mmol/s are also reported. (Auth.)
Directory of Open Access Journals (Sweden)
Sugandha Aggarwal
2014-12-01
Judicious media planning decisions are crucial for successful advertising of products. Media planners extensively use mathematical models, supplemented with market research and expert opinion, to devise media plans. Media planning models discussed in the literature largely focus on single products, with limited studies related to multi-product media planning. In this paper we propose a media planning model to allocate a limited advertising budget among multiple products advertised in a segmented market and to determine the number of advertisements to be placed in different media. The proposed model is formulated considering both segment-specific and mass media vehicles to maximize the total advertising reach for each product. The model also incorporates the cross-product effect of advertising one product on the others. The proposed formulation is a multi-objective linear integer programming model, and interactive linear integer goal programming is discussed to solve it. A real-life case study is presented to illustrate the application of the proposed model.
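A drastically simplified version of such an allocation problem can be solved by exhaustive search over small integer grids. All costs, reach coefficients, and bounds below are hypothetical, and maximizing the worse of the two products' reach is a crude stand-in for the paper's interactive goal programming:

```python
from itertools import product

cost = [4.0, 2.5]            # hypothetical cost per advertisement in media 0 and 1
reach = [[3.0, 1.6],         # hypothetical reach per ad: product 0 in media 0, 1
         [2.2, 2.0]]         # product 1 in media 0, 1
budget = 40.0
max_ads = 5                  # upper bound per product per medium

best = None
for x00, x01, x10, x11 in product(range(max_ads + 1), repeat=4):
    spend = cost[0] * (x00 + x10) + cost[1] * (x01 + x11)
    if spend > budget:
        continue                                   # infeasible: over budget
    r0 = reach[0][0] * x00 + reach[0][1] * x01     # total reach, product 0
    r1 = reach[1][0] * x10 + reach[1][1] * x11     # total reach, product 1
    score = min(r0, r1)                            # balance products (max-min)
    if best is None or score > best[0]:
        best = (score, (x00, x01, x10, x11), spend)

print("balanced reach:", best[0], "plan:", best[1], "spend:", best[2])
```

Real instances need an integer programming solver rather than enumeration, but the feasibility check and objective are the same.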
International Nuclear Information System (INIS)
Xue Dongmei; De Baets, Bernard; Van Cleemput, Oswald; Hennessy, Carmel; Berglund, Michael; Boeckx, Pascal
2012-01-01
To identify different NO₃⁻ sources in surface water and to estimate their proportional contribution to the nitrate mixture, a dual-isotope approach and a Bayesian isotope mixing model were applied to six surface waters affected by agriculture, greenhouses in an agricultural area, and households. Annual mean δ¹⁵N–NO₃⁻ values were between 8.0 and 19.4‰, while annual mean δ¹⁸O–NO₃⁻ values ranged from 4.5 to 30.7‰. SIAR was used to estimate the proportional contribution of five potential NO₃⁻ sources (NO₃⁻ in precipitation, NO₃⁻ fertilizer, NH₄⁺ in fertilizer and rain, soil N, and manure and sewage). SIAR showed that "manure and sewage" contributed the most, "soil N", "NO₃⁻ fertilizer" and "NH₄⁺ in fertilizer and rain" contributed intermediate amounts, and "NO₃⁻ in precipitation" contributed the least. The SIAR output can be considered a "fingerprint" of the NO₃⁻ source contributions. However, the wide range of isotope values observed in the surface waters and in the NO₃⁻ sources limits its applicability. - Highlights: ► The dual-isotope approach (δ¹⁵N– and δ¹⁸O–NO₃⁻) identifies dominant nitrate sources in 6 surface waters. ► The SIAR model estimates proportional contributions for 5 nitrate sources. ► SIAR is a reliable approach to assess temporal and spatial variations of different NO₃⁻ sources. ► The wide range of isotope values observed in surface water and in the nitrate sources limits its applicability. - This paper successfully applied a dual-isotope approach and a Bayesian isotopic mixing model to identify and quantify 5 potential nitrate sources in surface water.
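The logic of such a Bayesian mixing model can be sketched with a crude approximate-Bayesian-computation stand-in for SIAR: draw source proportions from a Dirichlet prior, mix the source signatures linearly, and keep draws whose predicted mixture lies near the observed δ¹⁵N/δ¹⁸O pair. All signatures and the observation are invented, and only three sources (rather than the paper's five) are used:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical mean (d15N, d18O) signatures for three nitrate sources, in permil.
sources = np.array([[15.0,  5.0],    # manure and sewage
                    [ 5.0,  3.0],    # soil N
                    [ 1.0, 22.0]])   # NO3- in precipitation
observed = np.array([10.0, 7.0])     # measured mixture in one stream (invented)

draws = rng.dirichlet(np.ones(3), size=200_000)    # uniform prior on proportions
predicted = draws @ sources                        # linear isotope mass balance
keep = np.linalg.norm(predicted - observed, axis=1) < 0.5   # ABC tolerance
posterior = draws[keep]
print("accepted:", posterior.shape[0], "mean proportions:", posterior.mean(axis=0))
```

SIAR itself additionally models source variability, fractionation, and residual error, but the accept/reject picture above is the essential source-apportionment mechanism.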
International Nuclear Information System (INIS)
Lee, S; Richard Dimenna, R; David Tamburello, D
2008-01-01
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the duration of their operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times, and in correspondingly high maintenance and repair costs. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and
Energy Technology Data Exchange (ETDEWEB)
Lee, S; Richard Dimenna, R; David Tamburello, D
2008-11-13
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the duration of their operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times, and in correspondingly high maintenance and repair costs. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Energy Technology Data Exchange (ETDEWEB)
Lee, S; Dimenna, R; Tamburello, D
2011-02-14
Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (typically ~13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators for answering these questions in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?
System equivalent model mixing
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2015-06-15
We develop a new modeling approach to enhance a recently proposed method to detect increases of contrast-enhancing lesions (CELs) on repeated magnetic resonance imaging, which have been used as an indicator for potential adverse events in multiple sclerosis clinical trials. The method signals patients with unusual increases in CEL activity by estimating the probability of observing CEL counts as large as those observed on a patient's recent scans conditional on the patient's CEL counts on previous scans. This conditional probability index (CPI), computed based on a mixed-effect negative binomial regression model, can vary substantially depending on the choice of distribution for the patient-specific random effects. Therefore, we relax this parametric assumption to model the random effects with an infinite mixture of beta distributions, using the Dirichlet process, which effectively allows any form of distribution. To our knowledge, no previous literature considers a mixed-effect regression for longitudinal count variables where the random effect is modeled with a Dirichlet process mixture. As our inference is in the Bayesian framework, we adopt a meta-analytic approach to develop an informative prior based on previous clinical trials. This is particularly helpful at the early stages of trials when less data are available. Our enhanced method is illustrated with CEL data from 10 previous multiple sclerosis clinical trials. Our simulation study shows that our procedure estimates the CPI more accurately than parametric alternatives when the patient-specific random effect distribution is misspecified and that an informative prior improves the accuracy of the CPI estimates. Copyright © 2015 John Wiley & Sons, Ltd.
Hamid, Arian Zad
2016-12-01
We analytically investigate multiple-quantum (MQ) NMR dynamics in a mixed three-spin (1/2, 1, 1/2) system with the XXX Heisenberg model in an external homogeneous magnetic field B. A single-ion anisotropy ζ is considered for the spin-1. The dependence of the MQ NMR coherence intensities on their orders (zeroth and second) is obtained for the two spin pairs (1, 1/2) and (1/2, 1/2) of the tripartite system. The dynamics of the pairwise quantum entanglement of the bipartite (sub)systems (1, 1/2) and (1/2, 1/2), permanently coupled by coupling constants J1 and J2, respectively, is also investigated by means of concurrence and fidelity. Some straightforward comparisons are then made between these quantities and the intensities of the MQ NMR coherences, and some interesting results are reported. We also show that the time evolution of MQ coherences based on the reduced density matrix of the spin pair (1, 1/2) is closely connected with the dynamics of the pairwise entanglement. Finally, we show that the zeroth-order MQ coherence of the spin pair (1, 1/2) can serve as an entanglement witness in certain time intervals.
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction is given to the basic ideas of fixed effects models, random effects models, and mixed models.
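As a concrete instance of the simplest mixed model treated in such texts, a random-intercept model can be fitted by profiling the Gaussian likelihood over the variance ratio. The sketch below assumes a balanced design and simulated data; it is an illustration of the standard theory, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(2)
g, m = 60, 8                           # 60 groups, 8 observations each (balanced)
u = rng.normal(0, 1.0, g)              # random intercepts, true sigma_u^2 = 1.0
x = rng.standard_normal((g, m))
y = 2.0 + 0.7 * x + u[:, None] + rng.normal(0, np.sqrt(0.5), (g, m))  # sigma_e^2 = 0.5

def profile_ml(lam):
    """Log-likelihood (up to a constant) profiled over beta and sigma_e^2,
    for a given variance ratio lam = sigma_u^2 / sigma_e^2."""
    w = lam / (1 + m * lam)            # (I + lam*J)^{-1} = I - w*J within a group
    XtVX, XtVy = np.zeros((2, 2)), np.zeros(2)
    for i in range(g):
        Xi = np.column_stack([np.ones(m), x[i]])
        Vinv = np.eye(m) - w * np.ones((m, m))
        XtVX += Xi.T @ Vinv @ Xi
        XtVy += Xi.T @ Vinv @ y[i]
    beta = np.linalg.solve(XtVX, XtVy)            # GLS fixed effects given lam
    rss = sum(r @ r - w * r.sum() ** 2
              for r in (y[i] - np.column_stack([np.ones(m), x[i]]) @ beta
                        for i in range(g)))
    n = g * m
    se2 = rss / n                                 # profiled residual variance
    return -0.5 * (n * np.log(se2) + g * np.log(1 + m * lam) + n), beta, se2

lam_hat = max(np.linspace(0.01, 5, 200), key=lambda l: profile_ml(l)[0])
_, beta_hat, se2_hat = profile_ml(lam_hat)
print("beta:", beta_hat, "sigma_u^2:", lam_hat * se2_hat, "sigma_e^2:", se2_hat)
```

The Sherman-Morrison identity keeps each within-group solve cheap; general (unbalanced, crossed) designs are what dedicated mixed-model software handles.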
Schadewaldt, Verena; McInnes, Elizabeth; Hiller, Janet E; Gardner, Anne
2016-07-29
In 2010 policy changes were introduced to the Australian healthcare system that granted nurse practitioners access to the public health insurance scheme (Medicare) subject to a collaborative arrangement with a medical practitioner. These changes facilitated nurse practitioner practice in primary healthcare settings. This study investigated the experiences and perceptions of nurse practitioners and medical practitioners who worked together under the new policies and aimed to identify enablers of collaborative practice models. A multiple case study of five primary healthcare sites was undertaken, applying mixed methods research. Six nurse practitioners, 13 medical practitioners and three practice managers participated in the study. Data were collected through direct observations, documents and semi-structured interviews as well as questionnaires including validated scales to measure the level of collaboration, satisfaction with collaboration and beliefs in the benefits of collaboration. Thematic analysis was undertaken for qualitative data from interviews, observations and documents, followed by deductive analysis whereby thematic categories were compared to two theoretical models of collaboration. Questionnaire responses were summarised using descriptive statistics. Using the scale measurements, nurse practitioners and medical practitioners reported high levels of collaboration, were highly satisfied with their collaborative relationship and strongly believed that collaboration benefited the patient. The three themes developed from qualitative data showed a more complex and nuanced picture: 1) Structures such as government policy requirements and local infrastructure disadvantaged nurse practitioners financially and professionally in collaborative practice models; 2) Participants experienced the influence and consequences of individual role enactment through the co-existence of overlapping, complementary, traditional and emerging roles, which blurred perceptions of
Abbring, J.H.
2009-01-01
We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with
Cluster Correlation in Mixed Models
Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.
2000-10-01
We evaluate the dependence of the cluster correlation length, r_c, on the mean intercluster separation, D_c, for three models with critical matter density, vanishing vacuum energy (Λ = 0), and COBE normalization: a tilted cold dark matter (tCDM) model (n = 0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh = 0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ_8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ_8/σ_25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h⁻¹ Mpc) with high mass and distance resolution, enabling us to build artificial samples of clusters whose total number and mass range allow us to cover the same D_c interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n = 1 critical-density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.
Kriging with mixed effects models
Directory of Open Access Journals (Sweden)
Alessio Pollice
2007-10-01
In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.
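For reference, plain ordinary kriging (without the mixed-effects extension the paper reviews) reduces to solving one linear system per prediction point. Everything below is an assumption of the sketch: the exponential covariance, its range, and the simulated field:

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (30, 2))                 # observation locations
cov = lambda h: np.exp(-h / 3.0)                  # exponential covariance, range 3

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = cov(D) + 1e-10 * np.eye(30)                   # tiny jitter for stability
z = np.linalg.cholesky(C) @ rng.standard_normal(30)  # one Gaussian-field sample

def krige(x0):
    """Ordinary kriging predictor at x0; the Lagrange row enforces unbiasedness."""
    c0 = cov(np.linalg.norm(pts - x0, axis=1))
    A = np.block([[C, np.ones((30, 1))],
                  [np.ones((1, 30)), np.zeros((1, 1))]])
    weights = np.linalg.solve(A, np.append(c0, 1.0))[:30]
    return weights @ z

print(krige(np.array([5.0, 5.0])))
```

With no nugget effect, the predictor interpolates exactly: kriging at an observed location returns the observed value.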
Mathematical study of mixing models
International Nuclear Information System (INIS)
Lagoutiere, F.; Despres, B.
1999-01-01
This report presents the construction and study of a class of models that describe the behavior of compressible, non-reactive Eulerian fluid mixtures. Mixture models can have two different applications. Either they are used to describe physical mixtures, in the case of a true zone of extensive mixing (but then this modelization is incomplete and must be considered only as a point of departure for the elaboration of truly relevant mixture models); or they are used to solve the problem of numerical mixture. This problem appears during the discretization of an interface separating fluids with different equations of state: the zone of numerical mixing is the set of meshes that cover the interface. Attention is focused on numerical mixtures, for which the (physical) hypothesis of non-miscibility yields two equations (the sixth and the eighth of the system). It is important to emphasize that even in the case of a purely numerical mixture, the presence of several fluids in one and the same place (the same mesh) has to be taken into account. This is formalized by allowing mass fractions to take all values between 0 and 1, which is not at odds with the equations that derive from the hypothesis of non-miscibility. One way of looking at things is to consider that there are two scales of observation: the physical scale, at which one observes the separation of fluids, and the numerical scale, given by the fineness of the mesh, at which a mixture appears. In this work, mixtures are considered from the mathematical angle (both in the elaboration phase and during their study). In particular, Chapter 5 shows a result of model degeneration for a non-extended mixing zone (the case of an interface): this justifies the use of these models in the case of numerical mixing. All the models are based on the classical model of non-viscous compressible fluids recalled in Chapter 2. In Chapter 3, the central point of the elaboration of the class of models is
Multiple-jet thermal mixing in a piping tee
International Nuclear Information System (INIS)
Lykoudis, P.S.; Hagar, R.C.
1979-01-01
Piping tees that are used to mix fluid streams at different temperatures are subjected to possibly severe thermal and mechanical stresses. There is reason to suspect that mixing in a piping tee could be improved by injecting the fluid streams into the tee through multiple jets. This paper reports the results of an experimental investigation of the effects of multiple-jet injection on mixing in a piping tee. The experimental work involves the measurement of the temperature fluctuation intensity with a hot-film sensor downstream of a simple 22.22-mm (7/8-in.) diameter tee with multiple-jet-injected hot and cold streams of water. The jets were provided by holes drilled in plates that partially blocked the inlet streams; 26 pairs of plates were investigated. The number of holes per plate varied from 1 to 51; the jet diameters ranged from 5 to 68% of the tee diameter. The inlet stream Reynolds number upstream of the jet plates was roughly 15,500 for each stream. The data indicated that the root mean square (rms) temperature fluctuation intensity measured at the tee outlet decreased dramatically as the jet plate cross-sectional area void fraction was decreased. When the jets emanating from the tee plates were misaligned, the reduction of the rms temperature fluctuation was not as high as when the jets were aligned. The rate of decay of the intensity downstream of the tee for most of the plates investigated was found to agree well with the -3/4 power decay law predicted by Corrsin's theory of scalar decay. However, unusual features in the intensity decay data were also observed, such as an increase of the intensity several diameters downstream before continuing to decay
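The -3/4 power decay law cited in this record can be stated compactly; a sketch, with x the downstream distance from the tee, D the tee diameter, and ΔT the inlet temperature difference (these symbol choices are assumptions, not taken from the paper):

```latex
\frac{\sqrt{\overline{\theta'^{2}}}}{\Delta T} \;\propto\; \left(\frac{x}{D}\right)^{-3/4}
```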
Linear mixed models in sensometrics
DEFF Research Database (Denmark)
Kuznetsova, Alexandra
Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective is to collect data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products… Standard statistical software packages can be used for some of the purposes… The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer… An open-source software tool, ConsumerCheck, was developed in this project and is now available to everyone. This will represent a major step forward concerning this important problem in modern consumer-driven product development, improving the quality of decision making in Danish as well as international food companies and other companies using the same methods.
Multifractal Modeling of Turbulent Mixing
Samiee, Mehdi; Zayernouri, Mohsen; Meerschaert, Mark M.
2017-11-01
Stochastic processes in random media are emerging as interesting tools for modeling anomalous transport phenomena. Applications include intermittent passive scalar transport with background noise in turbulent flows, which are observed in atmospheric boundary layers, turbulent mixing in reactive flows, and long-range dependent flow fields in disordered/fractal environments. In this work, we propose a nonlocal scalar transport equation involving the fractional Laplacian, where the corresponding fractional index is linked to the multifractal structure of the nonlinear passive scalar power spectrum. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).
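A schematic form of the kind of nonlocal scalar transport equation described in this record, with φ the passive scalar, u the advecting velocity, κ a diffusivity, and α the fractional index linked to the multifractal structure of the scalar power spectrum (all symbol choices here are assumptions for illustration):

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = -\kappa\,(-\Delta)^{\alpha/2}\,\phi,
  \qquad 0 < \alpha \le 2,
```

where α = 2 recovers classical Fickian diffusion and smaller α gives heavier-tailed, nonlocal transport.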
Reliability assessment of competing risks with generalized mixed shock models
International Nuclear Information System (INIS)
Rafiee, Koosha; Feng, Qianmei; Coit, David W.
2017-01-01
This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
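The three nonfatal-shock effects listed in this abstract can be illustrated with a toy Monte Carlo sketch. All rates, magnitudes, and the trigger condition below are hypothetical numbers chosen for illustration, not the paper's model:

```python
import random

def simulate_unit(t_max, rate0, threshold0, seed=1):
    """Toy simulation of the three nonfatal-shock effects: 1) an immediate
    jump in degradation, 2) an accelerated degradation rate, and 3) a
    reduced hard-failure threshold.  All numbers are illustrative."""
    rng = random.Random(seed)
    degradation, rate, threshold = 0.0, rate0, threshold0
    for t in range(t_max):
        degradation += rate                  # gradual wear
        if rng.random() < 0.1:               # a nonfatal shock arrives
            degradation += 0.5               # effect 1: damage jump
            if degradation > 2.0:            # trigger condition (assumed)
                rate *= 1.2                  # effect 2: faster wear
                threshold *= 0.95            # effect 3: weaker unit
        if degradation >= threshold:
            return t                         # failure time
    return t_max                             # survived the horizon

lifetime = simulate_unit(t_max=1000, rate0=0.01, threshold0=10.0)
```

Averaging `lifetime` over many seeds would give a crude reliability estimate under these assumed dynamics.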
Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.
2017-06-01
An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
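The assumed β-PDF baseline mentioned at the end of this abstract rests on a simple moment-matching trick: pick the beta shape parameters so that the PDF reproduces the scalar's mean and variance. A minimal sketch of that matching (the function names are ours, and the test values are arbitrary):

```python
def beta_pdf_params(mean, var):
    """Shape parameters (a, b) of the assumed beta-PDF on [0, 1] matched
    to the first two moments.  Requires 0 < var < mean * (1 - mean)."""
    s = mean * (1.0 - mean) / var - 1.0
    return mean * s, (1.0 - mean) * s

def beta_raw_moment(a, b, k):
    """k-th raw moment of Beta(a, b): prod_{j<k} (a + j) / (a + b + j).
    Used here only to verify the moment match."""
    m = 1.0
    for j in range(k):
        m *= (a + j) / (a + b + j)
    return m

a, b = beta_pdf_params(mean=0.3, var=0.05)
```

By construction the first raw moment equals the mean and the second equals var + mean**2; higher moments, and hence the PDF shape, are then fixed, which is exactly the rigidity the β-EQMOM approach relaxes.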
Mixed multiplicities for arbitrary ideals and generalized Buchsbaum-Rim multiplicities
International Nuclear Information System (INIS)
Callejas-Bedregal, R.; Jorge Perez, V.H.
2005-12-01
We introduce first the notion of mixed multiplicities for arbitrary ideals in a local d-dimensional noetherian ring (A, m), which, in some sense, generalizes the concept of mixed multiplicities for m-primary ideals. We also generalize Teissier's Product Formula for a set of arbitrary ideals, and extend the notion of the Buchsbaum-Rim multiplicity (in short, BR-multiplicity) of a submodule of a free module to the case where the submodule no longer has finite colength. For a submodule M of A^p we introduce a sequence e_k^{BR}(M), k = 0, ..., d + p - 1, which in the ideal case coincides with the multiplicity sequence c_0(I, A), ..., c_d(I, A) defined for an arbitrary ideal I of A by Achilles and Manaresi in [AM]. In the case that M has finite colength in A^p and is totally decomposable, we prove that our BR-multiplicity sequence essentially reduces to the standard BR-multiplicity of M. (author)
Multiple Estimation Architecture in Discrete-Time Adaptive Mixing Control
Directory of Open Access Journals (Sweden)
Simone Baldi
2013-05-01
Adaptive mixing control (AMC) is a recently developed control scheme for uncertain plants, in which the control action coming from a bank of precomputed controllers is mixed based on the parameter estimates generated by an on-line parameter estimator. Even though the stability of the control scheme, also in the presence of modeling errors and disturbances, has been shown analytically, its transient performance might be sensitive to the initial conditions of the parameter estimator. In particular, for some initial conditions, transient oscillations may not be acceptable in practical applications. In order to account for such a possible phenomenon and to improve the learning capability of the adaptive scheme, in this paper a new mixing architecture is developed, involving the use of parallel parameter estimators, or multi-estimators, each one working on a small subset of the uncertainty set. A supervisory logic, using performance signals based on the past and present estimation error, selects the parameter estimate used to determine the mixing of the controllers. The stability and robustness properties of the resulting approach, referred to as multi-estimator adaptive mixing control (Multi-AMC), are analytically established. Besides, extensive simulations demonstrate that the scheme improves the transient performance of the original AMC with a single estimator. The control scheme and the analysis are carried out in a discrete-time framework, for easier implementation of the method in digital control.
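The core mixing idea, blending precomputed candidate controllers smoothly according to the current parameter estimate rather than hard-switching between them, can be sketched in a few lines. The Gaussian weighting, the scalar gains, and all numbers below are illustrative assumptions, not the scheme's actual mixing law:

```python
import math

def mixing_weights(theta_hat, design_points, width=1.0):
    """Smooth mixing signal: candidate controller i, tuned to design point
    design_points[i], contributes more the closer the current parameter
    estimate theta_hat is to that point.  Weights sum to one."""
    raw = [math.exp(-((theta_hat - th) / width) ** 2) for th in design_points]
    total = sum(raw)
    return [r / total for r in raw]

def mixed_control(theta_hat, design_points, candidate_gains, error):
    """Certainty-equivalence mixing: blend the candidate control actions
    with the weights above instead of switching discontinuously."""
    w = mixing_weights(theta_hat, design_points)
    return sum(wi * k * error for wi, k in zip(w, candidate_gains))

u = mixed_control(theta_hat=0.4, design_points=[0.0, 1.0],
                  candidate_gains=[-2.0, -5.0], error=1.0)
```

Because the weights vary continuously with theta_hat, the control signal stays smooth as the estimate drifts across the uncertainty set, which is the chattering-avoidance property emphasized in the abstract.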
Theoretical Models of Neutrino Mixing Recent Developments
Altarelli, Guido
2009-01-01
The data on neutrino mixing are at present compatible with Tri-Bimaximal (TB) mixing. If one takes this indication seriously, then the models that lead to TB mixing in first approximation are particularly interesting, and A4 models are prominent in this list. However, the agreement of TB mixing with the data could still be an accident. We discuss a recent model based on S4 where Bimaximal mixing is instead valid at leading order and the large corrections needed to reproduce the data arise from the diagonalization of charged leptons. The value of $\theta_{13}$ could distinguish between the two alternatives.
Mixed models for predictive modeling in actuarial science
Antonio, K.; Zhang, Y.
2012-01-01
We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of model. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques
Mixed-mode modelling mixing methodologies for organisational intervention
Clarke, Steve; Lehaney, Brian
2001-01-01
The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...
Parker, Stephen; Dark, Frances; Newman, Ellie; Korman, Nicole; Meurk, Carla; Siskind, Dan; Harris, Meredith
2016-06-02
A novel staffing model integrating peer support workers and clinical staff within a unified team is being trialled at community-based residential rehabilitation units in Australia. This paper presents a mixed-methods protocol for the longitudinal evaluation of the outcomes, expectations and experiences of care by consumers and staff under this staffing model: two units using it will be compared with one unit operating a traditional clinical staffing model. The study is unique with regard to the context, the longitudinal approach and the consideration of multiple stakeholder perspectives. The longitudinal mixed-methods design integrates a quantitative evaluation of the outcomes of care for consumers at three residential rehabilitation units with an applied qualitative research methodology. The quantitative component utilizes a prospective cohort design to explore whether equivalent outcomes are achieved through engagement at residential rehabilitation units operating integrated and clinical staffing models. Comparative data will be available from the time of admission, discharge and the 12-month period post-discharge from the units. Additionally, retrospective data for the 12-month period prior to admission will be utilized to consider changes in functioning pre and post engagement with residential rehabilitation care. The primary outcome will be change in psychosocial functioning, assessed using the total score on the Health of the Nation Outcome Scales (HoNOS). Planned secondary outcomes will include changes in symptomatology, disability, recovery orientation, carer quality of life, emergency department presentations, psychiatric inpatient bed days, and psychological distress and wellbeing. Planned analyses will include: cohort description; hierarchical linear regression modelling of the predictors of change in HoNOS following CCU care; and descriptive comparisons of the costs associated with the two staffing models. The qualitative component utilizes a pragmatic approach to grounded theory, with
Directory of Open Access Journals (Sweden)
J. Lu
2010-04-01
A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS) representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict that mixing state affects the amount and types of semivolatile organics that partition to available aerosol phases, causing external mixtures to produce a more size-varying composition than internal mixtures. Both coagulation and condensation contribute to the mixing of emitted particles, producing a collection of multiple compositionally distinct aerosol populations that exists somewhere between the extremes of a strictly external or internal mixture. The selection of mixing criteria has a significant impact on the size and type of individual populations that compose the modeled aerosol mixture. Computational demands for external mixture modeling are significant and can be controlled by limiting the number of aerosol populations used in the model.
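The bookkeeping behind a representation like this, binning each particle by size and by a user-specified compositional criterion, can be sketched simply. The black-carbon mass-fraction cut, the size bins, and the population labels below are hypothetical choices, not the DAMS defaults:

```python
def classify_particles(particles, bc_cut=0.1, size_cuts=(0.1, 1.0)):
    """Toy mixing-state bookkeeping: assign each (diameter_um, bc_fraction)
    particle to a population keyed by size bin and by whether its
    black-carbon mass fraction exceeds a user-specified criterion."""
    populations = {}
    for diameter_um, bc_fraction in particles:
        size_bin = sum(diameter_um > c for c in size_cuts)   # 0, 1, or 2
        comp = "BC-rich" if bc_fraction >= bc_cut else "BC-poor"
        populations.setdefault((size_bin, comp), []).append(
            (diameter_um, bc_fraction))
    return populations

pops = classify_particles([(0.05, 0.02), (0.5, 0.3), (2.0, 0.05)])
```

Tightening `bc_cut` or adding size cuts multiplies the number of tracked populations, which is the computational-cost trade-off the abstract notes.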
Geometrical model of multiple production
International Nuclear Information System (INIS)
Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.
1988-01-01
The relation between geometrical and KNO-scaling and their violation is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given
Linear mixed models for longitudinal data
Molenberghs, Geert
2000-01-01
This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...
Flapping model of scalar mixing in turbulence
International Nuclear Information System (INIS)
Kerstein, A.R.
1991-01-01
Motivated by the fluctuating plume model of turbulent mixing downstream of a point source, a flapping model is formulated for application to other configurations. For the scalar mixing layer, simple expressions for single-point scalar fluctuation statistics are obtained that agree with measurements. For a spatially homogeneous scalar mixing field, the family of probability density functions previously derived using mapping closure is reproduced. It is inferred that single-point scalar statistics may depend primarily on large-scale flapping motions in many cases of interest, and thus that multipoint statistics may be the principal indicators of finer-scale mixing effects
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
Model Information Exchange System (MIXS).
2013-08-01
Many travel demand forecast models operate at state, regional, and local levels. While they share the same physical network in overlapping geographic areas, they use different and uncoordinated modeling networks. This creates difficulties for models ...
Modeling of particle mixing in the atmosphere
International Nuclear Information System (INIS)
Zhu, Shupeng
2015-01-01
This thesis presents a newly developed size-composition resolved aerosol model (SCRAM), which is able to simulate the dynamics of externally-mixed particles in the atmosphere, and evaluates its performance in three-dimensional air-quality simulations. The main work is split into four parts. First, the research context of external mixing and aerosol modelling is introduced. Secondly, the development of the SCRAM box model is presented along with validation tests. Each particle composition is defined by the combination of mass-fraction sections of its chemical components or aggregates of components. The three main processes involved in aerosol dynamics (nucleation, coagulation, condensation/evaporation) are included in SCRAM. The model is first validated by comparisons with published reference solutions for coagulation and condensation/evaporation of internally-mixed particles. The particle mixing state is investigated in a 0-D simulation using data representative of air pollution at a traffic site in Paris. The relative influence on the mixing state of the different aerosol processes and of the algorithm used to model condensation/evaporation (dynamic evolution or bulk equilibrium between particles and gas) is studied. Then, SCRAM is integrated into the Polyphemus air quality platform and used to conduct simulations over Greater Paris during the summer period of 2009. This evaluation showed that SCRAM gives satisfactory results for both PM2.5/PM10 concentrations and aerosol optical depths, as assessed from comparisons to observations. Besides, the model allows us to analyze the particle mixing state, as well as the impact of the mixing state assumption made in the modelling on particle formation, aerosol optical properties, and cloud condensation nuclei activation. Finally, two simulations are conducted during the winter campaign of MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric Pollution and climate effects, and Integrated tools for
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
International Nuclear Information System (INIS)
Wang Hai-Hua; Sun Xian-Ming
2012-01-01
The mixture of water cloud droplets with black carbon impurities is modeled by external and internal mixing models. The internal mixing model is represented by a two-layered sphere (a water cloud droplet containing black carbon (BC) inclusions), and the single scattering and absorption characteristics are calculated at the visible wavelength of 0.55 μm by using the Lorenz-Mie theory. The external mixing model is developed assuming that the same amount of BC particles is mixed with the water droplets externally. The multiple scattering characteristics are computed by using the Monte Carlo method. The results show that when the size of the BC aerosol is small, the reflection intensity of the internal mixing model is larger than that of the external mixing model. However, if the size of the BC aerosol is large, the absorption of the internal mixing model will be larger than that of the external mixing model.
Mixed-effects regression models in linguistics
Heylen, Kris; Geeraerts, Dirk
2018-01-01
When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed. In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...
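The shrinkage behavior that makes mixed models attractive for grouped data can be shown with a minimal numeric sketch: group-specific intercept predictions are pulled toward the grand mean in proportion to the between-group versus residual variance. The variance components are assumed known here, which real mixed-model software estimates instead; function names and numbers are ours:

```python
def shrunken_intercepts(group_means, group_sizes, sigma2_e, sigma2_u):
    """Predicted group-specific intercepts for a random-intercept model
    with known residual variance sigma2_e and between-group variance
    sigma2_u: each group mean is shrunk toward the grand mean."""
    n_total = sum(group_sizes)
    grand = sum(m * n for m, n in zip(group_means, group_sizes)) / n_total
    preds = []
    for m, n in zip(group_means, group_sizes):
        shrink = sigma2_u / (sigma2_u + sigma2_e / n)   # in (0, 1)
        preds.append(grand + shrink * (m - grand))
    return preds

# two balanced groups of 4 observations each (illustrative numbers)
blups = shrunken_intercepts([10.0, 14.0], [4, 4], sigma2_e=4.0, sigma2_u=1.0)
```

Larger groups or larger between-group variance push `shrink` toward 1 (little pooling); small noisy groups are pulled strongly toward the grand mean, which is how mixed models handle unbalanced datasets gracefully.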
Mixed models theory and applications with R
Demidenko, Eugene
2013-01-01
Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g
Statistical models of global Langmuir mixing
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on the surface ocean may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
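The mechanics of applying such an enhancement factor are simple to sketch. The particular closed form below, sqrt(1 + 0.080 * La_t**-4), is the McWilliams-Sullivan-style scaling in the turbulent Langmuir number La_t; the papers discussed here use more elaborate wind- and wave-dependent fits, so treat this as an assumed stand-in:

```python
import math

def langmuir_enhancement(la_t):
    """Enhancement factor applied to the boundary-layer turbulent velocity
    scale as a function of the turbulent Langmuir number La_t.  Approaches
    1 (no enhancement) as La_t grows, i.e. as wave effects weaken."""
    return math.sqrt(1.0 + 0.080 * la_t ** -4)

def enhanced_velocity_scale(w_turb, la_t):
    """Langmuir-enhanced turbulent velocity scale for the K-Profile
    Parameterization (w_turb in m/s, value here illustrative)."""
    return langmuir_enhancement(la_t) * w_turb

w = enhanced_velocity_scale(w_turb=0.01, la_t=0.3)
```

A typical strongly wave-driven state (La_t near 0.3) roughly triples the velocity scale under this form, which is the size of effect that deepens simulated mixed layers.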
Modeling of Salt Solubilities in Mixed Solvents
DEFF Research Database (Denmark)
Chiavone-Filho, O.; Rasmussen, Peter
2000-01-01
A method to correlate and predict salt solubilities in mixed solvents using a UNIQUAC+Debye-Hückel model is developed. The UNIQUAC equation is applied in a form with temperature-dependent parameters. The Debye-Hückel model is extended to mixed solvents by properly evaluating the dielectric constants and the liquid densities of the solvent media. To normalize the activity coefficients, the symmetric convention is adopted. Thermochemical properties of the salt are used to estimate the solubility product. It is shown that the proposed procedure can describe with good accuracy a series of salt…
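The electrostatic long-range part of such a model reduces, in its simplest extended form, to a one-line activity-coefficient estimate. The constant A below (mole-fraction scale, water at 25 C) and the simple 1/(1 + sqrt(I)) denominator are illustrative assumptions; the model in this record instead evaluates A from the mixed solvent's dielectric constant and density:

```python
import math

def debye_huckel_log_gamma(z_plus, z_minus, ionic_strength, A=1.1744):
    """Extended Debye-Hueckel estimate of ln(mean ionic activity
    coefficient) for a salt with ion charges z_plus, z_minus at the given
    ionic strength.  A is the solvent-dependent Debye-Hueckel constant."""
    s = math.sqrt(ionic_strength)
    return -A * abs(z_plus * z_minus) * s / (1.0 + s)

ln_gamma = debye_huckel_log_gamma(1, -1, ionic_strength=0.01)
```

Note the |z+ z-| scaling: a 2:2 salt at the same ionic strength gets four times the (negative) logarithmic correction of a 1:1 salt, which is why the term matters most for multivalent electrolytes.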
Handbook of mixed membership models and their applications
Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E
2014-01-01
In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
He, Cenlin; Liou, Kuo-Nan; Takano, Yoshi; Yang, Ping; Qi, Ling; Chen, Fei
2018-01-01
We quantify the effects of grain shape and multiple black carbon (BC)-snow internal mixing on snow albedo by explicitly resolving grain shape and mixing structures. Nonspherical snow grains tend to have higher albedos than spheres with the same effective sizes, and the albedo difference due to shape effects increases with grain size, reaching up to 0.013 and 0.055 for effective radii of 1,000 μm at visible and near-infrared bands, respectively. BC-snow internal mixing reduces snow albedo; compared with external mixing, internal mixing enhances the snow albedo reduction by a factor of 1.2-2.0 at visible wavelengths, depending on BC concentration and snow grain shape. The opposite effects of snow grain nonsphericity and BC-snow internal mixing on albedo reductions point to the need to investigate these two factors simultaneously in climate modeling. We further develop parameterizations for snow albedo and its reduction that account for grain shape and BC-snow internal/external mixing. Combining the parameterizations with BC-in-snow measurements in China, North America, and the Arctic, we estimate that nonspherical snow grains reduce BC-induced albedo radiative effects by up to 50% compared with spherical grains. Moreover, BC-snow internal mixing enhances the albedo effects by up to 30% (130%) for spherical (nonspherical) grains relative to external mixing. The overall uncertainty induced by snow grain shape and BC-snow mixing state is about 21-32%.
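The interplay of the two opposing factors can be illustrated with simple bookkeeping. The multipliers below are illustrative midpoints of the ranges quoted in the abstract (internal mixing enhances the BC-induced reduction by 1.2-2.0×; nonspherical grains damp the BC effect by up to ~50%); they are not the paper's fitted parameterization, and the baseline reduction is a placeholder input.

```python
def bc_albedo_reduction(base_reduction, internal_mixing=False,
                        nonspherical=False,
                        internal_factor=1.5, shape_factor=0.5):
    """Scale a baseline BC-induced snow albedo reduction by mixing state and
    grain shape.  Factor values are illustrative midpoints of the ranges in
    the abstract, not the paper's fitted values."""
    r = base_reduction
    if internal_mixing:
        r *= internal_factor   # internal mixing enhances the reduction
    if nonspherical:
        r *= shape_factor      # nonspherical grains damp the BC effect
    return r

print(bc_albedo_reduction(0.02, internal_mixing=True))
print(bc_albedo_reduction(0.02, internal_mixing=True, nonspherical=True))
```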
Multiple Pregnancies in CKD Patients: An Explosive Mix
Arduino, Silvana; Attini, Rossella; Parisi, Silvia; Fassio, Federica; Biolcati, Marlisa; Pagano, Arianna; Bossotti, Carlotta; Vasario, Elena; Borgarello, Valentina; Daidola, Germana; Ferraresi, Martina; Gaglioti, Pietro; Todros, Tullia
2013-01-01
Background and objectives: CKD and multiple pregnancies bear important risks for pregnancy outcomes. The aim of the study was to define the risk for adverse pregnancy-related outcomes in multiple pregnancies in CKD patients in comparison with a control group of “low-risk” multiple pregnancies. Design, setting, participants, & measurements: The study was performed in the Maternal Hospital of the University of Turin, Italy. Of 314 pregnancies referred in CKD (2000–2011), 20 were multiple (15 twin deliveries). Control groups consisted of 379 low-risk multiple pregnancies (314 twin deliveries) and 19 (15 twin deliveries) cases with hypertension-collagen diseases. Baseline data and outcomes were compared by univariate and logistic regression analyses. Results: The prevalence of multiple pregnancies was relatively high in the CKD population (6.4%); all referred cases were in early CKD stages (I-II); both creatinine (0.68 to 0.79 mg/dl; P=0.010) and proteinuria (0.81 to 3.42 g/d; P=0.041) significantly increased from referral to delivery. No significant difference in demographic data at baseline was found between cases and low-risk controls. CKD was associated with higher risk of adverse pregnancy outcomes versus low-risk twin pregnancies. Statistical significance was reached for preterm delivery (<34 weeks: 60% vs 26.4%; P=0.005; <32 weeks: 53.3% vs 12.7%; P<0.001), small for gestational age babies (28.6% vs 8.1%; P<0.001), need for Neonatal Intensive Care Unit (60% vs 12.7%; P<0.001), weight discordance between twins (40% vs 17.8%; P=0.032), and neonatal and perinatal mortality (6.6% vs 0.8%; P=0.032). Conclusion: This study suggests that maternal-fetal risks are increased in multiple pregnancies in the early CKD stages. PMID:23124785
Hopping mixed hybrid excitations in multiple composite quantum wire structures
International Nuclear Information System (INIS)
Nguyen Ba An; Tran Thai Hoa
1995-10-01
A structure consisting of N pairs of inorganic semiconductor and organic quantum wires is considered theoretically. In such an isolated pair of wires, while the intrawire coupling forms Wannier-Mott exciton in an inorganic semiconductor quantum wire and Frenkel exciton in an organic one, the interwire coupling gives rise to hybrid excitons residing within the pair. When N pairs of wires are packed together 2N new mixed hybrid modes appear that are the true elementary excitations and can hop throughout the whole structure. Energies and wave functions of such hopping mixed hybrid excitations are derived analytically in detail accounting for the global interwire coupling and the different polarization configurations. (author). 19 refs
A Lagrangian mixing frequency model for transported PDF modeling
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
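The IEM model that the new mixing-frequency model is paired with has a particularly simple particle-level form: each particle's composition relaxes toward the ensemble mean at a rate set by the mixing frequency. A minimal sketch follows; the paper's actual contribution, computing the frequency omega from Lagrangian dissipation statistics, is not reproduced here, so omega is taken as a given constant.

```python
import statistics

def iem_step(phis, omega, dt, c_phi=2.0):
    """One explicit IEM (interaction by exchange with the mean) step:
    dphi/dt = -0.5 * C_phi * omega * (phi - <phi>)."""
    mean = statistics.mean(phis)
    decay = 0.5 * c_phi * omega * dt
    return [phi - decay * (phi - mean) for phi in phis]

particles = [0.0, 0.2, 0.8, 1.0]   # particle compositions (e.g. mixture fraction)
for _ in range(50):
    particles = iem_step(particles, omega=1.0, dt=0.1)
# The ensemble mean is conserved while the scatter about it decays.
```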
Scotogenic model for co-bimaximal mixing
Energy Technology Data Exchange (ETDEWEB)
Ferreira, P.M. [Instituto Superior de Engenharia de Lisboa - ISEL,1959-007 Lisboa (Portugal); Centro de Física Teórica e Computacional - FCUL, Universidade de Lisboa,R. Ernesto de Vasconcelos, 1749-016 Lisboa (Portugal); Grimus, W. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Wien (Austria); Jurčiukonis, D. [Institute of Theoretical Physics and Astronomy, Vilnius University,Saulėtekio ave. 3, LT-10222 Vilnius (Lithuania); Lavoura, L. [CFTP, Instituto Superior Técnico, Universidade de Lisboa,1049-001 Lisboa (Portugal)
2016-07-04
We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ₂₃ = 45° and a CP-violating phase δ = ±π/2, while the mixing angle θ₁₃ remains arbitrary. The symmetries consist of softly broken lepton numbers L_α (α = e, μ, τ), a non-standard CP symmetry, and three ℤ₂ symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.
Relating masses and mixing angles. A model-independent model
Energy Technology Data Exchange (ETDEWEB)
Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)
2016-07-01
In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.
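A well-known instance of the mass-ratio idea described above is the Gatto-Sartori-Tonin-type relation sin θ_C ≈ √(m_d/m_s) for the Cabibbo angle. The sketch below checks it against rough current-quark masses; the input values are approximate PDG-style numbers used only for illustration, not parameters of the cited model.

```python
import math

def sin_cabibbo_from_masses(m_d_mev, m_s_mev):
    """Gatto-Sartori-Tonin-type relation: sin(theta_C) ~ sqrt(m_d / m_s)."""
    return math.sqrt(m_d_mev / m_s_mev)

# Approximate down- and strange-quark masses in MeV (illustrative inputs).
print(round(sin_cabibbo_from_masses(4.7, 93.0), 3))  # close to the measured ~0.225
```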
A mixed model framework for teratology studies.
Braeken, Johan; Tuerlinckx, Francis
2009-10-01
A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
Mixed Methods: Incorporating multiple learning styles into a measurements course
Pallone, Arthur
2010-03-01
The best scientists and engineers regularly combine creative and critical skill sets. As faculty, we are responsible to provide future scientists and engineers with those skills sets. EGR 390: Engineering Measurements at Murray State University is structured to actively engage students in the processes that develop and enhance those skills. Students learn through a mix of traditional lecture and homework, active discussion of open-ended questions, small group activities, structured laboratory exercises, oral and written communications exercises, student chosen team projects, and peer evaluations. Examples of each of these activities, the skill set addressed by each activity, outcomes from and effectiveness of each activity and recommendations for future directions in the EGR 390 course as designed will be presented.
Discrete Symmetries and Models of Flavour Mixing
International Nuclear Information System (INIS)
King, Stephen F
2015-01-01
In this talk we shall give an overview of the role of discrete symmetries, including both CP and family symmetry, in constructing unified models of quark and lepton (including especially neutrino) masses and mixing. Various different approaches to model building will be described, denoted as direct, semi-direct and indirect, and the pros and cons of each approach discussed. Particular examples based on Δ(6n²) will be discussed and an A to Z of Flavour with Pati-Salam will be presented.
Models of neutrino mass and mixing
International Nuclear Information System (INIS)
Ma, Ernest
2000-01-01
There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_αi ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_αi ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R-parity nonconservation in supersymmetry.
BDA special care case mix model.
Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L
2010-04-10
Routine dental care provided in special care dentistry is complicated by patient specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple to use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.
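The case-mix assessment described above reduces to rating six criteria and combining them. A minimal sketch follows; the criteria names come from the abstract, but the BDA tool's actual point values and weighting are not given there, so the 0-3 scale and the plain sum are assumptions.

```python
CRITERIA = (
    "ability to communicate",
    "ability to cooperate",
    "medical status",
    "oral risk factors",
    "access to oral care",
    "legal and ethical barriers to care",
)

def case_mix_score(ratings):
    """Total case-mix complexity for one episode of care: each of the six
    criteria is rated on a four-point scale (assumed 0-3 here; the real BDA
    tool's point values are not given in the abstract)."""
    for name in CRITERIA:
        if not 0 <= ratings[name] <= 3:
            raise ValueError(f"rating out of range for {name!r}")
    return sum(ratings[name] for name in CRITERIA)

episode = {name: 0 for name in CRITERIA}
episode["ability to cooperate"] = 3
episode["medical status"] = 2
print(case_mix_score(episode))  # 5
```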
Model Selection with the Linear Mixed Model for Longitudinal Data
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
A mixed methods study of multiple health behaviors among individuals with stroke.
Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene
2017-01-01
Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data was prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = −0.41), sleep disturbances and physical...
Simplified models of mixed dark matter
International Nuclear Information System (INIS)
Cheung, Clifford; Sanford, David
2014-01-01
We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.
J. Lu; F. M. Bowman
2010-01-01
A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS) representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict th...
Multiple Indicator Stationary Time Series Models.
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Additive action model for mixed irradiation
International Nuclear Information System (INIS)
Lam, G.K.Y.
1984-01-01
Recent experimental results indicate that a mixture of high- and low-LET radiation may have beneficial features for clinical use (such as a lower OER combined with skin sparing), and interest has been renewed in the study of mixtures of high- and low-LET radiation. Several standard radiation inactivation models can readily accommodate interaction between two mixed radiations; however, this is usually handled by postulating extra free parameters, which can only be determined by fitting to experimental data. A model without any free parameters is proposed to explain the biological effect of mixed radiations, based on the following two assumptions: (a) the combined biological action due to two radiations is additive, assuming no repair has taken place during the interval between the two irradiations; and (b) the initial physical damage induced by radiation develops into the final biological effect (e.g. cell killing) over a relatively long period (hours) after irradiation. This model has been shown to provide a satisfactory fit to the experimental results of previous studies.
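Assumption (a), additivity with no free interaction parameter, can be sketched directly. The linear-quadratic form and the parameter values below are illustrative stand-ins (the abstract does not specify the damage function), but the structural point is the one the model makes: the mixed-field prediction is fully determined by the single-field responses.

```python
def initial_damage(dose, alpha, beta):
    """Linear-quadratic initial physical damage for one radiation type
    (alpha, beta are illustrative radiosensitivity parameters)."""
    return alpha * dose + beta * dose ** 2

def mixed_effect(d_high, d_low, p_high=(1.2, 0.0), p_low=(0.3, 0.03)):
    """Additive-action prediction for a mixed high-/low-LET exposure: per
    assumption (a), the initial damages simply add, with no free interaction
    parameter and no repair between the two irradiations."""
    return initial_damage(d_high, *p_high) + initial_damage(d_low, *p_low)
```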
Mixing parametrizations for ocean climate modelling
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model leads to a realistic simulation of density and circulation but with violations of the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.
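The appeal of the generation-dissipation stage is that, once split off, it can be advanced exactly. A minimal sketch with constant production P and linear dissipation c·k follows; the actual INMOM scheme's coefficients and coupling are not reproduced, so both functions and their parameter values are illustrative.

```python
import math

def gen_diss_analytic(k0, production, diss_rate, dt):
    """Generation-dissipation stage with linear dissipation, solved exactly:
    dk/dt = P - c*k  =>  k(dt) = P/c + (k0 - P/c) * exp(-c*dt)."""
    k_eq = production / diss_rate
    return k_eq + (k0 - k_eq) * math.exp(-diss_rate * dt)

def gen_diss_asymptotic(k0, production, diss_rate, dt):
    """Cheap asymptotic variant: jump straight to the equilibrium value P/c,
    valid when c*dt >> 1 (the faster, less accurate option in the abstract)."""
    return production / diss_rate
```

The transport-diffusion stage would then be handled by the circulation model's usual advection-diffusion machinery, completing one split step.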
Modeling Multiple Causes of Carcinogenesis
Energy Technology Data Exchange (ETDEWEB)
Jones, T D
1999-01-24
An array of epidemiological results and databases on test animals indicates that the risk of cancer and atherosclerosis can be up- or down-regulated by diet through a range of 200%. Other factors contribute incrementally and include the natural terrestrial environment and various human activities that jointly produce complex exposures to endotoxin-producing microorganisms, ionizing radiations, and chemicals. Ordinary personal habits and simple physical irritants have been demonstrated to affect the immune response and risk of disease. There tends to be poor statistical correlation of long-term risk with single-agent exposures incurred throughout working careers. However, agency recommendations for control of hazardous exposures to humans have been substance-specific instead of contextually realistic, even though there is consistent evidence for common mechanisms of toxicological and carcinogenic action. That behavior seems to be best explained by molecular stresses from cellular oxygen metabolism and phagocytosis of antigenic invasion, as well as breakdown of normal metabolic compounds associated with homeostatic- and injury-related renewal of cells. There is continually mounting evidence that marrow stroma, comprised largely of monocyte-macrophages and fibroblasts, is important to the phagocytic and cytokinetic response, but the complex action of the immune process is difficult to infer from first-principle logic or biomarkers of toxic injury. The many diverse database studies all seem to implicate two important processes, i.e., the univalent reduction of molecular oxygen and the breakdown of arginine, an amino acid, by hydrolysis or digestion of protein attendant to normal antigen-antibody action. This behavior indicates that protection guidelines and risk coefficients should be context dependent, including reference considerations of the composite action of the parameters that mediate oxygen metabolism. A logic of this type permits realistic common-scale modeling of ...
Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling
International Nuclear Information System (INIS)
Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.
2009-01-01
The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.
Mixed models in cerebral ischemia study
Directory of Open Access Journals (Sweden)
Matheus Henrique Dal Molin Ribeiro
2016-06-01
The modeling of data from longitudinal studies stands out in the current scientific scenario, especially in the areas of health and biological sciences, where repeated measurements on the same observed unit are correlated. Thus, modeling the intra-individual dependency is required, through the choice of a covariance structure able to accommodate the sample variability. The lack of an appropriate methodology for correlated data analysis may result in an increased occurrence of type I or type II errors and in underestimated/overestimated standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable, latency, in an experiment investigating memory deficits in animals subjected to cerebral ischemia when treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, since the basic assumptions were met and the results obtained corroborate the biological evidence; that is, the FO treatment was effective in alleviating the cognitive effects caused by cerebral ischemia.
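The autoregressive error structure selected in this study has a simple explicit form: for n repeated measures, the AR(1) covariance entry is σ²ρ^|i−j|, so correlation decays with the time lag between measurements. A minimal construction sketch:

```python
def ar1_covariance(n, rho, sigma2=1.0):
    """AR(1) covariance matrix for n repeated measures on the same unit:
    cov[i][j] = sigma2 * rho**|i - j| (correlation decays with time lag)."""
    return [[sigma2 * rho ** abs(i - j) for j in range(n)] for i in range(n)]

cov = ar1_covariance(4, 0.6)   # rho = 0.6 is an illustrative value
```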
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
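The downward bias described above can be demonstrated with a few lines of simulation: give neurons from the same animal a shared random effect, then compare the naive standard error (all neurons treated as independent) with one computed from animal-level means, a crude stand-in for what a mixed model estimates. All numbers here are synthetic.

```python
import math
import random
import statistics

random.seed(7)

# Ten animals, eight neurons each: the shared animal effect induces
# intra-class correlation among neurons from the same animal.
data, animal_means = [], []
for _ in range(10):
    animal_effect = random.gauss(0.0, 1.0)
    neurons = [animal_effect + random.gauss(0.0, 0.5) for _ in range(8)]
    data.extend(neurons)
    animal_means.append(statistics.mean(neurons))

naive_se = statistics.stdev(data) / math.sqrt(len(data))
cluster_se = statistics.stdev(animal_means) / math.sqrt(len(animal_means))
print(naive_se, cluster_se)   # the naive SE is biased downward
```

With intra-class correlation this strong, the naive SE understates the true uncertainty several-fold, which is exactly the mechanism behind the erroneous rejections reported in the abstract.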
Numerical modeling of two-phase binary fluid mixing using mixed finite elements
Sun, Shuyu
2012-07-27
Diffusion coefficients of dense gases in liquids can be measured by considering two-phase binary nonequilibrium fluid mixing in a closed cell with a fixed volume. This process is based on convection and diffusion in each phase. Numerical simulation of the mixing often requires accurate algorithms. In this paper, we design two efficient numerical methods for simulating the mixing of two-phase binary fluids in one-dimensional, highly permeable media. A mathematical model for isothermal compositional two-phase flow in porous media is established based on Darcy's law, material balance, local thermodynamic equilibrium for the phases, and diffusion across the phases. The time-lag and operator-splitting techniques are used to decompose each convection-diffusion equation into two steps: a diffusion step and a convection step. The mixed finite element (MFE) method is used for the diffusion equation because it achieves a high-order and stable approximation of both the scalar variable and the diffusive fluxes across grid-cell interfaces. We employ the characteristic finite element method with a moving mesh to track the liquid-gas interface. Based on the above schemes, we propose two methods: a single-domain and a two-domain method. The main difference between the two methods is that the two-domain method utilizes the assumption of a sharp interface between the two fluid phases, while the single-domain method allows a fractional saturation level. The two-domain method treats the gas domain and the liquid domain separately; because the liquid-gas interface moves with time, it needs to work with a moving mesh. On the other hand, the single-domain method allows the use of a fixed mesh. We derive the formulas to compute the diffusive flux for the MFE in both methods. The single-domain method is extended to multiple dimensions. Numerical results indicate that both methods can accurately describe the evolution of the pressure and liquid level. © 2012 Springer Science+Business Media B.V.
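The operator-splitting idea, advancing diffusion and convection in separate sub-steps, can be illustrated on a toy 1-D problem. The sketch below uses simple explicit finite differences on a periodic grid, not the paper's MFE/characteristic/moving-mesh machinery, so it only demonstrates the splitting structure and its mass conservation.

```python
def split_step(u, d_coef, vel, dx, dt):
    """One operator-split step of u_t + v*u_x = D*u_xx on a periodic 1-D grid:
    an explicit centered diffusion sub-step, then an upwind convection
    sub-step (assumes vel >= 0 and stable step sizes)."""
    n = len(u)
    r = d_coef * dt / dx ** 2          # diffusion number, needs r <= 0.5
    w = [u[i] + r * (u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n])
         for i in range(n)]
    c = vel * dt / dx                  # Courant number, needs c <= 1
    return [w[i] - c * (w[i] - w[(i - 1) % n]) for i in range(n)]

u = [0.0] * 20
u[5] = 1.0                             # initial concentration spike
for _ in range(30):
    u = split_step(u, d_coef=0.1, vel=0.5, dx=1.0, dt=0.5)
```

Both sub-steps telescope on the periodic grid, so the total amount of species is conserved while the spike spreads and advects.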
Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.
Ratan Murty, N Apurva; Arun, S P
2018-04-03
Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
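Why multiplicative combination permits interference-free decoding can be shown with toy numbers: if the response is f(identity)·g(attribute), the ratio of responses across two attribute values is the same for every identity, so the attribute can be read out without knowing the identity; an additive model lacks this invariance. The tuning values below are hypothetical, not recorded data.

```python
f = {"face": 2.0, "shoe": 0.5}     # hypothetical identity tuning
g = {"small": 1.0, "large": 3.0}   # hypothetical attribute (size) tuning

def r_mult(identity, size):
    """Multiplicative combination: response = f(identity) * g(size)."""
    return f[identity] * g[size]

def r_add(identity, size):
    """Additive combination, for contrast."""
    return f[identity] + g[size]

ratios_mult = [r_mult(i, "large") / r_mult(i, "small") for i in f]
ratios_add = [r_add(i, "large") / r_add(i, "small") for i in f]
print(ratios_mult)  # identical entries: size is decodable without identity
print(ratios_add)   # identity-dependent entries: signals interfere
```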
Nonlinear spectral mixing theory to model multispectral signatures
Energy Technology Data Exchange (ETDEWEB)
Borel, C.C. [Los Alamos National Lab., NM (United States). Astrophysics and Radiation Measurements Group]
1996-02-01
Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy conserving computational method used in thermal engineering and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape) such as leaves and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance was modeled using the PROSPECT model for various amounts of water, chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the 6 visible through short wave IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and is therefore extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program was assessed against other general solving programs via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software: programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively, and the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Our findings support the use of preconditioned conjugate gradient based methods for solving large breeding value problems.
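For context, a minimal preconditioned conjugate gradient iteration of the kind such solvers build on might look as follows. This is a generic NumPy sketch with a diagonal (Jacobi) preconditioner, not the paper's three-step iteration-on-data implementation; the test matrix is arbitrary.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient for A x = b with a diagonal preconditioner M:
    z = M^{-1} r is applied elementwise via M_inv_diag."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update search direction
        rz = rz_new
    return x

rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # symmetric positive definite, as in mixed model equations
b = rng.standard_normal(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))     # residual is tiny
```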
Multiplicity Control in Structural Equation Modeling
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
Modeling molecular mixing in a spatially inhomogeneous turbulent flow
Meyer, Daniel W.; Deb, Rajdeep
2012-02-01
Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
Lagrangian mixed layer modeling of the western equatorial Pacific
Shinoda, Toshiaki; Lukas, Roger
1995-01-01
Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model, and the wind stress data which forced the 1 1/2 layer model are used for the mixed layer model as well. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
Modeling tides and vertical tidal mixing: A reality check
International Nuclear Information System (INIS)
Robertson, Robin
2010-01-01
Recently, there has been a great interest in the tidal contribution to vertical mixing in the ocean. In models, vertical mixing is estimated using parameterization of the sub-grid scale processes. Estimates of the vertical mixing varied widely depending on which vertical mixing parameterization was used. This study investigated the performance of ten different vertical mixing parameterizations in a terrain-following ocean model when simulating internal tides. The vertical mixing parameterization was found to have minor effects on the velocity fields at the tidal frequencies, but large effects on the estimates of vertical diffusivity of temperature. Although there was no definitive best performer for the vertical mixing parameterization, several parameterizations were eliminated based on comparison of the vertical diffusivity estimates with observations. The best performers were the new generic coefficients for the generic length scale schemes and Mellor-Yamada's 2.5 level closure scheme.
Behavior of mixed-oxide fuel subjected to multiple thermal transients
International Nuclear Information System (INIS)
Fenske, G.R.; Neimark, L.A.; Poeppel, R.B.; Hofman, G.L.
1985-01-01
The microstructural behavior of irradiated mixed-oxide fuel subjected to multiple, mild thermal transients was investigated using direct electrical heating. The results demonstrate that significant intergranular porosity, accompanied by large-scale (>90%) release of the retained fission gas, developed as a result of the cyclic heating. Microstructural examination of the fuel indicated that thermal-shock-induced cracking of the fuel contributed significantly to the increased swelling and gas release. 29 refs., 12 figs
Behavior of mixed-oxide fuel subjected to multiple thermal transients
International Nuclear Information System (INIS)
Fenske, G.R.; Hofman, G.L.; Neimark, L.A.; Poeppel, R.B.
1983-11-01
The microstructural behavior of irradiated mixed-oxide fuel subjected to multiple, mild thermal transients was investigated using direct electrical heating. The results demonstrate that significant intergranular porosity, accompanied by large-scale (>90%) release of the retained fission gas, developed as a result of the cyclic heating. Microstructural examination of the fuel indicated that thermal-shock-induced cracking of the fuel contributed significantly to the increased swelling and gas release
Modeling Dynamic Effects of the Marketing Mix on Market Shares
D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)
2003-01-01
To comprehend the competitive structure of a market, it is important to understand the short-run and long-run effects of the marketing mix on market shares. A useful model to link market shares with marketing-mix variables, like price and promotion, is the market share attraction model.
Molecular Thermodynamic Modeling of Mixed Solvent Solubility
DEFF Research Database (Denmark)
Ellegaard, Martin Dela; Abildskov, Jens; O’Connell, John P.
2010-01-01
A method based on statistical mechanical fluctuation solution theory for composition derivatives of activity coefficients is employed for estimating dilute solubilities of 11 solid pharmaceutical solutes in nearly 70 mixed aqueous and nonaqueous solvent systems. The solvent mixtures range from nearly ideal to strongly nonideal. The database covers a temperature range from 293 to 323 K. Comparisons with available data and other existing solubility methods show that the method successfully describes a variety of observed mixed solvent solubility behaviors using solute-solvent parameters from…
Design of Xen Hybrid Multiple Policy Model
Sun, Lei; Lin, Renhao; Zhu, Xianwei
2017-10-01
Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used increasingly often, and XSM, the Xen security model, has likewise received widespread attention. XSM does not establish a security-status classification, and it treats the virtual machine as the managed object, making Dom0 a single administrative domain that violates the principle of least privilege. To address these problems, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates several single-policy security models, including DTE, RBAC and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different policy granularity to different domains. To improve BLP's practicality, the model introduces multi-level security labels; to divide privileges in detail, we combine DTE with RBAC; and to avoid over-privilege, we limit the privileges of Dom0.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
…on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating …, and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning…
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS are increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions are made for future software development.
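In conventional notation (the symbols here are the standard ones for GWAS mixed models, not taken verbatim from the paper), the model underlying such analyses is:

```latex
y = X\beta + Zu + e, \qquad
u \sim \mathcal{N}\!\left(0,\ \sigma_g^{2} K\right), \qquad
e \sim \mathcal{N}\!\left(0,\ \sigma_e^{2} I\right),
```

where $y$ holds the phenotypes, $X\beta$ the fixed effects (including the tested marker), $K$ the kinship matrix capturing relatedness, and the variance components $\sigma_g^{2}$, $\sigma_e^{2}$ are typically estimated by REML. The computational burden the abstract refers to comes largely from repeatedly solving these equations for large $K$.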
Challenges in LCA modelling of multiple loops for aluminium cans
DEFF Research Database (Denmark)
Niero, Monia; Olsen, Stig Irving
considered the case of closed-loop recycling for aluminium cans, where body and lid are different alloys, and discussed the abovementioned challenge. The Life Cycle Inventory (LCI) modelling of aluminium processes is traditionally based on a pure aluminium flow, therefore neglecting the presence of alloying...... elements. We included the effect of alloying elements on the LCA modelling of aluminium can recycling. First, we performed a mass balance of the main alloying elements (Mn, Fe, Si, Cu) in aluminium can recycling at increasing levels of recycling rate. The analysis distinguished between different aluminium...... packaging scrap sources (i.e. used beverage can and mixed aluminium packaging) to understand the limiting factors for multiple loop aluminium can recycling. Secondly, we performed a comparative LCA of aluminium can production and recycling in multiple loops considering the two aluminium packaging scrap...
Bilinear Mixed Effects Models for Dyadic Data
National Research Council Canada - National Science Library
Hoff, Peter D
2003-01-01
This article discusses the use of a symmetric multiplicative interaction effect to capture certain types of third-order dependence patterns often present in social networks and other dyadic datasets...
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control of false positives and the sensitivity…
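The ICC mentioned above can be illustrated with the simplest random-intercept case, where a one-way random-effects ANOVA estimator suffices. This is a toy simulation with made-up variance components, a special case of what a full LME with crossed random effects provides:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_rep = 40, 6
sigma_b, sigma_w = 1.0, 0.5          # hypothetical between/within-subject SDs

# Random-intercept model: y_ij = mu + b_i + e_ij
b = rng.normal(0, sigma_b, n_subj)
y = 10 + b[:, None] + rng.normal(0, sigma_w, (n_subj, n_rep))

# Method-of-moments (one-way ANOVA) estimator of the variance components:
# E[MSB] = sigma_w^2 + n_rep * sigma_b^2, E[MSW] = sigma_w^2.
grand = y.mean()
msb = n_rep * ((y.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_rep - 1))
var_b = (msb - msw) / n_rep

icc = var_b / (var_b + msw)
print(round(icc, 2))  # near the true ICC 1.0**2 / (1.0**2 + 0.5**2) = 0.8
```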
Analysis and modeling of subgrid scalar mixing using numerical data
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence is used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.
Multiple model cardinalized probability hypothesis density filter
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Mixing of the Glauber dynamics for the ferromagnetic Potts model
Bordewich, Magnus; Greenhill, Catherine; Patel, Viresh
2013-01-01
We present several results on the mixing time of the Glauber dynamics for sampling from the Gibbs distribution in the ferromagnetic Potts model. At a fixed temperature and interaction strength, we study the interplay between the maximum degree ($\Delta$) of the underlying graph and the number of colours or spins ($q$) in determining whether the dynamics mixes rapidly or not. We find a lower bound $L$ on the number of colours such that Glauber dynamics is rapidly mixing if at least $L$ colours…
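A single heat-bath Glauber update for the ferromagnetic $q$-state Potts model picks a vertex and resamples its colour from the conditional Gibbs distribution given its neighbours. A minimal sketch, with the graph, temperature and step count chosen arbitrarily for illustration:

```python
import numpy as np

def glauber_step(spins, adj, q, beta, rng):
    """One heat-bath Glauber update: resample a random vertex's colour
    with probability proportional to exp(beta * #agreeing neighbours)."""
    v = int(rng.integers(len(spins)))
    agree = np.array([sum(spins[u] == c for u in adj[v]) for c in range(q)])
    w = np.exp(beta * agree)
    spins[v] = rng.choice(q, p=w / w.sum())
    return spins

# Cycle graph on n vertices (maximum degree 2), q = 3 colours.
n, q, beta = 20, 3, 0.5
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rng = np.random.default_rng(3)
spins = rng.integers(q, size=n)
for _ in range(5000):
    glauber_step(spins, adj, q, beta, rng)

print(spins.min() >= 0 and spins.max() < q)  # -> True
```

Running the chain long enough (past its mixing time) makes the state an approximate sample from the Gibbs distribution, which is exactly the quantity the abstract's bounds control.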
Applied model for the growth of the daytime mixed layer
DEFF Research Database (Denmark)
Batchvarova, E.; Gryning, Sven-Erik
1991-01-01
…numerically. When the mixed layer is shallow or the atmosphere nearly neutrally stratified, the growth is controlled mainly by mechanical turbulence. When the layer is deep, its growth is controlled mainly by convective turbulence. The model is applied to a data set of the evolution of the height of the mixed layer in the morning hours, when both mechanical and convective turbulence contribute to the growth process. Realistic mixed-layer developments are obtained.
Predictive performance models and multiple task performance
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
A mixed integer program to model spatial wildfire behavior and suppression placement decisions
Erin J. Belval; Yu Wei; Michael. Bevers
2015-01-01
Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...
Research on mixed network architecture collaborative application model
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server model) does not support this well. Collaborative applications are one good resolution, and they must address four main problems: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward based on agents and multi-level caches. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields, such as computer science and automation, and they bring new methods for cooperation and for accessing spatial data. A multi-level cache holds part of the full data set; it reduces the network load and improves the access and handling of spatial data, especially when editing. With agent technology, we make full use of the agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
Computer modeling of jet mixing in INEL waste tanks
International Nuclear Information System (INIS)
Meyer, P.A.
1994-01-01
The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review the erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine the overall feasibility of using jet mixing pumps and make design recommendations.
Development of a Medicaid Behavioral Health Case-Mix Model
Robst, John
2009-01-01
Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…
Mixed integer linear programming model for dynamic supplier selection problem considering discounts
Directory of Open Access Journals (Sweden)
Adi Wicaksono Purnawan
2018-01-01
Supplier selection is one of the most important elements of supply chain management. It involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time, so the supplier identified for one period is not necessarily the same supplier for the next period for the same product. A mixed integer linear programming (MILP) model was therefore developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved using Lingo 16.
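A tiny instance of the lot-sizing-with-discounts decision described above can be solved by brute force, which makes the structure of the MILP's objective visible. All numbers here are hypothetical; a real instance of this size and structure would be stated as a MILP and handed to a solver such as Lingo.

```python
from itertools import product

# Toy instance: 2 suppliers, 2 periods, demand of 100 units per period.
demand = [100, 100]
base_price = {"S1": 5.0, "S2": 5.5}

def unit_cost(supplier, qty):
    """All-units quantity discount: 10% off orders of 100+ units from S2."""
    p = base_price[supplier]
    if supplier == "S2" and qty >= 100:
        p *= 0.9
    return p * qty

fixed_order_cost = 50.0  # incurred whenever a supplier is used in a period

# Enumerate the supplier choice for each period and minimize total cost
# (purchase cost plus fixed ordering cost).
best = min(
    (sum(unit_cost(s, d) + fixed_order_cost for s, d in zip(plan, demand)), plan)
    for plan in product(["S1", "S2"], repeat=len(demand))
)
print(best)  # S2's discounted 4.95/unit beats S1's 5.00/unit in both periods
```

The discount makes the per-unit cost depend on the order quantity, which is exactly what forces the integer (supplier-selection) variables into the formulation.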
Effects of the ρ - ω mixing interaction in relativistic models
International Nuclear Information System (INIS)
Menezes, D.P.; Providencia, C.
2003-01-01
The effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei are investigated with the non-linear Walecka model in a Thomas-Fermi approximation. For infinite nuclear matter, the influence of the mixing term on the binding energy calculated with the NL3 and TM1 parametrizations can be neglected; its influence on the symmetry energy is only felt for TM1 with an unrealistically large value of the mixing-term strength. For finite nuclei, the contribution of the isospin mixing term is very large compared with the value expected to solve the Nolen-Schiffer anomaly.
International Nuclear Information System (INIS)
Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.
2007-01-01
The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data.
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
Perturbative estimates of lepton mixing angles in unified models
International Nuclear Information System (INIS)
Antusch, Stefan; King, Stephen F.; Malinsky, Michal
2009-01-01
Many unified models predict two large neutrino mixing angles, with the charged lepton mixing angles being small and quark-like, and the neutrino masses being hierarchical. Assuming this, we present simple approximate analytic formulae giving the lepton mixing angles in terms of the underlying high energy neutrino mixing angles together with small perturbations due to both charged lepton corrections and renormalisation group (RG) effects, including also the effects of third family canonical normalization (CN). We apply the perturbative formulae to the ubiquitous case of tri-bimaximal neutrino mixing at the unification scale, in order to predict the theoretical corrections to mixing angle predictions and sum rule relations, and give a general discussion of all limiting cases. We also discuss the implications for the sum rule relations of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.
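The tri-bimaximal pattern taken as the starting point above corresponds, up to sign and phase conventions, to the lepton mixing matrix

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
\sin^2\theta_{12} = \tfrac{1}{3},\quad
\sin^2\theta_{23} = \tfrac{1}{2},\quad
\theta_{13} = 0 .
```

The paper's perturbative formulae describe how charged lepton corrections and RG running shift these leading-order values, which is why a measured non-zero reactor angle $\theta_{13}$ constrains the size of those corrections.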
Mixing Paradigms for More Comprehensible Models
DEFF Research Database (Denmark)
Westergaard, Michael; Slaats, Tijs
2013-01-01
Petri nets efficiently model both data- and control-flow. Control-flow is either modeled explicitly, as the flow of a specific kind of data, or implicitly, based on the data-flow. Explicit modeling of control-flow is useful for well-known and highly structured processes, but may make modeling of abstract…
Markov and mixed models with applications
DEFF Research Database (Denmark)
Mortensen, Stig Bousgaard
This thesis deals with mathematical and statistical models, with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry, and continued research in statistical methodology within … or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system, and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug form a broader area, since drugs may affect … for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically, the Markov models considered for sleep states are closely related to the PK models based on SDEs, as both share the Markov property. When the models…
Directory of Open Access Journals (Sweden)
Willem Odendaal
2016-12-01
Background Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the ‘black box’ of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed-methods within formative evaluations of complex health system interventions. Methods The evaluation’s qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers’ scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results. Results Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single methods evaluations. The strengths of the multiple, mixed-methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single method evaluations. However
Application of mixed models for the assessment genotype and ...
African Journals Online (AJOL)
SAM
2014-05-07
May 7, 2014 … cused mainly on the yield of cottonseed and fiber, with the CA 324 and … Gaps and opportunities for agricultural sector development in … Adaptability and stability of maize varieties using mixed models. Crop Breeding and …
Surface wind mixing in the Regional Ocean Modeling System (ROMS)
Robertson, Robin; Hartlipp, Paul
2017-12-01
Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating the surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for comparisons. The vertical mixing parameterizations investigated were the Mellor-Yamada level 2.5 turbulence closure (MY), Large-McWilliams-Doney KPP (LMD), Nakanishi-Niino (NN), and the generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performance was by NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as that of NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations of the vertical mixing parameterizations.
Abd El-Malek, Ahmed H.; Salhab, Anas M.; Zummo, Salam A.; Alouini, Mohamed-Slim
2016-01-01
In this paper, we investigate the impact of different diversity combining techniques on the security and reliability analysis of a single-input-multiple-output (SIMO) mixed radio frequency (RF)/free space optical (FSO) relay network with opportunistic multiuser scheduling. In this model, the user with the best channel among multiple users communicates with a multiple-antenna relay node over an RF link, and the relay node then employs the amplify-and-forward (AF) protocol in retransmitting the user data to the destination over an FSO link. Moreover, the authorized transmission is assumed to be attacked by a single passive RF eavesdropper equipped with multiple antennas. Therefore, the system security-reliability trade-off analysis is investigated. Closed-form expressions for the system outage probability and the system intercept probability are derived. Then, the newly derived expressions are simplified to their asymptotic formulas in the high signal-to-noise ratio (SNR) region. Numerical results are presented to validate the achieved exact and asymptotic results and to illustrate the impact of various system parameters on the system performance. © 2016 IEEE.
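The closed-form outage expressions above are specific to the mixed RF/FSO channel model; as a generic, hedged illustration of how such outage probabilities are sanity-checked by simulation, here is a Monte Carlo sketch for a plain L-branch maximal-ratio-combining receiver in Rayleigh fading (all parameters are illustrative, not from the paper):

```python
import random

def mrc_outage_prob(n_branches, snr_avg, snr_threshold, trials=200_000, seed=1):
    """Monte Carlo outage probability for an n-branch maximal-ratio
    combining (MRC) receiver in i.i.d. Rayleigh fading: the combined
    SNR is a sum of exponentially distributed per-branch SNRs."""
    rng = random.Random(seed)
    outages = sum(
        sum(rng.expovariate(1.0 / snr_avg) for _ in range(n_branches)) < snr_threshold
        for _ in range(trials)
    )
    return outages / trials

# Diversity gain: more receive branches -> far lower outage probability.
p1 = mrc_outage_prob(1, snr_avg=10.0, snr_threshold=5.0)
p4 = mrc_outage_prob(4, snr_avg=10.0, snr_threshold=5.0)
print(round(p1, 3), round(p4, 5))  # p1 near 1 - exp(-0.5); p4 orders of magnitude lower
```

For the single-branch case the exact answer is 1 - exp(-threshold/snr_avg), which the simulation reproduces; asymptotic (high-SNR) slopes are checked the same way.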
Multiplicity within Singularity: Racial Categorization and Recognizing “Mixed Race” in Singapore
Directory of Open Access Journals (Sweden)
Zarine L. Rocha
2011-01-01
“Race” and racial categories play a significant role in everyday life and state organization in Singapore. While multiplicity and diversity are important characteristics of Singaporean society, Singapore’s multiracial ideology is firmly based on separate, racialized groups, leaving little room for racial projects reflecting more complex identifications. This article explores national narratives of race, culture and belonging as they have developed over time, used as a tool for the state, and re-emerging in discourses of hybridity and “double-barrelled” racial identifications. Multiracialism, as a maintained structural feature of Singaporean society, is both challenged and reinforced by new understandings of hybridity and older conceptions of what it means to be “mixed race” in a (post-colonial society. Tracing the temporal thread of racial categorization through a lens of mixedness, this article places the Singaporean case within emerging work on hybridity and recognition of “mixed race”. It illustrates how state-led understandings of race and “mixed race” describe processes of both continuity and change, with far-reaching practical and ideological impacts.
Sensitivity of the urban airshed model to mixing height profiles
Energy Technology Data Exchange (ETDEWEB)
Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)
1994-12-31
The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to differences in the shape of the mixing height profiles and their rate of growth during the morning hours when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NOx-focused controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.
Modelling mixed forest growth : a review of models for forest management
Porte, A.; Bartelink, H.H.
2002-01-01
Most forests today are multi-specific and heterogeneous forests (`mixed forests'). However, forest modelling has been focusing on mono-specific stands for a long time, only recently have models been developed for mixed forests. Previous reviews of mixed forest modelling were restricted to certain
Application of Multiple Evaluation Models in Brazil
Directory of Open Access Journals (Sweden)
Rafael Victal Saliba
2008-07-01
Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple regression models of cross-section type which estimate the parameters associated to each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on the performance through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing errors reduction. Results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
A Comparison of Item Fit Statistics for Mixed IRT Models
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.
2010-01-01
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…
A new approach to model mixed hydrates
Czech Academy of Sciences Publication Activity Database
Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.
2018-01-01
Vol. 459, March (2018), pp. 170-185. ISSN 0378-3812. R&D Projects: GA ČR(CZ) GA17-08218S. Institutional support: RVO:61388998. Keywords: gas hydrate * mixture * modeling. Subject RIV: BJ - Thermodynamics. Impact factor: 2.473, year: 2016. https://www.sciencedirect.com/science/article/pii/S0378381217304983
Isothermal coarse mixing: experimental and CFD modelling
International Nuclear Information System (INIS)
Gilbertson, M.A.; Kenning, D.B.R.; Hall, R.W.
1992-01-01
A plane, two-dimensional flow apparatus has been built which uses a jet of solid 6 mm diameter balls to model a jet of molten drops falling into a tank of water, to study premixing prior to a vapour explosion. Preliminary experiments with unheated stainless steel balls are here compared with computational fluid dynamics (CFD) calculations by the code CHYMES. (6 figures)
Comparison between the SIMPLE and ENERGY mixing models
International Nuclear Information System (INIS)
Burns, K.J.; Todreas, N.E.
1980-07-01
The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews
Business models in commercial media markets: Bargaining, advertising, and mixing
Thöne, Miriam; Rasch, Alexander; Wenzel, Tobias
2016-01-01
We consider a product and a media market and show how a change in the business model employed by the media platforms affects consumers, producers (or advertisers), and price negotiations for advertisements. On both markets, two firms differentiated à la Hotelling compete for consumers. On the media market, consumers can mix between the two outlets, whereas on the product market, consumers have to decide for one supplier. With pay-TV, as opposed to free-to-air, mixing by consumers disappears, p...
Stochastic model of Rayleigh-Taylor turbulent mixing
International Nuclear Information System (INIS)
Abarzhi, S.I.; Cadjan, M.; Fedotov, S.
2007-01-01
We propose a stochastic model to describe the random character of the dissipation process in Rayleigh-Taylor turbulent mixing. The parameter alpha, conventionally used to characterize the mixing growth rate, is not a universal constant and is very sensitive to the statistical properties of the dissipation. The ratio between the rates of momentum loss and momentum gain is a statistical invariant and a robust diagnostic parameter, with or without turbulent diffusion accounted for.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
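The varying coefficients above are modeled with B-splines; as a generic sketch (not the authors' code), the Cox-de Boor recursion that evaluates such a basis:

```python
def bspline_basis(i, k, t, knots):
    """Evaluate the i-th B-spline basis function of order k (degree k-1)
    at point t via the Cox-de Boor recursion."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Uniform knots; inside the domain the cubic (order-4) basis functions
# form a partition of unity, which is what makes them good local
# building blocks for varying coefficients.
knots = [0, 1, 2, 3, 4, 5, 6, 7]
total = sum(bspline_basis(i, 4, 3.5, knots) for i in range(len(knots) - 4))
print(round(total, 10))  # 1.0
```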
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2006-01-01
Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
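As a software-agnostic companion to the packages listed, the simplest LMM, a balanced random-intercept model, can even be fit with one-way ANOVA method-of-moments estimators; the simulated data and estimator choice below are illustrative, not from the book:

```python
import random
import statistics

rng = random.Random(42)

# Simulate clustered data: y_ij = mu + b_i + e_ij,
# with random intercepts b_i ~ N(0, tau2) and errors e_ij ~ N(0, sigma2).
mu, tau2, sigma2 = 10.0, 4.0, 1.0
n_groups, n_per = 200, 10
data = []
for _ in range(n_groups):
    b = rng.gauss(0.0, tau2 ** 0.5)
    data.append([mu + b + rng.gauss(0.0, sigma2 ** 0.5) for _ in range(n_per)])

# Balanced one-way ANOVA method-of-moments estimators:
group_means = [statistics.fmean(g) for g in data]
msw = statistics.fmean(statistics.variance(g) for g in data)  # within-group MS
msb = n_per * statistics.variance(group_means)                # between-group MS
sigma2_hat = msw                       # residual variance estimate
tau2_hat = (msb - msw) / n_per         # random-intercept variance estimate
print(round(sigma2_hat, 2), round(tau2_hat, 2))  # close to (1.0, 4.0)
```

Full LMM software generalizes this to unbalanced data, multiple random effects, and REML, but the moment estimators make the variance-partitioning idea concrete.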
Multiple simultaneous event model for radiation carcinogenesis
International Nuclear Information System (INIS)
Baum, J.W.
1976-01-01
A mathematical model is proposed which postulates that cancer induction is a multi-event process, that these events occur naturally, usually one at a time in any cell, and that radiation frequently causes two of these events to occur simultaneously. Microdosimetric considerations dictate that for high LET radiations the simultaneous events are associated with a single particle or track. The model predicts: (a) linear dose-effect relations for early times after irradiation with small doses, (b) approximate power functions of dose (i.e., D^x) with exponent x less than one for populations of mixed age examined at short times after irradiation with small doses, (c) saturation of effect at either long times after irradiation with small doses or for all times after irradiation with large doses, and (d) a net increase in incidence which is dependent on age of observation but independent of age at irradiation. Data of Vogel, for neutron induced mammary tumors in rats, are used to illustrate the validity of the formulation. This model provides a quantitative framework to explain several unexpected results obtained by Vogel. It also provides a logical framework to explain the dose-effect relations observed in the Japanese survivors of the atomic bombs. (author)
Analytical characterization of high-level mixed wastes using multiple sample preparation treatments
International Nuclear Information System (INIS)
King, A.G.; Baldwin, D.L.; Urie, M.W.; McKinley, S.G.
1994-01-01
The Analytical Chemistry Laboratory at the Pacific Northwest Laboratory in Richland, Washington, is actively involved in performing analytical characterization of high-level mixed waste from Hanford's single shell and double shell tank characterization programs. A full suite of analyses is typically performed on homogenized tank core samples. These analytical techniques include inductively-coupled plasma-atomic emission spectroscopy, total organic carbon methods and radiochemistry methods, as well as many others, all requiring some type of remote sample-preparation treatment to solubilize the tank sludge material for analysis. Most of these analytical methods typically use a single sample-preparation treatment, inherently providing elemental information only. To better understand and interpret tank chemistry and assist in identifying chemical compounds, selected analytical methods are performed using multiple sample-preparation treatments. The sample preparation treatments used at Pacific Northwest Laboratory for this work with high-level mixed waste include caustic fusion, acid digestion, and water leach. The type of information available by comparing results from different sample-prep treatments includes evidence for the presence of refractory compounds, acid-soluble compounds, or water-soluble compounds. Problems unique to the analysis of Hanford tank wastes are discussed. Selected results from the Hanford single shell ferrocyanide tank, 241-C-109, are presented, and the resulting conclusions are discussed
Energy Technology Data Exchange (ETDEWEB)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.
2017-04-11
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
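A hedged 1-D caricature of the mixture-model idea above (a solids volume-fraction field transported by settling and particle diffusion): the grid, hindered-settling closure, and coefficients are invented for illustration and have nothing to do with the actual Nek5000 setup:

```python
# 1-D solids volume-fraction transport: d(phi)/dt = -dF/dz + D d2(phi)/dz2,
# with settling into a cell blocked as it nears a packing limit.
nz, dz, dt = 50, 0.02, 2e-4          # cells, cell size [m], time step [s]
w0, D, phi_max = 0.5, 1e-4, 0.6      # settling speed, diffusivity, packing limit
phi = [0.1] * nz                     # initially uniform 10% suspension

def step(phi):
    # Downward flux out of cell i into cell i-1 (illustrative closure):
    flux = [0.0] * nz
    for i in range(1, nz):
        flux[i] = w0 * phi[i] * max(0.0, 1.0 - phi[i - 1] / phi_max)
    new = phi[:]
    for i in range(nz):
        f_in = flux[i + 1] if i + 1 < nz else 0.0      # inflow from above
        lo = phi[i - 1] if i > 0 else phi[i]           # no-flux boundaries
        hi = phi[i + 1] if i + 1 < nz else phi[i]
        new[i] = phi[i] + dt * ((f_in - flux[i]) / dz
                                + D * (hi - 2.0 * phi[i] + lo) / dz ** 2)
    return new

for _ in range(15000):               # integrate to t = 3 s
    phi = step(phi)

# A settled bed forms: solids pile up at the bottom, the top clears.
print(round(phi[0], 2), round(phi[-1], 4))
```

The scheme conserves total solids exactly, which is the property a settled-bed-height comparison like the one in the abstract relies on.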
Hatzell, Marta C.; Hatzell, Kelsey B.; Logan, Bruce E.
2014-01-01
Efficient conversion of “mixing energy” to electricity through capacitive mixing (CapMix) has been limited by low energy recoveries, low power densities, and noncontinuous energy production resulting from intermittent charging and discharging cycles
Stochastic transport models for mixing in variable-density turbulence
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
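As a toy illustration of the constraints just mentioned (bounded sample space, unit-sum of mass fractions), here is the classical IEM particle mixing model; this is a standard stand-in chosen for brevity, not the VD-consistent model derived in the paper:

```python
import random

rng = random.Random(7)
tau = 1.0                     # mixing timescale (assumed)
dt, nsteps, npart = 1e-2, 400, 1000

# Notional fluid particles carrying two mass fractions that sum to one;
# bimodal start: each particle is pure material A or pure material B.
Ya = [1.0 if rng.random() < 0.5 else 0.0 for _ in range(npart)]
Yb = [1.0 - y for y in Ya]

for _ in range(nsteps):
    ma = sum(Ya) / npart
    mb = sum(Yb) / npart
    for i in range(npart):
        # IEM mixing: relax each particle toward the mean composition.
        # Bounds [0, 1] and the unit-sum constraint are preserved.
        Ya[i] += -(Ya[i] - ma) * dt / tau
        Yb[i] += -(Yb[i] - mb) * dt / tau

m = sum(Ya) / npart
var = sum((y - m) ** 2 for y in Ya) / npart   # decays roughly as exp(-2t/tau)
print(all(abs(a + b - 1.0) < 1e-9 for a, b in zip(Ya, Yb)), var < 1e-3)
```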
International Nuclear Information System (INIS)
Gholinezhad, Hadi; Zeinal Hamadani, Ali
2017-01-01
This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.
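A minimal GA sketch for a stripped-down RAP (active redundancy only, made-up reliabilities, costs, and budget; the paper's model with mixed active/cold-standby strategies and component mixing is considerably richer):

```python
import random

rng = random.Random(3)

# Toy series-parallel RAP: 3 subsystems, per-unit reliability r and cost c;
# choose redundancy levels x to maximize system reliability under a budget.
r = [0.80, 0.90, 0.85]
c = [2.0, 3.0, 2.5]
budget = 25.0

def fitness(x):
    cost = sum(ci * xi for ci, xi in zip(c, x))
    if cost > budget:
        return 0.0                        # infeasible: hard penalty
    rel = 1.0
    for rj, xj in zip(r, x):
        rel *= 1.0 - (1.0 - rj) ** xj     # active parallel redundancy
    return rel

pop = [[rng.randint(1, 5) for _ in range(3)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10]                        # elitist truncation selection
    while len(pop) < 30:
        a, b = rng.sample(pop[:10], 2)
        child = [rng.choice(p) for p in zip(a, b)]   # uniform crossover
        if rng.random() < 0.3:                       # mutation
            child[rng.randrange(3)] = rng.randint(1, 5)
        pop.append(child)

best = max(pop, key=fitness)
print(best, round(fitness(best), 4))
```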
The salinity effect in a mixed layer ocean model
Miller, J. R.
1976-01-01
A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.
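A back-of-envelope sketch of the salinity effect described: with a linear equation of state, the surface buoyancy supply splits into a heating term and a freshwater term, and tropical rain can dominate weak heating. Coefficients are typical textbook values, not the paper's:

```python
g = 9.81
rho0, cp = 1025.0, 3990.0   # reference density, seawater heat capacity
alpha = 2.0e-4              # thermal expansion coefficient [1/K]
beta = 7.6e-4               # haline contraction coefficient [kg/g]

def buoyancy_terms(Q_net, PmE, S):
    """Split the surface buoyancy supply into heating and freshwater
    parts (linear equation of state). Q_net: net heat flux into the
    ocean [W/m^2]; PmE: precipitation minus evaporation [m/s];
    S: surface salinity [g/kg]. Positive terms stabilize the layer."""
    thermal = g * alpha * Q_net / (rho0 * cp)
    haline = g * beta * S * PmE
    return thermal, haline

# Small positive net heating but heavy tropical rain (~43 mm/day):
th, ha = buoyancy_terms(Q_net=10.0, PmE=5e-7, S=35.0)
print(ha > th)  # the freshwater term dominates the buoyancy budget
```

When the heating term is this small, the sign and size of the haline term decide whether the layer restratifies or keeps deepening, which is the regime the abstract highlights.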
A mathematical model for turbulent incompressible flows through mixing grids
International Nuclear Information System (INIS)
Allaire, G.
1989-01-01
A mathematical model is proposed for the computation of turbulent incompressible flows through mixing grids. This model is obtained as follows: in a three-dimensional domain we represent a mixing grid by small identical wings of size ε² periodically distributed at the nodes of a plane regular mesh of size ε, and we consider the incompressible Navier-Stokes equations with a no-slip condition on the wings. Using an appropriate homogenization process we pass to the limit as ε tends to zero and obtain a Brinkman equation, i.e. a Navier-Stokes equation plus a zero-order term in the velocity, in a homogeneous domain with no wings remaining. The interest of this model is that the spatial discretization is simpler in a homogeneous domain and, moreover, the new term, which expresses the grid's mixing effect, can be evaluated with a local computation around a single wing.
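To see what the zero-order Brinkman term does, a 1-D finite-difference sketch with illustrative coefficients: solve -nu*u'' + k*u = f with no-slip walls, comparing k = 0 (plain channel flow) against k > 0, where the grid's drag damps the velocity:

```python
def solve_brinkman(nu, k, f, n=101):
    """Solve -nu*u'' + k*u = f on (0,1) with u(0)=u(1)=0,
    second-order finite differences and the Thomas algorithm."""
    h = 1.0 / (n - 1)
    m = n - 2                               # interior unknowns
    a = [-nu / h ** 2] * m                  # sub-diagonal
    b = [2.0 * nu / h ** 2 + k] * m         # diagonal (k is the Brinkman term)
    c = [-nu / h ** 2] * m                  # super-diagonal
    d = [f] * m
    for i in range(1, m):                   # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):          # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]

u_stokes = solve_brinkman(nu=1.0, k=0.0, f=1.0)     # max u = 1/8 (Poiseuille)
u_brink = solve_brinkman(nu=1.0, k=100.0, f=1.0)    # drag flattens the profile
print(max(u_stokes), max(u_brink))
```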
Model Pembelajaran Berbasis Penstimulasian Multiple Intelligences Siswa
Edy Legowo
2017-01-01
This paper discusses the application of multiple intelligences theory to learning in schools. The discussion opens by tracing the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The final section describes the implementation of multiple intelligences theory in classroom practice, namely how student learning experiences are facilitated...
Mixed waste treatment model: Basis and analysis
International Nuclear Information System (INIS)
Palmer, B.A.
1995-09-01
The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated as well as the waste for each matrix treated by the unit
Computer modeling of ORNL storage tank sludge mobilization and mixing
International Nuclear Information System (INIS)
Terrones, G.; Eyler, L.L.
1993-09-01
This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks
A Mixing Based Model for DME Combustion in Diesel Engines
DEFF Research Database (Denmark)
Bek, Bjarne H.; Sorenson, Spencer C.
1998-01-01
A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made…
A mixing based model for DME combustion in diesel engines
DEFF Research Database (Denmark)
Bek, Bjarne Hjort; Sorenson, Spencer C
2001-01-01
A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made…
Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits
DEFF Research Database (Denmark)
Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo
2014-01-01
A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows combining models based on continuous time with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function is assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.
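The piecewise-constant hazard approximation mentioned above is easy to make concrete. Below is a minimal sketch (interval boundaries and rates are invented, not from the paper) of evaluating a survival function under a piece-wise constant hazard:

```python
import numpy as np

def survival_piecewise(t, cuts, lam):
    """S(t) = exp(-cumulative hazard) for a piece-wise constant hazard.

    cuts: increasing interval boundaries [t1, t2, ...] (0 implied at the start);
    lam:  hazard rate on each interval (len(cuts) + 1 values).
    """
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    cum = 0.0
    for j in range(len(lam)):
        lo, hi = edges[j], edges[j + 1]
        if t <= lo:
            break
        cum += lam[j] * (min(t, hi) - lo)  # hazard integrated over the overlap
    return np.exp(-cum)

# With all rates equal, the model collapses to the exponential distribution,
# a convenient sanity check: S(5) = exp(-0.1 * 5)
s = survival_piecewise(5.0, cuts=np.array([2.0, 4.0]), lam=np.array([0.1, 0.1, 0.1]))
```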
Multivariate Survival Mixed Models for Genetic Analysis of Longevity Traits
DEFF Research Database (Denmark)
Pimentel Maia, Rafael; Madsen, Per; Labouriau, Rodrigo
2013-01-01
A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in a context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows combining models based on continuous time with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the hazard function is assumed to be piece-wise constant. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data…
Advective mixing in a nondivergent barotropic hurricane model
Directory of Open Access Journals (Sweden)
B. Rutherford
2010-01-01
This paper studies Lagrangian mixing in a two-dimensional barotropic model for hurricane-like vortices. Since such flows show high shearing in the radial direction, particle separation across shear-lines is diagnosed through a Lagrangian field, referred to as R-field, that measures trajectory separation orthogonal to the Lagrangian velocity. The shear-lines are identified with the level-contours of another Lagrangian field, referred to as S-field, that measures the average shear-strength along a trajectory. Other fields used for model diagnostics are the Lagrangian field of finite-time Lyapunov exponents (FTLE-field), the Eulerian Q-field, and the angular velocity field. Because of the high shearing, the FTLE-field is not a suitable indicator for advective mixing, and in particular does not exhibit ridges marking the location of finite-time stable and unstable manifolds. The FTLE-field is similar in structure to the radial derivative of the angular velocity. In contrast, persisting ridges and valleys can be clearly recognized in the R-field, and their propagation speed indicates that transport across shear-lines is caused by Rossby waves. A radial mixing rate derived from the R-field gives a time-dependent measure of flux across the shear-lines. On the other hand, a measured mixing rate across the shear-lines, which counts trajectory crossings, confirms the results from the R-field mixing rate, and shows high mixing in the eyewall region after the formation of a polygonal eyewall, which continues until the vortex breaks down. The location of the R-field ridges elucidates the role of radial mixing for the interaction and breakdown of the mesovortices shown by the model.
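The abstract's point that plain shear inflates the FTLE can be reproduced in a few lines. The sketch below (my own toy example, not the hurricane model) computes the FTLE of a steady linear shear flow from finite differences of the flow map; the shear produces a strictly positive FTLE even though nearby trajectories separate only linearly, never exponentially:

```python
import numpy as np

def flow_map(p, sigma=1.0, T=2.0):
    """Analytic flow map of the steady shear u = (sigma*y, 0) over time T."""
    x, y = p
    return np.array([x + sigma * y * T, y])

def ftle(p, T=2.0, h=1e-5, sigma=1.0):
    """FTLE via central finite differences of the flow-map Jacobian."""
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (flow_map(p + dp, sigma, T) - flow_map(p - dp, sigma, T)) / (2 * h)
    C = J.T @ J                       # Cauchy-Green strain tensor
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(lam_max) / (2 * T)  # FTLE = (1/T) ln of largest stretch

val = ftle(np.array([0.3, 0.7]))
```

For sigma*T = 2 the analytic answer is ln(3 + 2*sqrt(2))/(2T), which the finite-difference value matches closely.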
The MIDAS Touch: Mixed Data Sampling Regression Models
Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen
2004-01-01
We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking, MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
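A minimal numerical illustration of the MIDAS idea may help. The sketch below uses toy data and fixes the weight parameters; in a full MIDAS regression the weight parameters are estimated jointly with the slope by nonlinear least squares. It collapses K high-frequency lags into one regressor with normalized exponential Almon weights and fits the low-frequency equation by OLS:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Normalized exponential Almon lag weights, a common MIDAS parameterization."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

rng = np.random.default_rng(0)
T, K = 40, 12                        # e.g. 40 quarters, 12 monthly lags each
X = rng.normal(size=(T, K))          # high-frequency lags, most recent first
w = exp_almon_weights(0.2, -0.06, K)
y = 1.5 + 2.0 * (X @ w) + 0.01 * rng.normal(size=T)

# All K lags enter through a single weighted regressor
Z = np.column_stack([np.ones(T), X @ w])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
```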
Directory of Open Access Journals (Sweden)
Pau Baya
2011-05-01
Remenat ("mixed" in Catalan), "revoltillo" ("scrambled" in Spanish), is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. Scrambled refers to the action of mixing the beaten egg with other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform and dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, and adapting its shape to the commands of one's hands or the tool which is being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next consecutively, like a tongue of incandescent lava advancing slowly but with unstoppable inertia.
A brief introduction to regression designs and mixed-effects modelling by a recent convert
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable sele...
Model Pembelajaran Berbasis Penstimulasian Multiple Intelligences Siswa
Directory of Open Access Journals (Sweden)
Edy Legowo
2017-03-01
This article discusses the application of multiple intelligences theory to learning in schools. The discussion begins by outlining the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school instruction. The next section describes the implementation of multiple intelligences theory in classroom practice, namely how learning experiences facilitated by the teacher can stimulate students' multiple intelligences. From the perspective of multiple intelligences theory, the evaluation of student learning outcomes should be carried out using authentic assessment and portfolios, which better allow students to express or actualize their learning results in various ways according to the strengths of their particular intelligences.
A system dynamics model to determine products mix
Directory of Open Access Journals (Sweden)
Mahtab Hajghasem
2014-02-01
This paper presents an implementation of a system dynamics model to determine an appropriate product mix by considering various factors such as labor, materials, overhead, etc. for an Iranian producer of cosmetic and sanitary products. The proposed model considers three hypotheses, involving the relationship between product mix and profitability, optimum production capacity, and holding a minimum amount of inventory to take advantage of low-cost production. The implementation of the system dynamics model in the VENSIM software package confirmed all three hypotheses of the survey and suggested that, in order to reach a better product mix, it is necessary to achieve optimum production planning, take advantage of all available production capacity, and use inventory management techniques.
Multiple Temperature Model for Near Continuum Flows
International Nuclear Information System (INIS)
XU, Kun; Liu, Hongwei; Jiang, Jianzheng
2007-01-01
In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulation, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between the stress and strain is replaced by the temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to the equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning…
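The core of the Laplace approach described above is replacing the intractable integral over the random effects by a Gaussian integral around the mode of the log-integrand. A self-contained sketch for a single random-intercept Poisson cluster (toy data and fixed parameters of my choosing; a real implementation would locate the mode with Newton iterations and analytic derivatives rather than a grid):

```python
import numpy as np
from math import lgamma, log, pi, exp, sqrt

# Toy cluster: y_i ~ Poisson(exp(beta0 + u)), u ~ N(0, sigma^2),
# with beta0 and sigma treated as known for illustration.
y = np.array([2, 3, 1, 4])
beta0, sigma = 0.5, 0.8

def h(u):
    """Log-integrand: Poisson log-likelihood plus Gaussian log-density of u."""
    eta = beta0 + u
    loglik = np.sum(y * eta - np.exp(eta)) - sum(lgamma(k + 1) for k in y)
    return loglik - u ** 2 / (2 * sigma ** 2) - 0.5 * log(2 * pi * sigma ** 2)

# Mode by dense grid search (illustrative only)
grid = np.linspace(-4, 4, 40001)
hv = np.array([h(u) for u in grid])
u_hat = grid[hv.argmax()]

# Laplace: integral ~ exp(h(u_hat)) * sqrt(2*pi / -h''(u_hat))
eps = 1e-4
hpp = (h(u_hat + eps) - 2 * h(u_hat) + h(u_hat - eps)) / eps ** 2
laplace_ml = exp(h(u_hat)) * sqrt(2 * pi / (-hpp))

# Reference value by trapezoidal quadrature on the same grid
f = np.exp(hv)
dx = grid[1] - grid[0]
quad_ml = dx * (f.sum() - 0.5 * (f[0] + f[-1]))
```

The two marginal-likelihood values agree to within a few percent here, which is the sense in which the abstract reports Laplace estimates being close to Monte Carlo ones.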
Study of the brain glucose metabolism in different stage of mixed-type multiple system atrophy
International Nuclear Information System (INIS)
Wang Ying; Zhang Benshu; Cai Li; Zhang Meiyun; Gao Shuo
2014-01-01
Objective: To investigate the brain glucose metabolism in different stages of mixed-type multiple system atrophy (MSA). Methods: Forty-six MSA patients with cerebellar or Parkinsonian symptoms and 18 healthy controls of similar age to the patients were included. According to the disease duration, the patients were divided into three groups: group 1 (≤ 12 months, n=14), group 2 (13-24 months, n=13), group 3 (≥ 25 months, n=19). All patients and controls underwent 18F-FDG PET/CT brain imaging. To compare metabolic distributions between different groups, SPM8 software and two-sample t tests were used for image data analysis. When P<0.005, the result was considered statistically significant. Results: At the level of P<0.005, hypometabolism in group 1 (all t>3.49) was identified in the frontal lobe, lateral temporal lobe, insula, anterior cingulate cortex, caudate nucleus and anterior cerebellar hemisphere. The regions of hypometabolism extended to the posterolateral putamen and part of the posterior cerebellar hemisphere in group 2 (all t>3.21). In group 3, the whole putamen and cerebellar hemisphere were involved (all t>4.08). In addition to the hypometabolism regions, there were also stable hypermetabolism regions, mainly in the parietal lobe, medial temporal lobe and the thalamus, in all patient groups (all t>3.27 in group 1, all t>3.02 in group 2, all t>3.30 in group 3). Conclusions: Disease duration is closely related to FDG metabolism in MSA patients. The frontal lobe, lateral temporal lobe, anterior cingulate cortex and caudate nucleus can be involved at an early stage of the disease. Putaminal hypometabolism begins in its posterolateral part. Cerebellar hypometabolism occurs early in its anterior part. Besides, the thalamus shows hypermetabolism over the whole duration. 18F-FDG metabolic changes of the brain can reflect the development of mixed-type MSA. (authors)
Longitudinal mixed-effects models for latent cognitive function
van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela
2015-01-01
A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Application of mixed models for the assessment genotype and ...
African Journals Online (AJOL)
Application of mixed models for the assessment of genotype and environment interactions in cotton (Gossypium hirsutum) cultivars in Mozambique. ... The cultivars ISA 205, STAM 42 and REMU 40 showed superior productivity when they were selected by the Harmonic Mean of Genotypic Values (HMGV) criterion in relation ...
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Introduction to models of neutrino masses and mixings
International Nuclear Information System (INIS)
Joshipura, Anjan S.
2004-01-01
This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are i) different types of neutrino masses ii) structure of neutrino masses and mixing needed to understand neutrino oscillation results iii) mechanism to generate neutrino masses in gauge theories and iv) discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)
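To make the "structure of neutrino masses and mixing" concrete, here is a small sketch that builds the PMNS lepton mixing matrix in the standard parameterization and checks its unitarity. The angle values are illustrative only, roughly in the range suggested by oscillation fits, and are not taken from this review:

```python
import numpy as np

def pmns(theta12, theta23, theta13, delta=0.0):
    """PMNS matrix in the standard parameterization U = R23 . U13(delta) . R12."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    e = np.exp(-1j * delta)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    U13 = np.array([[c13, 0, s13 * e], [0, 1, 0], [-s13 * np.conj(e), 0, c13]], complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    return R23 @ U13 @ R12

# Illustrative angles (degrees) and CP phase
U = pmns(np.radians(33.4), np.radians(49.0), np.radians(8.6), np.radians(195.0))
```

Each factor is unitary, so the product is unitary by construction, and |U_e3| equals sin(theta13), a quick consistency check on the parameterization.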
The 4s web-marketing mix model
Constantinides, Efthymios
2002-01-01
This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm,
Goodness-of-fit tests in mixed models
Claeskens, Gerda; Hart, Jeffrey D.
2009-01-01
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
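In the simplest case, the mass-balance arithmetic behind such mixing models fits in a few lines. A sketch of the two-source, single-isotope case (toy δ values of my choosing); with n isotope tracers one can uniquely resolve at most n + 1 sources, which is exactly why the "too many sources" problem mentioned above requires special methods:

```python
def source_fractions(d_mix, d_a, d_b):
    """Two-source, one-isotope mass balance:
    d_mix = f*d_a + (1 - f)*d_b  =>  f = (d_mix - d_b) / (d_a - d_b)."""
    f = (d_mix - d_b) / (d_a - d_b)
    return f, 1.0 - f

# e.g. a mixture signature lying between two source signatures
f_a, f_b = source_fractions(d_mix=6.0, d_a=10.0, d_b=4.0)  # f_a = 1/3, f_b = 2/3
```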
Metabolic modeling of mixed substrate uptake for polyhydroxyalkanoate (PHA) production
Jiang, Y.; Hebly, M.; Kleerebezem, R.; Muyzer, G.; van Loosdrecht, M.C.M.
2011-01-01
Polyhydroxyalkanoate (PHA) production by mixed microbial communities can be established in a two-stage process, consisting of a microbial enrichment step and a PHA accumulation step. In this study, a mathematical model was constructed for evaluating the influence of the carbon substrate composition
ePRISM: A case study in multiple proxy and mixed temporal resolution integration
Robinson, Marci M.; Dowsett, Harry J.
2010-01-01
As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (∼3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records of multiple paleotemperature proxies.
Fluctuations in a mixed IS-LM business cycle model
Directory of Open Access Journals (Sweden)
Hamad Talibi Alaoui
2008-09-01
In the present paper, we extend a delayed IS-LM business cycle model by introducing an additional advance (anticipated capital stock) in the investment function. The resulting model is represented in terms of mixed differential equations. With the deviating argument τ (advance and delay) as a bifurcation parameter, we investigate the local stability and the local Hopf bifurcation. Some numerical simulations are also given to support the theoretical analysis.
Configuration mixing in the sdg interacting boson model
International Nuclear Information System (INIS)
Bouldjedri, A; Van Isacker, P; Zerguine, S
2005-01-01
A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit
Configuration mixing in the sdg interacting boson model
Energy Technology Data Exchange (ETDEWEB)
Bouldjedri, A [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria); Van Isacker, P [GANIL, BP 55027, F-14076 Caen cedex 5 (France); Zerguine, S [Department of Physics, Faculty of Science, University of Batna, Avenue Boukhelouf M El Hadi, 05000 Batna (Algeria)
2005-11-01
A wavefunction analysis of the strong-coupling limits of the sdg interacting boson model is presented. The analysis is carried out for two-boson states and allows us to characterize the boson configuration mixing in the different limits. Based on these results and those of a shell-model analysis of the sdg IBM, qualitative conclusions are drawn about the range of applicability of each limit.
Ill-posedness in modeling mixed sediment river morphodynamics
Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid
2018-04-01
In this paper we analyze the ill-posedness of the Hirano active layer model used in mixed sediment river morphodynamics. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than was found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than for the active layer model.
Dynamic behaviours of mix-game model and its application
Institute of Scientific and Technical Information of China (English)
Gou Cheng-Ling
2006-01-01
In this paper a minority game (MG) is modified by adding to it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change considerably when some agents who play the majority game are added to the MG, and that the change in local volatilities depends greatly on the different combinations of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analyzed. An example application of the mix-game model is also given.
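A minimal simulation conveys the flavor of the mix-game. The sketch below is my own toy version with invented parameters; the paper's key feature of different memory lengths for the two groups is omitted here for brevity. Each agent holds random lookup-table strategies over the recent outcome history; minority players reward strategies that would have placed them in the minority, majority players reward the opposite:

```python
import numpy as np

def mix_game(n_minority=51, n_majority=20, m=3, s=2, steps=300, seed=1):
    """Toy mix-game: minority and majority players share one market."""
    rng = np.random.default_rng(seed)
    n = n_minority + n_majority
    strat = rng.integers(0, 2, size=(n, s, 2 ** m))  # action in {0, 1}
    scores = np.zeros((n, s))
    hist = int(rng.integers(0, 2 ** m))              # encoded recent outcomes
    attendance = []
    for _ in range(steps):
        best = scores.argmax(axis=1)                 # each agent uses its best strategy
        acts = strat[np.arange(n), best, hist]
        a1 = int(acts.sum())                         # number choosing side 1
        minority_side = 1 if a1 < n - a1 else 0
        # virtual payoff for every strategy, rewarded per agent type
        win = (strat[:, :, hist] == minority_side).astype(float)
        scores[:n_minority] += win[:n_minority]          # minority players
        scores[n_minority:] += 1.0 - win[n_minority:]    # majority players
        attendance.append(a1)
        hist = ((hist << 1) | minority_side) % (2 ** m)  # keep last m outcomes
    return np.array(attendance)

att = mix_game()
```

The local volatility studied in the paper would then be a windowed variance of this attendance series.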
Mixing Time, Inversion and Multiple Emulsion Formation in a Limonene and Water Pickering Emulsion
Directory of Open Access Journals (Sweden)
Laura Sawiak
2018-05-01
It has previously been demonstrated that particle-stabilized emulsions comprised of limonene, water and fumed silica particles exhibit complex emulsification behavior as a function of composition and the duration of the emulsification step. Most notably, the system can invert from being oil-continuous to being water-continuous under prolonged mixing. Here we investigate this phenomenon experimentally for the regime where water is the majority liquid. We prepare samples using a range of different emulsification times and examine the final properties in bulk and via confocal microscopy. We use the images to quantitatively track the sizes of droplets and clusters of particles. We find that a dense emulsion of water droplets forms initially and is transformed, in time, into a water-in-oil-in-water multiple emulsion, with concomitant changes in droplet and cluster sizes. In parallel, we carry out rheological studies of water-in-limonene emulsions using different concentrations of fumed silica particles. We unite our observations to propose a mechanism for inversion based on the changes in flow properties and the availability of particles during emulsification.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
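The reparameterization idea, explicit continuity constraints replaced by an implicitly constrained basis, can be illustrated with the truncated power basis, in which each (x - k)_+^p term is automatically continuous with p - 1 continuous derivatives at its knot. A toy sketch (data, knot placement, and degree are my own; the paper's method additionally lets the polynomial order vary between segments and places the basis in a mixed model):

```python
import numpy as np

def tp_basis(x, knots, degree=2):
    """Truncated power basis: global polynomial of `degree` plus one
    (x - k)_+^degree column per knot; smoothness at the knots is built in."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
# True curve: linear below the knot, quadratic departure above it
y_true = 0.5 * x + 0.2 * np.clip(x - 4, 0, None) ** 2
y = y_true + rng.normal(0, 0.1, x.size)

X = tp_basis(x, knots=[4.0], degree=2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fixed-effects-only fit for brevity
fit = X @ coef
```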
Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models
DEFF Research Database (Denmark)
Rombouts, Jeroen V.K.; Stentoft, Lars Peter
While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher-order moments of the risk-neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk-neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared…
Production, decay, and mixing models of the iota meson. II
International Nuclear Information System (INIS)
Palmer, W.F.; Pinsky, S.S.
1987-01-01
A five-channel mixing model for the ground and radially excited isoscalar pseudoscalar states and a glueball is presented. The model extends previous work by including two-body unitary corrections, following the technique of Toernqvist. The unitary corrections include contributions from three classes of two-body intermediate states: pseudoscalar-vector, pseudoscalar-scalar, and vector-vector states. All necessary three-body couplings are extracted from decay data. The solution of the mixing model provides information about the bare mass of the glueball and the fundamental quark-glue coupling. The solution also gives the composition of the wave function of the physical states in terms of the bare quark and glue states. Finally, it is shown how the coupling constants extracted from decay data can be used to calculate the decay rates of the five physical states to all two-body channels
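The algebra at the heart of such mixing models, diagonalizing a mass matrix to express physical states in terms of bare quark and glue states, reduces in the two-state case to a symmetric eigenvalue problem. A toy sketch (the masses and coupling are illustrative numbers of my choosing, not the paper's five-channel fit):

```python
import numpy as np

# Illustrative 2x2 mass matrix: a bare quarkonium state and a bare glueball
# coupled by an off-diagonal quark-glue coupling g (GeV, toy values)
m_qq, m_gg, g = 0.958, 1.400, 0.20
M = np.array([[m_qq, g], [g, m_gg]])

masses, V = np.linalg.eigh(M)   # physical masses and state composition
# Columns of V give each physical state's content in the bare basis;
# the 2x2 case is fully described by one mixing angle.
mixing_angle = np.arctan2(V[1, 0], V[0, 0])
```

The nonzero coupling pushes the physical masses apart (level repulsion): the light eigenvalue lies below the bare quarkonium mass and the heavy one above the bare glueball mass.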
Linear mixing model applied to AVHRR LAC data
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.93 micron channel was extracted and used with the two reflective channels, 0.58-0.68 microns and 0.725-1.1 microns, to run a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
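The constrained least squares step can be sketched in a few lines. Below is a toy version with a made-up three-band end-member matrix (not the actual AVHRR channel responses) that enforces the fractions summing to one via the KKT system of the equality-constrained problem; a full implementation would additionally enforce non-negativity, e.g. with NNLS or an active-set method:

```python
import numpy as np

def unmix(pixel, E):
    """Fractions f minimizing ||E f - pixel|| subject to sum(f) = 1,
    solved via the KKT linear system of the equality-constrained LSQ."""
    n = E.shape[1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = E.T @ E
    K[:n, n] = 1.0      # constraint gradient
    K[n, :n] = 1.0      # sum-to-one row
    rhs = np.concatenate([E.T @ pixel, [1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]      # last entry is the Lagrange multiplier

# Columns: toy per-band reflectances for vegetation, soil, shade
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02]])
truth = np.array([0.6, 0.3, 0.1])
pixel = E @ truth       # synthetic mixed pixel
f = unmix(pixel, E)
```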
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
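The difference between the two mixing rules is easy to demonstrate with a simple equation of state. In the sketch below (van der Waals constants from standard tables; conditions chosen by me, above the SF6 critical temperature so the isotherms are monotone; this is my own illustration, not the CTH setup), Dalton adds partial pressures at the mixture volume while Amagat adds partial volumes at a common pressure:

```python
import numpy as np

R = 8.314  # J/(mol K)
# van der Waals constants (a [Pa m^6/mol^2], b [m^3/mol])
HE = (0.00346, 2.38e-5)
SF6 = (0.786, 8.79e-5)

def p_vdw(n, V, T, ab):
    a, b = ab
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

def v_vdw(n, P, T, ab, hi=1.0):
    """Volume of a pure vdW gas at (n, P, T) by bisection (monotone above Tc)."""
    lo = n * ab[1] * (1 + 1e-9)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_vdw(n, mid, T, ab) > P:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_he = n_sf6 = 1.0          # 1:1 molar mixture
V, T = 0.01, 350.0          # m^3, K (T above the SF6 critical temperature)

# Dalton: each component fills the full volume; pressures add
p_dalton = p_vdw(n_he, V, T, HE) + p_vdw(n_sf6, V, T, SF6)

# Amagat: find the P at which the pure-component volumes add up to V
lo, hi = 1.0, 1e7
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if v_vdw(n_he, mid, T, HE) + v_vdw(n_sf6, mid, T, SF6) > V:
        lo = mid
    else:
        hi = mid
p_amagat = 0.5 * (lo + hi)
```

For ideal gases the two rules coincide; the van der Waals terms are what separates them, and the gap widens with density, consistent with the abstract's observation that the differences grow at higher shock speeds.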
Comparison of mixed layer models predictions with experimental data
Energy Technology Data Exchange (ETDEWEB)
Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)
1997-10-01
The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, has been investigated during the period 22-28 June 1993 from both the experimental and the modeling points of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes to estimate mixing layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed considering radio-sounding meteorological profiles and atmospheric stability classification criteria. Besides, the mixed layer depth predictions were compared with the estimates obtained from a simple box model, whose input requires hourly measurements of air concentrations and ground flux of 222Rn. (LN)
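Of the three schemes compared above, Holzworth's is simple enough to sketch directly: the mixing height is where the dry adiabat through the surface temperature intersects the morning sounding. A toy version in terms of potential temperature (the profile values are invented):

```python
import numpy as np

# Morning potential-temperature sounding (K) at heights z (m): toy profile
z = np.array([0, 200, 400, 600, 800, 1000, 1500, 2000], float)
theta = np.array([295, 295.5, 296, 297, 298, 299, 302, 305], float)

def holzworth_mixing_height(surface_theta):
    """Holzworth estimate: follow the dry adiabat (constant theta) from the
    surface up to where it meets the morning sounding."""
    above = np.where(theta > surface_theta)[0]
    if len(above) == 0:
        return z[-1]            # parcel warmer than the whole sounding
    i = above[0]
    if i == 0:
        return 0.0              # parcel colder than the surface layer
    # linear interpolation between levels i-1 and i
    frac = (surface_theta - theta[i - 1]) / (theta[i] - theta[i - 1])
    return z[i - 1] + frac * (z[i] - z[i - 1])

h = holzworth_mixing_height(298.0)   # afternoon surface theta of 298 K -> 800 m
```

A warmer afternoon surface pushes the intersection, and hence the estimated mixing height, upward.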
Modelling the development of mixing height in near equatorial region
Energy Technology Data Exchange (ETDEWEB)
Samah, A.A. [Univ. of Malaya, Air Pollution Research Unit, Kuala Lumpur (Malaysia)
1997-10-01
Most current air pollution models were developed for mid-latitude conditions, and as such many of the empirical parameters used are based on observations taken in the mid-latitude boundary layer, which is physically different from the equatorial boundary layer. In the equatorial boundary layer the Coriolis parameter f is small or zero, and moisture plays a more important role in the control of stability and the surface energy balance. Therefore air pollution models such as OMLMULTI or ADMS, which were basically developed for mid-latitude conditions, must be applied with some caution and would need some adaptation to properly simulate the properties of the equatorial boundary layer. This work elucidates some of the problems of modelling the evolution of mixing height in the equatorial region. The mixing height estimates were compared with routine observations taken during severe air pollution episodes in Malaysia. (au)
DEFF Research Database (Denmark)
Birkedal, Dan; Lyssenko, Vadim; Pantke, Karl-Heinz
1995-01-01
The interface roughness on a nanometer scale plays a decisive role in dephasing of excitons in GaAs multiple quantum wells. The excitonic four-wave mixing signal shows a free polarization decay and a corresponding homogeneously broadened line from areas with interface roughness on a scale larger...
Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.
2015-05-01
The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
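The core of such a BMC mixing model can be sketched as a rejection sampler: propose source fractions from a prior, propagate end-member uncertainty, and keep proposals whose predicted mixture matches the observed isotopic values. The two-source setup, end-member signatures, and tolerances below are invented for illustration and are not the published model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-source system: "snow" vs. "ice" end-members, each
# with a (mean, s.d.) for delta-18O and delta-D. Values are invented.
sources = {
    "snow": {"d18O": (-20.0, 0.5), "dD": (-150.0, 3.0)},
    "ice":  {"d18O": (-10.0, 0.5), "dD": (-70.0, 3.0)},
}
observed = {"d18O": -13.0, "dD": -94.0}   # bulk meltwater sample
tol = {"d18O": 0.5, "dD": 3.0}            # acceptance tolerance per isotope

accepted = []
for _ in range(50_000):
    f = rng.uniform()                     # prior on the "snow" fraction
    ok = True
    for iso in ("d18O", "dD"):
        # draw end-member values from their uncertainty distributions
        a = rng.normal(*sources["snow"][iso])
        b = rng.normal(*sources["ice"][iso])
        mix = f * a + (1 - f) * b
        if abs(mix - observed[iso]) > tol[iso]:
            ok = False
            break
    if ok:
        accepted.append(f)

accepted = np.array(accepted)
f_est, f_sd = accepted.mean(), accepted.std()
```

The accepted fractions approximate the posterior of the snow contribution; the spread `f_sd` quantifies the error induced by end-member variability, which is the advantage the abstract highlights.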
A marketing mix model for a complex and turbulent environment
Directory of Open Access Journals (Sweden)
R. B. Mason
2007-12-01
Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company's external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of the literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests that destabilising marketing activities are more effective, whereas stabilising activities are more effective in simple, stable environments. The model therefore proposes predominantly destabilising tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper benefits marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers, and suggestions for research to develop and apply it, are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-01-01
of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous
Wax Precipitation Modeled with Many Mixed Solid Phases
DEFF Research Database (Denmark)
Heidemann, Robert A.; Madsen, Jesper; Stenby, Erling Halfdan
2005-01-01
The behavior of the Coutinho UNIQUAC model for solid wax phases has been examined. The model can produce as many mixed solid phases as the number of waxy components. In binary mixtures, the solid rich in the lighter component contains little of the heavier component but the second phase shows sub......-temperature and low-temperature forms, are pure. Model calculations compare well with the data of Pauly et al. for C18 to C30 waxes precipitating from n-decane solutions. (C) 2004 American Institute of Chemical Engineers....
Analysis of a PDF model in a mixing layer case
International Nuclear Information System (INIS)
Minier, J.P.; Pozorski, J.
1996-04-01
A recent turbulence model put forward by Pope (1991) in the context of PDF modeling has been applied to a mixing layer case. This model solves the one-point joint velocity-dissipation PDF equation by simulating the instantaneous behaviour of a large number of Lagrangian fluid particles. Closure of the evolution equations of these Lagrangian particles is based on stochastic diffusion processes. The paper reports numerical results and analyses the physical meaning of some variables, in particular the dissipation-weighted kinetic energy and its relation to external intermittency. (authors). 14 refs., 7 figs
Production, decay, and mixing models of the iota meson
International Nuclear Information System (INIS)
Palmer, W.F.; Pinsky, S.S.; Bender, C.
1984-01-01
We solve a five-channel mixing problem involving eta, eta', zeta(1275), iota(1440), and a new hypothetical high-mass pseudoscalar state between 1600 and 1900 MeV. We obtain the quark and glue content of iota(1440). We compare two solutions to the mixing problem with iota(1440) production and decay data, and with quark-model predictions for bare masses. In one solution the iota(1440) is primarily a glueball. This solution is preferred by the production and decay data. In the other solution the iota(1440) is a radially excited (ss-bar) state. This solution is preferred by the quark-model picture for the bare masses. We judge the weight of the combined evidence to favor the glueball interpretation
Fuel Mix Impacts from Transportation Fuel Carbon Intensity Standards in Multiple Jurisdictions
Witcover, J.
2017-12-01
Fuel carbon intensity standards have emerged as an important policy in jurisdictions looking to target transportation greenhouse gas (GHG) emissions for reduction. A carbon intensity standard rates transportation fuels based on analysis of lifecycle GHG emissions, and uses a system of deficits and tradable, bankable credits to reward increased use of fuels with lower carbon intensity ratings while disincentivizing use of fuels with higher carbon intensity ratings such as conventional fossil fuels. Jurisdictions with carbon intensity standards now in effect include California, Oregon, and British Columbia, all requiring 10% reductions in carbon intensity of the transport fuel pool over a 10-year period. The states and province have committed to grow demand for low carbon fuels in the region as part of collaboration on climate change policies. Canada is developing a carbon intensity standard with broader coverage, for fuels used in transport, industry, and buildings. This study shows a changing fuel mix in affected jurisdictions under the policy in terms of shifting contribution of transportation energy from alternative fuels and trends in shares of particular fuel pathways. It contrasts program designs across the jurisdictions with the policy, highlights the opportunities and challenges these pose for the alternative fuel market, and discusses the impact of having multiple policies alongside federal renewable fuel standards and sometimes local carbon pricing regimes. The results show how the market has responded thus far to a policy that incentivizes carbon saving anywhere along the supply chain at lowest cost, in ways that diverged from a priori policy expectations. Lessons for the policies moving forward are discussed.
Mildly mixed coupled models vs. WMAP7 data
International Nuclear Information System (INIS)
La Vacca, Giuseppe; Bonometto, Silvio A.
2011-01-01
Mildly mixed coupled models include massive neutrinos and CDM-DE coupling. We present new tests of their likelihood against recent data, including WMAP7, confirming that it exceeds that of ΛCDM, although only at the ∼2σ level. We then show the impact on the physics of the dark components of a neutrino-mass detection in 3H β-decay or 0νββ-decay experiments.
Estimation and Inference for Very Large Linear Mixed Effects Models
Gao, K.; Owen, A. B.
2016-01-01
Linear mixed models with large imbalanced crossed random-effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...
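To see why specialized estimators help at this scale, note that in the balanced two-way crossed special case the variance components have closed-form ANOVA (method-of-moments) estimators that cost O(N). The sketch below demonstrates this on simulated data; it is an illustration of the crossed-effects structure, not the estimator proposed in the paper, which targets the imbalanced case.

```python
import numpy as np

rng = np.random.default_rng(1)
R, C = 200, 150                    # two crossed factors (e.g. users x items)
sa, sb, se = 1.0, 0.5, 2.0         # true random-effect and noise s.d.'s
a = rng.normal(0, sa, R)[:, None]  # row random effects
b = rng.normal(0, sb, C)[None, :]  # column random effects
y = 3.0 + a + b + rng.normal(0, se, (R, C))

# O(N) moment (ANOVA) estimators for the balanced crossed design
# y_ij = mu + a_i + b_j + e_ij -- no iterative likelihood needed.
gm = y.mean()
row = y.mean(axis=1)
col = y.mean(axis=0)
msa = C * np.sum((row - gm) ** 2) / (R - 1)   # E[msa] = se^2 + C*sa^2
msb = R * np.sum((col - gm) ** 2) / (C - 1)   # E[msb] = se^2 + R*sb^2
mse = np.sum((y - row[:, None] - col[None, :] + gm) ** 2) / ((R - 1) * (C - 1))
sa2_hat = (msa - mse) / C          # estimate of sa^2
sb2_hat = (msb - mse) / R          # estimate of sb^2
```

With imbalance (a many-to-many incidence pattern instead of a full grid), these closed forms no longer apply, which is the regime the paper addresses.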
GUT and flavor models for neutrino masses and mixing
Meloni, Davide
2017-10-01
In recent years, experiments have established the existence of neutrino oscillations, and most of the oscillation parameters have been measured with good accuracy. However, in spite of many interesting ideas, little real light has been shed on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.
The 4s web-marketing mix model
Constantinides, Efthymios
2002-01-01
This paper reviews the criticism of the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections to using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing, and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any st...
Testing for Nonuniform Differential Item Functioning with Multiple Indicator Multiple Cause Models
Woods, Carol M.; Grimm, Kevin J.
2011-01-01
In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
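The hidden-Markov backbone of such models can be illustrated with the standard scaled forward pass for a discrete-emission HMM. The two-state "smoking/abstinent" parameters below are toy values; the paper's model additionally couples multiple longitudinal responses through correlated random effects, which this sketch omits.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete-emission HMM via the scaled forward pass.

    pi:  (K,) initial state probabilities
    A:   (K, K) transition matrix, rows sum to 1
    B:   (K, M) emission matrix, rows sum to 1
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                  # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# toy 2-state latent process with a binary weekly smoking indicator
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.1, 0.9]])
ll = forward_loglik(pi, A, B, [0, 0, 1, 1, 0])
```

In a Bayesian MCMC treatment like the one described, this forward likelihood is evaluated inside the sampler for each proposed set of transition, emission, and random-effect parameters.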
Study on system dynamics of evolutionary mix-game models
Gou, Chengling; Guo, Xiaoqian; Chen, Fang
2008-11-01
The mix-game model is adapted from the agent-based minority game (MG) model and is used to simulate real financial markets. Unlike in the MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. These two groups of agents have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if an agent's winning rate falls below a threshold, it copies the best strategies held by another agent, and agents repeat such evolution at certain time intervals. Through simulations this paper finds that: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the thresholds of Group 1 increase; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the thresholds of Group 2 increase; (3) the thresholds of Group 2 have a greater impact on system dynamics than those of Group 1; (4) the characteristics of the system dynamics under different time intervals of strategy change are qualitatively similar but differ quantitatively; and (5) as the time interval of strategy change increases from 1 to 20, the system becomes more and more stable and the performance of agents in both groups also improves.
Stochastic scalar mixing models accounting for turbulent frequency multiscale fluctuations
International Nuclear Information System (INIS)
Soulard, Olivier; Sabel'nikov, Vladimir; Gorokhovski, Michael
2004-01-01
Two new scalar micromixing models accounting for a turbulent frequency scale distribution are investigated. These models were derived by Sabel'nikov and Gorokhovski [Second International Symposium on Turbulence and Shear Flow Phenomena, Royal Institute of Technology (KTH), Stockholm, Sweden, June 27-29, 2001] using a multiscale extension of the classical interaction by exchange with the mean (IEM) and Langevin models. They are called, respectively, the extended IEM (EIEM) and extended Langevin (ELM) models. The EIEM and ELM models are tested against DNS results for the decay of a homogeneous scalar field in homogeneous turbulence. This comparison leads to a reformulation of the law governing the mixing frequency distribution. Finally, the asymptotic behaviour of the modeled PDF is discussed
Multiplicity distributions in the dual parton model
International Nuclear Information System (INIS)
Batunin, A.V.; Tolstenkov, A.N.
1985-01-01
Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account
Evolving Four Part Harmony Using a Multiple Worlds Model
DEFF Research Database (Denmark)
Scirea, Marco; Brown, Joseph Alexander
2015-01-01
This application of the Multiple Worlds Model examines a collaborative fitness model for generating four part harmonies. In this model we have multiple populations and the fitness of the individuals is based on the ability of a member from each population to work with the members of other...
Explaining clinical behaviors using multiple theoretical models
Directory of Open Access Journals (Sweden)
Eccles Martin P
2012-10-01
the five surveys. For the predictor variables, the mean construct scores were above the mid-point on the scale, with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. Conclusions: We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors; better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.
Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change
Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.
2017-12-01
Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depend on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise, and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities, and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at
A Linear Mixed-Effects Model of Wireless Spectrum Occupancy
Directory of Open Access Journals (Sweden)
Pagadarai Srikanth
2010-01-01
We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within each city as well as with other cities. By applying mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON-time duration of the licensed signal transmission as functions of several predictor variables.
Normal and Special Models of Neutrino Masses and Mixings
Altarelli, Guido
2005-01-01
One can make a distinction between "normal" and "special" models. For normal models $\\theta_{23}$ is not too close to maximal and $\\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\\sqrt{r}$, with $r=\\Delta m_{sol}^2/\\Delta m_{atm}^2 \\sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\\theta_{13}$ and/or of $\\theta_{23}- \\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.
Mixed Portmanteau Test for Diagnostic Checking of Time Series Models
Directory of Open Access Journals (Sweden)
Sohail Chand
2014-01-01
Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several such tests are suggested in the literature. These tests use the autocorrelation or partial autocorrelation of the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure not yet explained by the fitted model. In this paper, we suggest mixed portmanteau tests based on both the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
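The two ingredients such a mixed test combines can be sketched as follows: the Ljung-Box statistic built from residual autocorrelations, and its analogue built from partial autocorrelations obtained via the Durbin-Levinson recursion. The paper's exact mixture rule and its asymptotic distribution may differ from this sketch.

```python
import numpy as np

def acf(x, m):
    """Sample autocorrelations r_1..r_m."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, m + 1)])

def pacf_from_acf(r):
    """Partial autocorrelations from the ACF via Durbin-Levinson."""
    m = len(r)
    phi = np.zeros((m, m))
    phi[0, 0] = r[0]
    for k in range(1, m):
        num = r[k] - np.dot(phi[k - 1, :k], r[:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, :k], r[:k])
        phi[k, k] = num / den
        phi[k, :k] = phi[k - 1, :k] - phi[k, k] * phi[k - 1, :k][::-1]
    return np.diag(phi)

def portmanteau(x, m):
    """Ljung-Box Q (ACF-based) and its PACF-based analogue.

    Both are asymptotically chi-square with m - p - q degrees of
    freedom when applied to ARMA(p, q) residuals; a mixed test
    combines the two statistics.
    """
    n = len(x)
    r = acf(x, m)
    p = pacf_from_acf(r)
    w = n * (n + 2) / (n - np.arange(1, m + 1))
    return np.sum(w * r**2), np.sum(w * p**2)

rng = np.random.default_rng(0)
resid = rng.normal(size=500)          # "residuals" from a well-fitted model
q_acf, q_pacf = portmanteau(resid, m=10)
```

For adequate-model residuals both statistics should be moderate; unexplained serial dependence inflates them.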
CP violation and flavour mixing in the standard model
International Nuclear Information System (INIS)
Ali, A.; London, D.
1995-08-01
We review and update the constraints on the parameters of the quark flavour mixing matrix V_CKM in the standard model and estimate the resulting CP asymmetries in B decays, taking into account recent experimental and theoretical developments. In performing our fits, we use inputs from measurements of the following quantities: (i) |ε|, the CP-violating parameter in K decays; (ii) ΔM_d, the mass difference due to B^0_d - anti-B^0_d mixing; (iii) the matrix elements |V_cb| and |V_ub|; (iv) B-hadron lifetimes; and (v) the top quark mass. The experimental input in points (ii)-(v) has improved compared to our previous fits. With the updated CKM matrix we present the currently allowed ranges of the ratios |V_td/V_ts| and |V_td/V_ub|, as well as the standard model predictions for the B^0_s - anti-B^0_s mixing parameter x_s (or, equivalently, ΔM_s) and the quantities sin 2α, sin 2β, and sin²γ, which characterize the CP asymmetries in B decays. Various theoretical issues related to the so-called ''penguin pollution'', which are of importance for the determination of the phases α and γ from the CP asymmetries in B decays, are also discussed. (orig.)
Criticality in the configuration-mixed interacting boson model: (1) U(5)-Q(χ)Q(χ) mixing
International Nuclear Information System (INIS)
Hellemans, V.; Van Isacker, P.; De Baerdemacker, S.; Heyde, K.
2007-01-01
The case of U(5)-Q(χ)Q(χ) mixing in the configuration-mixed interacting boson model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively
Integration of multiple, excess, backup, and expected covering models
M S Daskin; K Hogan; C ReVelle
1988-01-01
The concepts of multiple, excess, backup, and expected coverage are defined. Model formulations using these constructs are reviewed and contrasted to illustrate the relationships between them. Several new formulations are presented as is a new derivation of the expected covering model which indicates more clearly the relationship of the model to other multi-state covering models. An expected covering model with multiple time standards is also presented.
Modelling of far-field mixing of ambient ...
African Journals Online (AJOL)
This study sought to describe the dynamics of advective and dispersive transport ... focused on environmental policy designs targeted at ... consequences such as welfare loss of outright ban on polluting ... optimal DO level ... carried out a similar study to model the shadow price ... As A varies, we have a family of curves depicted in.
Forecasting Costa Rican Quarterly Growth with Mixed-frequency Models
Directory of Open Access Journals (Sweden)
Adolfo Rodríguez Vargas
2014-11-01
We assess the utility of mixed-frequency models for forecasting the quarterly growth rate of Costa Rican real GDP: we estimate bridge and MIDAS models with several lag lengths using information from the IMAE and compute forecasts (horizons of 0-4 quarters), which are compared among themselves, with those of ARIMA models, and with those resulting from forecast combinations. Combining the most accurate forecasts is most useful when forecasting in real time, whereas MIDAS forecasts perform best overall: as the forecasting horizon increases, their precision is affected relatively little; their success rates in predicting the direction of changes in the growth rate are stable; and several forecasts remain unbiased. In particular, forecasts computed from simple MIDAS with 9 and 12 lags are unbiased at all horizons and information sets assessed, and show the highest number of significant differences in forecasting ability in comparison with all other models.
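The distinguishing feature of a MIDAS regression is a parsimonious weighting of high-frequency lags, commonly via exponential Almon weights. The sketch below shows how a monthly indicator is collapsed into a single quarterly regressor; the parameter values are illustrative, not those estimated for the IMAE.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Normalized exponential Almon lag weights used in MIDAS regressions."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

def midas_aggregate(monthly, theta1, theta2, K):
    """Collapse the last K high-frequency observations into one regressor.

    monthly: high-frequency indicator (e.g. a monthly activity index),
    most recent observation last. Returns the weighted sum that enters
    a low-frequency regression y_t = b0 + b1 * x_t + e_t.
    """
    w = exp_almon_weights(theta1, theta2, K)
    x = np.asarray(monthly[-K:])[::-1]   # lag 1 = most recent month
    return np.dot(w, x)

w = exp_almon_weights(0.1, -0.05, 12)
```

Because only the two theta parameters are estimated regardless of the number of lags, MIDAS avoids the parameter proliferation that an unrestricted distributed-lag regression would face.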
A local mixing model for deuterium replacement in solids
International Nuclear Information System (INIS)
Doyle, B.L.; Brice, D.K.; Wampler, W.R.
1980-01-01
A new model for hydrogen isotope exchange by ion implantation has been developed. The basic difference between the present approach and previous work is that the depth distribution of the implanted species is included. The outstanding feature of this local mixing model is that the only adjustable parameter is the saturation hydrogen concentration, which is specific to the target material and dependent only on temperature. The model is shown to give excellent agreement both with new data on H/D exchange in the low-Z coating materials VB2, TiC, TiB2, and B reported here, and with previously reported data on stainless steel. The saturation hydrogen concentrations used to fit these data were 0.15, 0.25, 0.15, 0.45, and 1.00 times the atomic density, respectively. This model should be useful in predicting the recycling behavior of hydrogen isotopes in tokamak limiter and wall materials. (author)
Negative binomial mixed models for analyzing microbiome count data.
Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun
2017-01-03
Recent advances in next-generation sequencing (NGS) technology enable researchers to collect large volumes of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion, and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting associations between the microbiome and host environmental/clinical factors in correlated microbiome count data. Although they do not explicitly handle zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (iterative weighted least squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.
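The IWLS step for a negative binomial GLM with a log link and known dispersion can be sketched as below. This is a fixed-effects simplification for illustration only: the NBMMs in the article add random effects and estimate the dispersion, which this sketch does not.

```python
import numpy as np

def nb_iwls(X, y, alpha, n_iter=50):
    """Fixed-effects NB2 regression with a log link, fitted by IWLS.

    Assumes a known dispersion alpha with Var[y] = mu + alpha * mu^2.
    A simplified sketch of the IWLS step that NBMM-style estimation
    builds on (no random effects, no dispersion estimation).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        w = mu / (1.0 + alpha * mu)        # Fisher-scoring weights, NB2 log link
        z = eta + (y - mu) / mu            # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# simulate over-dispersed counts (gamma-mixed Poisson = NB2) and refit
rng = np.random.default_rng(7)
n, alpha = 2000, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.6])
mu = np.exp(X @ beta_true)
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu)  # E=mu, Var=alpha*mu^2
y = rng.poisson(lam)
beta_hat = nb_iwls(X, y, alpha)
```

Each IWLS pass is a weighted least-squares solve, which is what lets the authors reuse the standard linear mixed model machinery once random effects enter the working response.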
Subgrid models for mass and thermal diffusion in turbulent mixing
Energy Technology Data Exchange (ETDEWEB)
Sharp, David H [Los Alamos National Laboratory]; Lim, Hyunkyung [Stony Brook University]; Li, Xiao-Lin [Stony Brook University]; Glimm, James G [Stony Brook University]
2008-01-01
We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of front tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without
Mixing Modeling Analysis For SRS Salt Waste Disposition
International Nuclear Information System (INIS)
Lee, S.
2011-01-01
Nuclear waste in Savannah River Site (SRS) waste tanks consists of three different waste forms: the lighter salt solutions referred to as supernate, the precipitated salts as salt cake, and heavier fine solids as sludge. The sludge is settled on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which is only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from a blend tank such as Tank 50H to the SWPF feed tank. The work consists of two principal objectives investigating two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion on solids entrainment during transfer operation. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The modeling results will provide quantitative design and operation information during the mixing/blending process and the transfer operation of the blended
Microfluidic channel structures speed up mixing of multiple emulsions by a factor of ten
CSIR Research Space (South Africa)
Land, KJ
2014-09-01
mixing, to overcome these limitations. Many methods to achieve this have been proposed and studied, and a number of reviews give a complete overview of the methods and the main issues associated with mixing at the microscale.1-3 Active mixing, which... for a limited range of flow parameters or require active control, for example, via electrical impulses. In order to determine whether any of these well-established methods would be viable, various experiments were performed, specifically studying...
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2014-01-01
Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...
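The book walks through fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. As a Python analog of the same basic workflow — not a procedure from the book — a random-intercept model can be fit with statsmodels; the clustered data here are simulated for illustration:

```python
# Sketch: a random-intercept linear mixed model in Python (statsmodels),
# analogous to the software walk-throughs the book gives for SAS/SPSS/R etc.
# Data are simulated: 30 clusters, 10 observations each.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
groups = np.repeat(np.arange(30), 10)
u = rng.normal(0, 2.0, 30)[groups]      # cluster-level random intercepts
x = rng.normal(size=300)
y = 1.0 + 0.5 * x + u + rng.normal(size=300)

df = pd.DataFrame({"y": y, "x": x, "g": groups})
fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
print(fit.params["x"])                  # fixed-effect slope, near 0.5
```

The `groups` argument plays the role of the `RANDOM`/`(1 | g)` specifications in the packages the book covers.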
Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai
2018-05-01
A self-mixing interferometer (SMI) with a resolution twenty times higher than that of a conventional interferometer is developed using multiple reflections. By employing only a simple external reflecting mirror, the multiple-pass optical configuration can be constructed. The advantage of the configuration is that it is simple and makes it easy to re-inject the light back into the laser cavity. Theoretical analysis shows that the measurement resolution is scalable by adjusting the number of reflections. Experiments show that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.
Modelling rainfall amounts using mixed-gamma model for Kuantan district
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts; the model presented is for the independent case. The formulae for the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sum of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sums of rainfall amounts are not significantly different at the 5% significance level from the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
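A mixed-gamma variable of the kind described is zero with some probability p0 and gamma-distributed otherwise, so its mean is (1 − p0)kθ and its second moment (1 − p0)kθ²(1 + k); for independent variables the means and variances of a sum simply add. A Monte Carlo check of that additivity (with illustrative parameters, not the Kuantan station estimates) might look like:

```python
# Sketch: mixed-gamma variable = 0 with prob p0, Gamma(shape k, scale theta)
# otherwise.  Verify by simulation that mean and variance of a sum of
# independent mixed-gamma variables are the sums of the individual moments.
import numpy as np

def mixed_gamma(rng, p0, k, theta, size):
    wet = rng.random(size) >= p0                 # nonzero-rainfall indicator
    return np.where(wet, rng.gamma(k, theta, size), 0.0)

def mg_mean_var(p0, k, theta):
    m = (1 - p0) * k * theta                     # E[X]
    v = (1 - p0) * k * theta**2 * (1 + k) - m**2 # E[X^2] - E[X]^2
    return m, v

rng = np.random.default_rng(42)
params = [(0.3, 2.0, 5.0), (0.5, 1.5, 8.0)]      # two "stations", illustrative
samples = sum(mixed_gamma(rng, *p, 1_000_000) for p in params)

m_th = sum(mg_mean_var(*p)[0] for p in params)   # theoretical mean of sum
v_th = sum(mg_mean_var(*p)[1] for p in params)   # theoretical variance of sum
print(samples.mean(), m_th)                      # close
print(samples.var(), v_th)                       # close
```

The same additivity is what makes the paper's two- and three-variable formulae extend to sums of more variables.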
Kelly, Maureen E.; Dowell, Jon; Husbands, Adrian; Newell, John; O'Flynn, Siun; Kropmans, Thomas; Dunne, Fidelma P.; Murphy, Andrew W.
2014-01-01
Background International medical students, those attending medical school outside of their country of citizenship, account for a growing proportion of medical undergraduates worldwide. This study aimed to establish the fairness, predictive validity and acceptability of Multiple Mini Interview (MMI) in an internationally diverse student population. Methods This was an explanatory sequential, mixed methods study. All students in First Year Medicine, National University of Ireland Galway 2012 we...
Directory of Open Access Journals (Sweden)
Xiao-Bao Shu
2005-01-01
By means of variational structure and Z2 group index theory, we obtain multiple periodic solutions to a class of second-order mixed-type differential equations x''(t−τ) + f(t, x(t), x(t−τ), x(t−2τ)) = 0 and x''(t−τ) + λ(t)f1(t, x(t), x(t−τ), x(t−2τ)) = x(t−τ).
Modelling ice microphysics of mixed-phase clouds
Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.
2017-12-01
The low-level Arctic mixed-phase clouds play a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To address this problem in modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with a sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. Recently, the SALSA module has been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good
A test for the parameters of multiple linear regression models ...
African Journals Online (AJOL)
A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...
Delta-tilde interpretation of standard linear mixed model results
DEFF Research Database (Denmark)
Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra
2016-01-01
effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
Linking effort and fishing mortality in a mixed fisheries model
DEFF Research Database (Denmark)
Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby
2012-01-01
in fish stocks has led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed...... fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas...
Modeling of speed distribution for mixed bicycle traffic flow
Directory of Open Access Journals (Sweden)
Cheng Xu
2015-11-01
Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles; in this article, we study the speed distribution of mixed bicycle traffic, which has been ignored in the past. Field speed data were collected in Hangzhou, China, at different survey sites, under different traffic conditions and percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were tested against the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data very well. Relationships between mean speed and electric bicycle proportions were then proposed using linear regression models, from which the mean speed for purely electric or purely regular bicycle traffic can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
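The distribution-fitting step described above can be reproduced generically with scipy: fit each candidate model by maximum likelihood and screen it with a Kolmogorov-Smirnov test. The speeds below are simulated, not the Hangzhou field data:

```python
# Sketch: fitting log-normal and Weibull models to (simulated) bicycle
# speed data and checking goodness of fit with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
speeds = rng.lognormal(mean=np.log(17.0), sigma=0.25, size=500)  # km/h

for dist in (stats.lognorm, stats.weibull_min):
    params = dist.fit(speeds, floc=0)            # fix location at zero
    ks = stats.kstest(speeds, dist.cdf, args=params)
    print(dist.name, ks.pvalue)                  # large p: fit not rejected
```

Note that a KS test with parameters estimated from the same data is optimistic; the paper's formal conclusions would rest on its own test procedure.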
Cleanthous, Sophie; Strzok, Sara; Pompilus, Farrah; Cano, Stefan; Marquis, Patrick; Cohan, Stanley; Goldman, Myla D; Kresa-Reahl, Kiren; Petrillo, Jennifer; Castrillo-Viguera, Carmen; Cadavid, Diego; Chen, Shih-Yin
2018-01-01
ABILHAND, a manual ability patient-reported outcome instrument originally developed for stroke patients, has been used in multiple sclerosis clinical trials; however, psychometric analyses indicated the measure's limited measurement range and precision in higher-functioning multiple sclerosis patients. The purpose of this study was to identify candidate items to expand the measurement range of the ABILHAND-56, thus improving its ability to detect differences in manual ability in higher-functioning multiple sclerosis patients. A step-wise mixed methods design strategy was used, comprising two waves of patient interviews, a combination of qualitative (concept elicitation and cognitive debriefing) and quantitative (Rasch measurement theory) analytic techniques, and consultation interviews with three clinical neurologists specializing in multiple sclerosis. Original ABILHAND was well understood in this context of use. Eighty-two new manual ability concepts were identified. Draft supplementary items were generated and refined with patient and neurologist input. Rasch measurement theory psychometric analysis indicated supplementary items improved targeting to higher-functioning multiple sclerosis patients and measurement precision. The final pool of Early Multiple Sclerosis Manual Ability items comprises 20 items. The synthesis of qualitative and quantitative methods used in this study improves the ABILHAND content validity to more effectively identify manual ability changes in early multiple sclerosis and potentially help determine treatment effect in higher-functioning patients in clinical trials.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
A brief introduction to regression designs and mixed-effects modelling by a recent convert
DEFF Research Database (Denmark)
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic...... tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable selection are discussed. The advantages of these techniques are exemplified in an analysis of a word...
Medicare capitation model, functional status, and multiple comorbidities: model accuracy
Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena
2012-01-01
Objective This study examined financial implications of CMS-Hierarchical Condition Categories (HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on the prior evidence about possible synergy between these conditions and activities of daily living (ADL) deficiencies and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, CHF and dementia. Methods For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual’s annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less than optimal adjustment for these chronic conditions. Conclusions Information about beneficiary functional status should be incorporated in reimbursement models since underpaying providers for caring for population with multiple comorbidities may provide severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646
Standard model fermion hierarchies with multiple Higgs doublets
International Nuclear Information System (INIS)
Solaguren-Beascoa Negre, Ana
2016-01-01
The hierarchies between the Standard Model (SM) fermion masses and mixing angles and the origin of neutrino masses are two of the biggest mysteries in particle physics. We extend the SM with new Higgs doublets to solve these issues. The lightest fermion masses and the mixing angles are generated through radiative effects, correctly reproducing the hierarchy pattern. Neutrino masses are generated in the see-saw mechanism.
Mixing height derived from the DMI-HIRLAM NWP model, and used for ETEX dispersion modelling
Energy Technology Data Exchange (ETDEWEB)
Soerensen, J.H.; Rasmussen, A. [Danish Meteorological Inst., Copenhagen (Denmark)
1997-10-01
For atmospheric dispersion modelling it is of great significance to estimate the mixing height well. Mesoscale and long-range diffusion models using output from numerical weather prediction (NWP) models may well use NWP model profiles of wind, temperature and humidity in the computation of the mixing height. This is dynamically consistent and enables calculation of the mixing height for predicted states of the atmosphere. In autumn 1994, the European Tracer Experiment (ETEX) was carried out with the objective of validating atmospheric dispersion models. The Danish Meteorological Institute (DMI) participates in the model evaluations with the Danish Emergency Response Model of the Atmosphere (DERMA), using NWP model data from the DMI version of the High Resolution Limited Area Model (HIRLAM) as well as from the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). In DERMA, calculation of mixing heights is performed based on a bulk Richardson number approach. Comparing with tracer gas measurements from the first ETEX experiment, a sensitivity study is performed for DERMA. Using DMI-HIRLAM data, the study shows that optimum values of the critical bulk Richardson number in the range 0.15-0.35 are adequate. These results are in agreement with recent mixing height verification studies against radiosonde data. The fairly large range of adequate critical values is a signature of the robustness of the method. Direct verification results against observed mixing heights from operational radiosondes released under the ETEX plume are presented.
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data and the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.
Hatzell, Marta C.
2014-12-09
Efficient conversion of "mixing energy" to electricity through capacitive mixing (CapMix) has been limited by low energy recoveries, low power densities, and noncontinuous energy production resulting from intermittent charging and discharging cycles. We show here that a CapMix system based on a four-reactor process with flow electrodes can generate constant and continuous energy, providing a more flexible platform for harvesting mixing energy. The power densities were dependent on the flow-electrode carbon loading, with 5.8 ± 0.2 mW m⁻² continuously produced in the charging reactor and 3.3 ± 0.4 mW m⁻² produced in the discharging reactor (9.2 ± 0.6 mW m⁻² for the whole system) when the flow-electrode carbon loading was 15%. Additionally, when the flow-electrode electrolyte ion concentration increased from 10 to 20 g L⁻¹, the total power density of the whole system (charging and discharging) increased to 50.9 ± 2.5 mW m⁻².
Multiple Scenario Generation of Subsurface Models
DEFF Research Database (Denmark)
Cordua, Knud Skou
of information is obeyed such that no unknown assumptions and biases influence the solution to the inverse problem. This involves a definition of the probabilistically formulated inverse problem, a discussion about how prior models can be established based on statistical information from sample models...... of the probabilistic formulation of the inverse problem. This function is based on an uncertainty model that describes the uncertainties related to the observed data. In a similar way, a formulation of the prior probability distribution that takes into account uncertainties related to the sample model statistics...... similar to observation uncertainties. We refer to the effect of these approximations as modeling errors. Examples that show how the modeling error is estimated are provided. Moreover, it is shown how these effects can be taken into account in the formulation of the posterior probability distribution...
Multiple system modelling of waste management
International Nuclear Information System (INIS)
Eriksson, Ola; Bisaillon, Mattias
2011-01-01
Highlights: → Linking of models will provide a more complete, correct and credible picture of the systems. → The linking procedure is easy to perform and also leads to activation of project partners. → The simulation procedure is a bit more complicated and calls for the ability to run both models. - Abstract: Due to increased environmental awareness, planning and performance of waste management has become more and more complex. Therefore waste management has early been subject to different types of modelling. Another field with long experience of modelling and systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there are very few attempts to combine them. Waste management systems can be linked together with energy systems through incineration plants. The models for waste management can be modelled on a quite detailed level whereas surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions on the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another one for the waste management system (ORWARE), can be linked together. The strengths and weaknesses with model linking are discussed when compared to simplistic assumptions on effects in the energy and waste management systems. It is concluded that the linking of models will provide a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.
Models for fluid flows with heat transfer in mixed convection
International Nuclear Information System (INIS)
Mompean Munhoz da Cruz, G.
1989-06-01
Second-order models were studied in order to predict turbulent flows with heat transfer. The equations used correspond to the characteristic scales of turbulent flows, and the order of magnitude of the terms of the equations is analyzed using Reynolds and Peclet numbers. The two-equation (k-ε) model is applied in the hydrodynamic study. Two models are developed for the heat transfer analysis: the Prt + θ² model and the complete model. In the first model, the turbulent thermal diffusivity is calculated using the turbulent Prandtl number and an equation for the variance of the temperature fluctuations. The second model consists of three equations, for the turbulent heat flux, the variance of the temperature fluctuations, and its dissipation rate. The equations were validated against four experiments, characterized by the analysis of: air flow after passing through a grid, at constant average temperature and with a temperature gradient; an axisymmetric air jet subjected to high and low heating temperatures; and the mixing (cold-hot) of two coaxial jets of sodium at high Peclet number. The complete model is shown to be the most suitable for the investigations presented [fr]
Mean multiplicity in the Regge models with rising cross sections
International Nuclear Information System (INIS)
Chikovani, Z.E.; Kobylisky, N.A.; Martynov, E.S.
1979-01-01
The behaviour of the mean multiplicity and the total cross section σt of hadron-hadron interactions is considered in the framework of Regge models at high energies. The generating function is constructed for the dipole and Froissaron models, and the mean multiplicity and multiplicity moments are calculated. It is shown that the mean multiplicity grows approximately as ln²s (s being the squared energy) in the dipole model, which is in good agreement with experiment. It is also found that in various Regge models the mean multiplicity grows approximately as σt ln s.
Discrete choice models with multiplicative error terms
DEFF Research Database (Denmark)
Fosgerau, Mogens; Bierlaire, Michel
2009-01-01
The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of humans over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of humans. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Sahana, Goutam; Mailund, Thomas; Lund, Mogens Sandø
2011-01-01
Introduction: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can...... be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called ‘GENMIX (genealogy based mixed model)’, which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. Subjects and Methods: We validated......
Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models
DEFF Research Database (Denmark)
Rombouts, Jeroen; Stentoft, Lars
2014-01-01
Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows easy computation of risk-neutral predictive price densities which take parameter uncertainty into account....... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors...... measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...
Goodness-of-fit tests in mixed models
Claeskens, Gerda
2009-05-12
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
Linear mixing model applied to coarse resolution satellite data
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the constrained least squares model and generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse resolution data for global studies.
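The constrained least-squares step can be sketched as an ordinary least-squares solution corrected by a Lagrange multiplier so the class fractions sum to one. The endmember reflectances below are illustrative stand-ins, not the AVHRR values used in the study:

```python
import numpy as np

def unmix_cls(endmembers, pixel):
    """Constrained least-squares unmixing: fractions forced to sum to 1.

    endmembers: (bands, n_classes) matrix of pure-class reflectances
    pixel: (bands,) observed reflectance vector
    """
    A, b = endmembers, pixel
    AtA_inv = np.linalg.inv(A.T @ A)
    ones = np.ones(A.shape[1])
    x_ls = AtA_inv @ A.T @ b                       # unconstrained LS solution
    lam = (ones @ x_ls - 1.0) / (ones @ AtA_inv @ ones)
    return x_ls - lam * (AtA_inv @ ones)           # sum-to-one correction

# three bands, two hypothetical endmembers (vegetation, soil)
E = np.array([[0.05, 0.25],
              [0.45, 0.30],
              [0.20, 0.35]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]              # synthetic 70/30 mixture
fractions = unmix_cls(E, mixed)
print(np.round(fractions, 3))                      # -> [0.7 0.3]
```

Stacking such per-pixel solutions over an image yields one fraction image per endmember class.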
Structural model analysis of multiple quantitative traits.
Directory of Open Access Journals (Sweden)
Renhua Li
2006-07-01
Full Text Available We introduce a method for the analysis of multilocus, multitrait genetic data that provides an intuitive and precise characterization of genetic architecture. We show that it is possible to infer the magnitude and direction of causal relationships among multiple correlated phenotypes and illustrate the technique using body composition and bone density data from mouse intercross populations. Using these techniques we are able to distinguish genetic loci that affect adiposity from those that affect overall body size and thus reveal a shortcoming of standardized measures such as body mass index that are widely used in obesity research. The identification of causal networks sheds light on the nature of genetic heterogeneity and pleiotropy in complex genetic systems.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
Estimating preferential flow in karstic aquifers using statistical mixed models.
Anaya, Angel A; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J; Meeker, John D; Alshawabkeh, Akram N
2014-01-01
Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models (SMMs) are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the SMMs used in the study. © 2013, National Ground Water Association.
Multiple-lesion track-structure model
International Nuclear Information System (INIS)
Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.
1992-03-01
A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the X-ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions
Affine LIBOR Models with Multiple Curves
DEFF Research Database (Denmark)
Grbac, Zorana; Papapantoleon, Antonis; Schoenmakers, John
2015-01-01
are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows us to derive Fourier pricing formulas for caps, swaptions, and basis swaptions. A model specification...... with dependent LIBOR rates is developed that allows for an efficient and accurate calibration to a system of caplet prices....
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
A thermal mixing model of crossflow in tube bundles for use with the porous body approximation
International Nuclear Information System (INIS)
Ashcroft, J.; Kaminski, D.A.
1996-06-01
Diffusive thermal mixing in a heated tube bundle with a cooling fluid in crossflow was analyzed numerically. From the results of detailed two-dimensional models, which calculated the diffusion of heat downstream of one heated tube in an otherwise adiabatic flow field, a diffusion model appropriate for use with the porous body method was developed. The model accounts for both molecular and turbulent diffusion of heat by determining the effective thermal conductivity in the porous region. The model was developed for triangular shaped staggered tube bundles with pitch to diameter ratios between 1.10 and 2.00 and for Reynolds numbers between 1,000 and 20,000. The tubes are treated as nonconducting. Air and water were considered as working fluids. The effective thermal conductivity was found to be linearly dependent on the tube Reynolds number and fluid Prandtl number, and dependent on the bundle geometry. The porous body thermal mixing model was then compared against numerical models for flows with multiple heated tubes with very good agreement
Linear models for sound from supersonic reacting mixing layers
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H)-type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. With the flow compressibility kept fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to nonlinear calculations; this is achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, the instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding that of equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to have a pronounced effect on the slow-mode radiation by reducing its modal growth.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
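The positive-inference step, which propagates qualitative trends along the signed edges of the SDG, can be sketched as a simple graph traversal. The graph topology and variable names below are hypothetical, and a production SDG tool would also have to resolve conflicting trends arriving via multiple paths:

```python
def propagate(graph, source, sign):
    """Forward (positive) inference on a signed directed graph.

    graph: {node: [(neighbor, edge_sign), ...]} with edge_sign in {+1, -1}
    Returns a predicted qualitative trend (+1 up, -1 down) per reachable node.
    """
    trends = {source: sign}
    stack = [source]
    while stack:
        node = stack.pop()
        for nbr, esign in graph.get(node, []):
            if nbr not in trends:          # keep the first derived trend;
                trends[nbr] = trends[node] * esign
                stack.append(nbr)          # conflict resolution omitted here
    return trends

# toy reactor-like fault scenario (hypothetical topology)
g = {"feed_flow": [("level", +1), ("temperature", -1)],
     "level": [("pressure", +1)]}
scenario = propagate(g, "feed_flow", +1)
```

Each such scenario (a sign assignment over the variables) is then compared with the simulation model's outputs at the different scales.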
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone......, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.......
Mixed Frequency Data Sampling Regression Models: The R Package midasr
Directory of Open Access Journals (Sweden)
Eric Ghysels
2016-08-01
Full Text Available When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of MIDAS regression applications.
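The core of a MIDAS regression is a parsimonious weighting of many high-frequency lags by a low-dimensional function. A minimal sketch of the exponential Almon weighting commonly used in this framework, written here in Python rather than R, with theta values that are illustrative assumptions:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_high, weights):
    """Weighted combination of the most recent high-frequency lags;
    weights[0] multiplies the most recent observation."""
    recent = x_high[-len(weights):][::-1]
    return float(np.dot(weights, recent))

w = exp_almon_weights(0.1, -0.05, 12)   # 12 monthly lags, hypothetical thetas
x = np.linspace(1.0, 2.0, 24)           # toy monthly regressor
agg = midas_aggregate(x, w)             # regressor for the low-frequency model
```

Estimation then amounts to fitting the two theta parameters (and a slope) by nonlinear least squares instead of one free coefficient per lag.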
Directory of Open Access Journals (Sweden)
Hae Kyung Im
2012-02-01
Full Text Available The International HapMap project has made publicly available extensive genotypic data on a number of lymphoblastoid cell lines (LCLs). Building on this resource, many research groups have generated a large amount of phenotypic data on these cell lines to facilitate genetic studies of disease risk or drug response. However, one problem that may reduce the usefulness of these resources is the biological noise inherent to cellular phenotypes. We developed a novel method, termed Mixed Effects Model Averaging (MEM), which pools data from multiple sources and generates an intrinsic cellular growth rate phenotype. This intrinsic growth rate was estimated for each of over 500 HapMap cell lines. We then examined the association of this intrinsic growth rate with gene expression levels and found that almost 30% (2,967 out of 10,748) of the genes tested were significant with FDR less than 10%. We probed further to demonstrate evidence of a genetic effect on intrinsic growth rate by determining a significant enrichment in growth-associated genes among genes targeted by top growth-associated SNPs (as eQTLs). The estimated intrinsic growth rate as well as the strength of the association with genetic variants and gene expression traits are made publicly available through a cell-based pharmacogenomics database, PACdb. This resource should enable researchers to explore the mediating effects of proliferation rate on other phenotypes.
Modeling of Cd(II) sorption on mixed oxide
International Nuclear Information System (INIS)
Waseem, M.; Mustafa, S.; Naeem, A.; Shah, K.H.; Hussain, S.Y.; Safdar, M.
2011-01-01
Mixed oxide of iron and silicon (0.75 M Fe(OH)₃ : 0.25 M SiO₂) was synthesized and characterized by various techniques, including surface area analysis, point of zero charge (PZC), energy dispersive X-ray (EDX) spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) analysis. The uptake of Cd²⁺ ions on the mixed oxide increased with pH, temperature and metal ion concentration. Sorption data have been interpreted in terms of both the Langmuir and Freundlich models. The Xm values at pH 7 are found to be almost twice those at pH 5. The values of both ΔH and ΔS were found to be positive, indicating that the sorption process was endothermic and accompanied by the dehydration of Cd²⁺. Further, the negative value of ΔG confirms the spontaneity of the reaction. An ion exchange mechanism was suggested for Cd²⁺ ions at pH 5, whereas at pH 7 ion exchange was found to be coupled with nonspecific adsorption of metal cations. (author)
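Fitting Langmuir and Freundlich isotherms to sorption data is usually done through their linearized forms. A sketch on synthetic equilibrium data, with parameter values that are illustrative assumptions rather than the fitted values from this study:

```python
import numpy as np

# synthetic equilibrium data generated from a Langmuir isotherm
Xm_true, b_true = 2.0, 1.5                         # hypothetical Xm, b
Ce = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])     # equilibrium concentration
qe = Xm_true * b_true * Ce / (1.0 + b_true * Ce)   # sorbed amount

# Langmuir linearization: Ce/qe = Ce/Xm + 1/(Xm*b)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
Xm_fit, b_fit = 1.0 / slope, slope / intercept

# Freundlich linearization: ln qe = ln Kf + (1/n) ln Ce
n_inv, lnKf = np.polyfit(np.log(Ce), np.log(qe), 1)
```

On data that truly follow a Langmuir isotherm, the Langmuir fit recovers Xm and b exactly, while the Freundlich slope 1/n falls between 0 and 1, reflecting the saturating shape.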
New experimental model of multiple myeloma.
Telegin, G B; Kalinina, A R; Ponomarenko, N A; Ovsepyan, A A; Smirnov, S V; Tsybenko, V V; Homeriki, S G
2001-06-01
NSO/1 (P3x63Ay 8Ut) and SP20 myeloma cells were inoculated to BALB/c OlaHsd mice. NSO/1 cells allowed adequate stage-by-stage monitoring of tumor development. The adequacy of this model was confirmed in experiments with conventional cytostatics: prospidium and cytarabine caused necrosis of tumor cells and reduced animal mortality.
Animal model of human disease. Multiple myeloma
Radl, J.; Croese, J.W.; Zurcher, C.; Enden-Vieveen, M.H.M. van den; Leeuw, A.M. de
1988-01-01
Animal models of spontaneous and induced plasmacytomas in some inbred strains of mice have proven to be useful tools for different studies on tumorigenesis and immunoregulation. Their wide applicability and the fact that after their intravenous transplantation, the recipient mice developed bone
Multiple Social Networks, Data Models and Measures for
DEFF Research Database (Denmark)
Magnani, Matteo; Rossi, Luca
2017-01-01
Multiple Social Network Analysis is a discipline defining models, measures, methodologies, and algorithms to study multiple social networks together as a single social system. It is particularly valuable when the networks are interconnected, e.g., the same actors are present in more than one...
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
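The baseline model's structure, an exponential dose-response combined with a Weibull time-to-death distribution, can be sketched directly. The parameter values below are hypothetical placeholders, not the fitted values from the paper:

```python
import numpy as np

def p_death(dose, k):
    """Exponential dose-response: probability of eventual death from a dose."""
    return 1.0 - np.exp(-k * dose)

def ttd_cdf(t, shape, scale):
    """Weibull CDF for time-to-death among responders (t in days)."""
    return 1.0 - np.exp(-(t / scale) ** shape)

def p_death_by_time(dose, t, k, shape, scale):
    """Probability of death by time t: respond to the dose, then die by t."""
    return p_death(dose, k) * ttd_cdf(t, shape, scale)

# hypothetical parameters for illustration only
k, shape, scale = 2e-6, 1.8, 4.0       # per-spore rate; Weibull shape, scale
print(round(p_death(1e6, k), 3))       # -> 0.865 for a 1e6-spore dose
```

Multiple-dose variants would modify the hazard of each subsequent dose, which is the extension the alternative models in the paper assess.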
International Nuclear Information System (INIS)
Jackson, V.L.
2011-01-01
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Explaining clinical behaviors using multiple theoretical models
Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie
2012-01-01
Abstract Background In the field of implementation research, there is an increased interest in use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, overlapping behavioral theories and models. We reflect on the strengths and weaknesses of...
Airport choice model in multiple airport regions
Directory of Open Access Journals (Sweden)
Claudia Muñoz
2017-02-01
Full Text Available Purpose: This study aims to analyze the travel choices made by air transportation users in multi-airport regions, a crucial component when planning passenger redistribution policies. The purpose of this study is to find a utility function which makes it possible to identify the variables that influence users’ choice of airport on routes to the main cities in the Colombian territory. Design/methodology/approach: This research generates a Multinomial Logit Model (MNL), which is based on the theory of utility maximization and on data obtained from revealed and stated preference surveys applied to users who reside in the metropolitan area of the Aburrá Valley (Colombia). This zone is the only one in the Colombian territory which has two neighboring airports for domestic flights. The airports included in the modeling process were Enrique Olaya Herrera (EOH) Airport and José María Córdova (JMC) Airport. Several model structures were tested, and the MNL proved to be the most significant, revealing that the common variables affecting passenger airport choice include the airfare, the cost of traveling to the airport, and the time needed to get to the airport. Findings and Originality/value: The airport choice model which was calibrated is a valid and powerful tool for calculating the probability of each analyzed airport being chosen for domestic flights in the Colombian territory, bearing in mind the specific characteristics of each of the attributes contained in the utility function. In addition, these probabilities will be used to calculate future market shares of the two airports considered in this study, generating a support tool for airport and airline marketing policies.
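Once a utility function is calibrated, MNL choice probabilities follow from the logit formula. A minimal sketch with two alternatives; the coefficients and attribute values below are illustrative assumptions, not the calibrated Colombian model:

```python
import numpy as np

def mnl_probabilities(V):
    """Logit choice probabilities from systematic utilities V (one per alternative)."""
    expV = np.exp(V - V.max())      # shift by max for numerical stability
    return expV / expV.sum()

# hypothetical coefficients: airfare, access cost, access time
beta = np.array([-0.010, -0.008, -0.030])
X = np.array([[120.0, 15.0, 30.0],   # city-airport-like alternative
              [100.0, 25.0, 60.0]])  # more distant alternative
V = X @ beta                         # systematic utility of each airport
p = mnl_probabilities(V)             # predicted market shares
```

With these toy numbers the shorter-access alternative captures the larger share, which is the kind of market-share prediction the calibrated model supports.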
Subgrid models for mass and thermal diffusion in turbulent mixing
International Nuclear Information System (INIS)
Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H
2010-01-01
We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.
Attempted integration of multiple species of turaco into a mixed-species aviary.
Valuska, Annie J; Leighty, Katherine A; Ferrie, Gina M; Nichols, Valerie D; Tybor, Cheryl L; Plassé, Chelle; Bettinger, Tamara L
2013-03-01
Mixed-species exhibits offer a variety of benefits but can be challenging to maintain due to difficulty in managing interspecific interactions. This is particularly true when little has been documented on the behavior of the species being mixed. This was the case when we attempted to house three species of turaco (family: Musophagidae) together with other species in a walk-through aviary. To learn more about the behavior of great blue turacos, violaceous turacos, and white-bellied gray go-away birds, we supplemented opportunistic keeper observations with systematic data collection on their behavior, location, distance from other birds, and visibility to visitors. Keepers reported high levels of aggression among turacos, usually initiated by a go-away bird or a violaceous turaco. Most aggression occurred during feedings or when pairs were defending nest sites. Attempts to reduce aggression by temporarily removing birds to holding areas and reintroducing them days later were ineffective. Systematic data collection revealed increased social behavior, including aggression, during breeding season in the violaceous turacos, as well as greater location fidelity. These behavioral cues may be useful in predicting breeding behavior in the future. Ultimately, we were only able to house three species of turaco together for a short time, and prohibitively high levels of conflict occurred when pairs were breeding. We conclude that mixing these three turaco species is challenging and may not be the most appropriate housing situation for them, particularly during breeding season. However, changes in turaco species composition, sex composition, or exhibit design may result in more compatible mixed-turaco species groups. © 2012 Wiley Periodicals, Inc.
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
Adaptability and stability of maize varieties using mixed model methodology
Directory of Open Access Journals (Sweden)
Walter Fernandes Meirelles
2012-01-01
Full Text Available The objective of this study was to evaluate the performance, adaptability and stability of corn cultivars simultaneously in unbalanced experiments, using the method of harmonic means of the relative performance of genetic values. The grain yield of 45 cultivars, including hybrids and varieties, was evaluated in 49 environments in two growing seasons. In the 2007/2008 growing season, 36 cultivars were evaluated and in 2008/2009 25 cultivars, of which 16 were used in both seasons. Statistical analyses were performed based on mixed models, considering genotypes as random and replications within environments as fixed factors. The experimental precision in the combined analyses was high (accuracy estimates > 92 %. Despite the existence of genotype x environment interaction, hybrids and varieties with high adaptability and stability were identified. Results showed that the method of harmonic means of the relative performance of genetic values is a suitable method for maize breeding programs.
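The harmonic mean of the relative performance of genetic values rewards cultivars that are both high-yielding and stable, since the harmonic mean penalizes poor performance in any single environment. A sketch on a hypothetical matrix of predicted genotypic values (the yields below are invented for illustration):

```python
import numpy as np

def hm_relative_performance(genotypic_values):
    """Harmonic mean of relative performance of genotypic values.

    genotypic_values: (n_genotypes, n_environments) predicted values (e.g. BLUPs).
    Each column is scaled by its environment mean; the harmonic mean across
    environments then jointly reflects performance level and stability.
    """
    rel = genotypic_values / genotypic_values.mean(axis=0)
    n_env = rel.shape[1]
    return n_env / (1.0 / rel).sum(axis=1)

# three hypothetical cultivars over four environments (t/ha)
G = np.array([[6.0, 6.1, 5.9, 6.0],    # high and stable
              [7.5, 4.5, 7.4, 4.6],    # similar mean, unstable
              [5.0, 5.2, 5.1, 4.9]])   # low and stable
scores = hm_relative_performance(G)
```

In this toy example the stable high-yielding cultivar ranks above the unstable one of similar mean yield, illustrating the simultaneous adaptability-and-stability ranking described above.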
Latent Fundamentals Arbitrage with a Mixed Effects Factor Model
Directory of Open Access Journals (Sweden)
Andrei Salem Gonçalves
2012-09-01
We propose a single-factor mixed effects panel data model to create an arbitrage portfolio that identifies differences in firm-level latent fundamentals. Furthermore, we show that even though the characteristics that affect returns are unknown variables, it is possible to identify the strength of the combination of these latent fundamentals for each stock by following a simple approach using historical data. As a result, a trading strategy that bought the stocks with the best fundamentals (strong fundamentals portfolio) and sold the stocks with the worst ones (weak fundamentals portfolio) realized significant risk-adjusted returns in the U.S. market for the period between July 1986 and June 2008. To ensure robustness, we performed subperiod and seasonal analyses, adjusted for trading costs, and found further empirical evidence that a simple investment rule that identifies these latent fundamentals from the structure of past returns can lead to profit.
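The idea of extracting a firm-level latent strength from the structure of past returns can be sketched with simulated data. The per-stock intercept of a regression on a common factor is used here as a crude stand-in for the paper's mixed-effects estimate; all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 20                       # months, stocks (hypothetical panel)
f = rng.normal(0.01, 0.04, T)        # common market factor
alpha = rng.normal(0.0, 0.002, N)    # latent firm-level fundamental strength
beta = rng.uniform(0.5, 1.5, N)
r = alpha + f[:, None] * beta + rng.normal(0, 0.03, (T, N))

# Per-stock OLS of returns on the factor; the intercepts estimate the
# latent fundamentals (a simplification of the mixed-effects fit).
X = np.column_stack([np.ones(T), f])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
alpha_hat = coef[0]

order = np.argsort(alpha_hat)
weak, strong = order[:5], order[-5:]               # worst / best fundamentals
spread = r[:, strong].mean() - r[:, weak].mean()   # long-short mean return
```

With enough history the estimated intercepts correlate strongly with the true latent strengths, which is what makes the ranking-based long-short rule informative.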
Rahn, A C; Köpke, S; Backhus, I; Kasper, J; Anger, K; Untiedt, B; Alegiani, A; Kleiter, I; Mühlhauser, I; Heesen, C
2018-02-01
Treatment decision-making is complex for people with multiple sclerosis. Providing in-depth information on the available options is hardly feasible in regular neurologist encounters. The "nurse decision coach model" was developed to redistribute health professionals' tasks in supporting immunotreatment decision-making, following the principles of informed shared decision-making. To test the feasibility of a decision coaching programme and recruitment strategies to inform the main trial. Feasibility testing and parallel pilot randomised controlled trial, accompanied by a mixed methods process evaluation. Two German multiple sclerosis university centres. People with suspected or relapsing-remitting multiple sclerosis facing immunotreatment decisions on first-line drugs were recruited. Randomisation to the intervention (n = 38) or control group (n = 35) was performed on a daily basis. Quantitative and qualitative process data were collected from people with multiple sclerosis, nurses and physicians. We report on the development and piloting of the decision coaching programme. It comprises a training course for multiple sclerosis nurses and the coaching intervention. The intervention consists of up to three structured nurse-led decision coaching sessions, access to an evidence-based online information platform (DECIMS-Wiki) and a final physician consultation. After feasibility testing, a pilot randomised controlled trial was performed. People with multiple sclerosis were randomised to the intervention or control group. The latter also had access to the DECIMS-Wiki, but otherwise received care as usual. Nurses were not blinded to group assignment, while people with multiple sclerosis and physicians were. The primary outcome was 'informed choice' after six months, including the sub-dimensions risk knowledge (after 14 days), attitude towards immunotreatment (after the physician consultation), and treatment uptake (after six months). Quantitative process evaluation data
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The
A mixed integer linear programming model applied in barge planning for Omya
Directory of Open Access Journals (Sweden)
David Bredström
2015-12-01
This article presents a mathematical model for barge transport planning on the river Rhine, which is part of a decision support system (DSS) recently taken into use by the Swiss company Omya. The system is operated by Omya's regional office in Cologne, Germany, responsible for distribution planning at the regional distribution center (RDC) in Moerdijk, the Netherlands. The distribution planning is a vital part of supply chain management for Omya's production of Norwegian high-quality calcium carbonate slurry, supplied to European paper manufacturers. The DSS operates within a vendor managed inventory (VMI) setting, where the customer inventories are monitored by Omya, who decides upon the refilling days and quantities delivered by barges. The barge planning problem falls into the category of inventory routing problems (IRP) and is further characterized by multiple products, a heterogeneous fleet with availability restrictions (the fleet is owned by a third party), vehicle compartments, dependency of barge capacity on water level, multiple customer visits, bounded customer inventories and a rolling planning horizon. There are additional modelling details which had to be considered to make it possible to employ the model in practice at a sufficient level of detail. To the best of our knowledge, we have not been able to find similar models covering all these aspects in barge planning. This article presents the developed mixed-integer programming model and discusses practical experience with its solution. It also briefly puts the model into the context of the entire business case of value chain optimization at Omya.
Directory of Open Access Journals (Sweden)
Cascaval Dan
2004-01-01
The mixing time for bioreactors depends mainly on the rheological properties of the broths, the biomass concentration and morphology, the mixing system characteristics and the fermentation conditions. To quantify the influence of these factors on the mixing efficiency of stirred bioreactors, aerated broths of bacteria (P. shermanii), yeasts (S. cerevisiae) and fungi (P. chrysogenum, free mycelia and mycelial aggregates) of different concentrations have been investigated using a laboratory bioreactor with a double turbine impeller. The experimental data indicated that the influence of the rotation speed, aeration rate and stirrer positions on the mixing intensity strongly differs from one system to another and must be correlated with the microorganism characteristics, namely the biomass concentration and morphology. Moreover, compared with non-aerated broths, the variations of the mixing time with the considered parameters are very different, due to the complex flow mechanism of gas-liquid dispersions. By means of the experimental data and using a multiregression analysis method, mathematical correlations for the mixing time of the general form tm = (a1·Cx² + a2·Cx + a3·lg Va + a4·N² + a5·N + a6) / (a7·L² + a8·L + a9) were established. The proposed equations offer good agreement with the experiments, the average deviation being ±6.7% to ±9.4%, and are adequate for the flow regime Re < 25,000.
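A multiregression fit of this quadratic/logarithmic form is an ordinary least-squares problem once the nonlinear terms are placed in the design matrix. The sketch below uses synthetic data with invented coefficients and units (Cx, Va, N only, with a fixed stirrer position), not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical measurements: biomass conc. Cx [g/l], aeration rate Va [l/min],
# rotation speed N [rpm], and measured mixing time tm [s].
Cx = rng.uniform(5, 40, 60)
Va = rng.uniform(1, 10, 60)
N = rng.uniform(200, 800, 60)
tm = (0.02 * Cx**2 - 0.3 * Cx + 12 * np.log10(Va)
      + 1e-5 * N**2 - 0.02 * N + 40 + rng.normal(0, 1.0, 60))

# Design matrix mirroring the quadratic/logarithmic terms of the correlation.
A = np.column_stack([Cx**2, Cx, np.log10(Va), N**2, N, np.ones_like(Cx)])
coef, *_ = np.linalg.lstsq(A, tm, rcond=None)

pred = A @ coef
rel_dev = np.mean(np.abs(pred - tm) / tm) * 100   # average deviation in %
```

The fitted coefficients play the role of a1..a6, and the average relative deviation is the same figure of merit the abstract quotes.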
Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.
2013-01-01
Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, including polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine) and observations from Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.
Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G
2014-11-01
Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
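The frequentist optimization variant mentioned in the abstract reduces to a constrained least-squares problem: find source proportions on the simplex that best reproduce the mixed sample's tracer signature. The tracer values below are invented for illustration; the three columns stand for the paper's three sources.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical tracer signatures (rows = geochemical tracers, cols = sources:
# arable topsoil, road verge, subsurface) and one mixed-sediment sample.
A = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.5, 0.4],
              [0.2, 0.1, 2.0],
              [0.8, 0.6, 0.3]])
p_true = np.array([0.15, 0.09, 0.76])
b = A @ p_true                       # tracer signature of the mixture

def loss(p):
    return np.sum((A @ p - b) ** 2)  # misfit between predicted and observed mix

res = minimize(loss, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
p_hat = res.x                        # estimated source proportions
```

A Bayesian version replaces this point estimate with a posterior over the simplex, which is where the prior and error-model choices studied in the paper enter.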
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses.
Modeling of RFID-Enabled Real-Time Manufacturing Execution System in Mixed-Model Assembly Lines
Directory of Open Access Journals (Sweden)
Zhixin Yang
2015-01-01
To quickly respond to diverse product demands, mixed-model assembly lines are widely adopted in discrete manufacturing industries. Besides the complexity in material distribution, mixed-model assembly involves a variety of components, different process plans and fast production changes, which greatly increase the difficulty of agile production management. Aiming at breaking through the bottlenecks in existing production management, a novel RFID-enabled manufacturing execution system (MES), featured with real-time and wireless information interaction capability, is proposed to identify various manufacturing objects, including WIPs, tools, and operators, and to trace their movements throughout the production processes. However, being subject to constraints in terms of safety stock, machine assignment, setup, and scheduling requirements, the optimization of the RFID-enabled MES model for production planning and scheduling is an NP-hard problem. A new heuristic generalized Lagrangian decomposition approach has been proposed for model optimization, which decomposes the model into three subproblems: computation of the optimal configuration of RFID sensor networks, optimization of production planning subject to machine setup cost and safety stock constraints, and optimization of scheduling for minimized overtime. RFID signal processing methods that can handle unreliable, redundant, and missing tag events are also described in detail. The model validity is discussed through algorithm analysis and verified through numerical simulation. The proposed design scheme has important reference value for the application of RFID in multiple manufacturing fields, and also lays a vital research foundation for leveraging digital and networked manufacturing systems towards intelligence.
Entrepreneurial intention modeling using hierarchical multiple regression
Directory of Open Access Journals (Sweden)
Marina Jeger
2014-12-01
The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
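The over-dispersion that motivates such models is easy to demonstrate: adding a log-scale random effect to otherwise-Poisson counts inflates the variance well beyond the mean. This is a generic simulation with invented parameters, not the MACAU model itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
mu = 5.0                                  # target mean read count
# A log-normal random effect (standing in for relatedness or hidden
# confounding) multiplies each sample's Poisson rate.
u = rng.normal(0.0, 0.5, n)
counts = rng.poisson(mu * np.exp(u - 0.125))  # e^{-sigma^2/2} keeps the mean at mu

dispersion = counts.var() / counts.mean()     # equals 1 for a pure Poisson model
```

A plain Poisson fit to such counts understates the variance, which is why the paper's mixed model adds explicit random effects terms for it.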
Multiple Time Series Ising Model for Financial Market Simulations
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2015-01-01
In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples to the spins of the other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
The transition model test for serial dependence in mixed-effects models for binary data
DEFF Research Database (Denmark)
Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders
2017-01-01
Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...
Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio
2015-06-01
Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
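The simplest classical mixing model in this family is IEM (interaction by exchange with the mean), which relaxes every particle's concentration towards the ensemble mean at rate 1/tau; it conserves the mean and decays the variance as exp(-2t/tau). The sketch below verifies that behaviour on a synthetic ensemble (all parameters hypothetical); it is a baseline for, not a reproduction of, the improved mixing model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau, dt, steps = 2000, 2.0, 0.01, 400
c = rng.normal(1.0, 0.3, n)          # ensemble of particle concentrations

var = [c.var()]
for _ in range(steps):
    # IEM mixing model: dc/dt = -(c - <c>) / tau
    c += -dt * (c - c.mean()) / tau
    var.append(c.var())

# The mean is conserved and the variance decays as exp(-2 t / tau).
t_end = steps * dt
predicted = var[0] * np.exp(-2.0 * t_end / tau)
```

This link between the mixing model and the concentration-variance equation is exactly the connection the authors exploit to calibrate a better mixing model on the variance before transferring it to the PDF equation.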
Jonrinaldi; Rahman, T.; Henmaidi; Wirdianto, E.; Zhang, D. Z.
2018-03-01
This paper proposes a mathematical model for multiple-item Economic Production and Order Quantity (EPQ/EOQ) considering continuous and discrete demand simultaneously in a system consisting of a vendor and multiple buyers. The model is used to investigate the optimal production lot size of the vendor and the number-of-shipments policy for orders to multiple buyers. It considers the multiple buyers' holding costs as well as transportation costs, minimizing the total production and inventory costs of the system. The continuous demand from any other customers can be fulfilled at any time by the vendor, while the discrete demand from multiple buyers is fulfilled using a multiple-delivery policy with a number of shipments of items within the production cycle time. A mathematical model is developed to describe the system based on the EPQ and EOQ models. Solution procedures are proposed to solve the model using Mixed Integer Non-Linear Programming (MINLP) and algorithmic methods. A numerical example is then provided to illustrate the system, and the results are discussed.
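The classical single-item EPQ formula is the building block such vendor-buyer models extend: the optimal lot size balances setup cost against holding cost, with the holding term scaled by (1 - D/P) because stock accumulates only while production outpaces demand. The numbers below are a hypothetical worked example, not data from the paper.

```python
import math

# Hypothetical single-item data:
D = 12000   # annual demand (units/yr)
P = 30000   # annual production rate (units/yr)
S = 150.0   # setup cost per production run
H = 2.5     # holding cost per unit per year

Q_star = math.sqrt(2 * D * S / (H * (1 - D / P)))   # optimal production lot size
cycles = D / Q_star                                  # production runs per year
total_cost = D * S / Q_star + H * Q_star * (1 - D / P) / 2
```

At the optimum the annual setup cost and annual holding cost are exactly equal; the paper's MINLP generalizes this trade-off to multiple items, multiple buyers and shipment scheduling, which is what makes it no longer solvable in closed form.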
Correlations in multiple production on nuclei and Glauber model of multiple scattering
International Nuclear Information System (INIS)
Zoller, V.R.; Nikolaev, N.N.
1982-01-01
A critical analysis is given of the possibility of describing correlation phenomena in multiple production on nuclei within the framework of the Glauber multiple-scattering model, generalized to particle-production processes by Capella, Krziwinski and Shabelsky. The main conclusion is that the suggested generalization of the Glauber model gives dependences on Ng(Np) (where Ng is the number of 'grey' tracks and Np the number of protons emitted from the nucleus) and, ultimately, on β (where β is the number of intranuclear interactions) that contradict experiment. Independently of the choice of the relation between β and Ng(Np) in the model, the rapidity correlator R_η is overstated in the central region and understated in the region of nucleus fragmentation. In mean multiplicities these two contradictions with experiment are disguised by accidental compensation, so agreement with experiment in Ns (as a function of Ng) cannot be an argument in favour of the model. It is concluded that the eikonal model does not permit a quantitative description of correlation phenomena in multiple production on nuclei.
Mixed dark matter in left-right symmetric models
Energy Technology Data Exchange (ETDEWEB)
Berlin, Asher [Department of Physics, University of Chicago,Chicago, Illinois 60637 (United States); Fox, Patrick J. [Theoretical Physics Department, Fermilab,Batavia, Illinois 60510 (United States); Hooper, Dan [Center for Particle Astrophysics, Fermi National Accelerator Laboratory,Batavia, Illinois 60510 (United States); Department of Astronomy and Astrophysics, University of Chicago,Chicago, Illinois 60637 (United States); Mohlabeng, Gopolang [Center for Particle Astrophysics, Fermi National Accelerator Laboratory,Batavia, Illinois 60510 (United States); Department of Physics and Astronomy, University of Kansas,Lawrence, Kansas 66045 (United States)
2016-06-08
Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W′ boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario g_R = g_L. This region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
Lyoo, C H; Jeong, Y; Ryu, Y H; Lee, S Y; Song, T J; Lee, J H; Rinne, J O; Lee, M S
2008-02-01
To study the effect of disease duration on the clinical, neuropsychological and [(18)F]-deoxyglucose (FDG) PET findings in patients with mixed-type multiple system atrophy (MSA), this study included 16 controls and 37 mixed-type MSA patients with a shorter than 3-year history of cerebellar or parkinsonian symptoms. We classified the patients into three groups according to the duration of parkinsonian or cerebellar symptoms and assessed them with a neuropsychological battery. We compared the FDG PET findings of each group of patients with controls. Group I patients frequently had memory and frontal executive dysfunction. They showed hypometabolism in the frontal cortex, anterior cerebellar hemisphere and vermis. They had parkinsonian motor deficits, but no basal ganglia hypometabolism. Group II and III patients frequently had multiple-domain cognitive impairments, and showed hypometabolism in the frontal and parieto-temporal cortices. Hypometabolism of the bilateral caudate and the left posterolateral putamen was observed in Group II, and of the whole striatum in Group III. In summary, cortical hypometabolism begins in the frontal cortex and spreads to the parieto-temporal cortex in MSA. This spreading pattern coincides with the progressive cognitive decline. Early caudate hypometabolism may also contribute to the cognitive impairment. Parkinsonian motor deficits precede putaminal hypometabolism, which begins in its posterolateral part. Cerebellar hypometabolism occurs early in the clinical course and seems to be a relevant metabolic descriptor of cerebellar deficits.
Numerical modelling of the atmospheric mixing-layer diurnal evolution
International Nuclear Information System (INIS)
Molnary, L. de.
1990-03-01
This paper introduces a numerical procedure to determine the temporal evolution of the height, potential temperature and mixing ratio of the atmospheric mixing layer. The time and spatial derivatives were evaluated via a forward-in-time scheme to predict the local evolution of the mixing-layer parameters, and a forward-in-time, upstream-in-space scheme to predict the evolution of the mixing layer over a flat region with a one-dimensional advection component. The surface turbulent fluxes of sensible and latent heat were expressed using a simple sine wave that is a function of the hour of the day and the kind of surface (water or land). (author) [pt]
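A zero-order slab model driven by exactly such a sine-wave surface heat flux can be integrated in a few lines: with a linearly stratified atmosphere (lapse rate gamma), the encroachment form dh/dt = H(t)/(gamma*h) deepens the mixed layer through the day. All parameter values below are generic land-like assumptions, not the paper's settings.

```python
import numpy as np

# Zero-order "encroachment" slab model: surface sensible heat mixed into a
# linearly stratified atmosphere deepens the mixed layer.
gamma = 0.005            # potential-temperature lapse rate [K/m]
Hmax = 0.20              # peak kinematic heat flux [K m/s], land-like value
day = 12 * 3600.0        # daylight duration [s]
dt = 60.0
t = np.arange(0.0, day, dt)
flux = Hmax * np.sin(np.pi * t / day)   # sine-wave diurnal forcing

h = np.empty_like(t)
h[0] = 100.0                            # shallow layer at sunrise [m]
for k in range(1, len(t)):
    # forward-in-time step of dh/dt = H(t) / (gamma * h)
    h[k] = h[k - 1] + dt * flux[k - 1] / (gamma * h[k - 1])
```

The layer grows fastest around midday and approaches the analytic solution h² = h0² + (2/gamma)·∫H dt, reaching a kilometre-plus depth by sunset for these values.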
Models of mixed irradiation with a 'reciprocal-time' pattern of the repair function
Energy Technology Data Exchange (ETDEWEB)
Suzuki, Shozo; Miura, Yuri; Mizuno, Shoichi [Tokyo Metropolitan Inst. of Gerontology (Japan); Furusawa, Yoshiya [National Inst. of Radiological Sciences, Chiba (Japan)
2002-09-01
Suzuki presented models for mixed irradiation with two and multiple types of radiation by extending the Zaider and Rossi model, which is based on the theory of dual radiation action. In these models, the repair function was simply assumed to be semi-logarithmically linear (i.e., mono-exponential), or a first-order process, which has been experimentally contradicted. Fowler, however, suggested that the repair of radiation damage might be largely a second-order process rather than a first-order one, and presented data in support of this hypothesis. In addition, a second-order repair function is preferred to an n-exponential repair function because only one parameter is used in the former instead of 2n-1 parameters in the latter, although both repair functions fit the experimental data well. However, according to a second-order repair function, the repair rate depends on the dose, which is incompatible with the experimental data. We therefore revised the models for mixed irradiation by Zaider and Rossi and by Suzuki by substituting a 'reciprocal-time' pattern of the repair function, derived from the assumption that the repair rate is independent of the dose in a second-order repair function, for the first-order one in the reduction and interaction factors of the models, although the underlying mechanism for this assumption cannot be well explained. The reduction factor, which reduces the contribution of the square of a dose to cell killing in the linear-quadratic model and its derivatives, and the interaction factor, which likewise reduces the contribution of the interaction of two or more doses of different types of radiation, were formulated using a 'reciprocal-time' pattern of the repair function. Cell survivals calculated from the older and the newly modified models were compared in terms of the dose-rate by assuming various types of single and mixed irradiation. The result implies that the newly modified models for
Oxygen reduction kinetics on mixed conducting SOFC model cathodes
Energy Technology Data Exchange (ETDEWEB)
Baumann, F.S.
2006-07-01
The kinetics of the oxygen reduction reaction at the surface of mixed conducting solid oxide fuel cell (SOFC) cathodes is one of the main factors limiting the performance of these promising systems. For 'realistic' porous electrodes, however, it is usually very difficult to separate the influence of different resistive processes. Therefore, a suitable, geometrically well-defined model system was used in this work to enable an unambiguous distinction of individual electrochemical processes by means of impedance spectroscopy. The electrochemical measurements were performed on dense thin-film microelectrodes, prepared by PLD and photolithography, of mixed conducting perovskite-type materials. The first part of the thesis consists of an extensive impedance spectroscopic investigation of La0.6Sr0.4Co0.8Fe0.2O3 (LSCF) microelectrodes. An equivalent circuit was identified that describes the electrochemical properties of the model electrodes appropriately and enables an unambiguous interpretation of the measured impedance spectra. Hence, the dependencies of individual electrochemical processes such as the surface exchange reaction on a wide range of experimental parameters, including temperature, dc bias and oxygen partial pressure, could be studied. As a result, a comprehensive set of experimental data has been obtained, which was previously not available for a mixed conducting model system. In the course of the experiments on the dc bias dependence of the electrochemical processes, a new and surprising effect was discovered: it could be shown that a short but strong dc polarisation of an LSCF microelectrode at high temperature drastically improves its electrochemical performance with respect to the oxygen reduction reaction. The electrochemical resistance associated with the oxygen surface exchange reaction, initially the dominant contribution to the total electrode resistance, can be reduced by two orders of magnitude.
Multiple Response Regression for Gaussian Mixture Models with Known Labels.
Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng
2012-12-01
Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.
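The common-plus-unique decomposition at the heart of the proposal can be illustrated with a toy two-group regression: averaging the group-wise fits recovers the shared structure, while their difference isolates the group-specific part. This is a crude unpenalized stand-in for the paper's penalized joint estimator, with all data simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 3, 200
b_common = np.array([1.0, -2.0, 0.5])        # structure shared by all groups
b_unique = {0: np.array([0.0, 0.0, 1.0]),    # group-specific deviations
            1: np.array([0.0, 0.0, -1.0])}

est = {}
for g in (0, 1):
    X = rng.normal(size=(n, p))
    y = X @ (b_common + b_unique[g]) + rng.normal(0, 0.1, n)
    est[g], *_ = np.linalg.lstsq(X, y, rcond=None)   # separate per-group fit

# Average of the group fits recovers the common part; half the difference
# isolates the unique part.
common_hat = (est[0] + est[1]) / 2
unique_hat = (est[0] - est[1]) / 2
```

The paper's contribution is to estimate such common and unique structures jointly with penalties (and the response covariance), which borrows strength across groups instead of relying on post-hoc averaging.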
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers are legally accountable for dealing with the industrial waste generated by their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies for industrial waste. With reuse strategies and technologies, byproducts or wastes are returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery-strategy optimization problem for a typical class of industrial waste recycling processes, with the objective of maximizing profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected, and with what quantity of chemical products, in order to yield maximum marginal profits. The sales profits of chemical products, the set-up costs of these strategies and patterns, and the operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies, illustrating the superiority of combining multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling networks.
Prediction of stock markets by the evolutionary mix-game model
Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping
2008-06-01
This paper presents efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We introduce three modifications that add strategy-evolution abilities to the agents of the original mix-game model, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.
Metabolic modelling of polyhydroxyalkanoate copolymers production by mixed microbial cultures
Directory of Open Access Journals (Sweden)
Reis Maria AM
2008-07-01
Full Text Available Abstract Background This paper presents a metabolic model describing the production of polyhydroxyalkanoate (PHA) copolymers in mixed microbial cultures, using mixtures of acetic and propionic acid as carbon source material. Material and energetic balances were established on the basis of previously elucidated metabolic pathways. Equations were derived for the theoretical yields for cell growth and PHA production on mixtures of acetic and propionic acid as functions of the oxidative phosphorylation efficiency, the P/O ratio. The oxidative phosphorylation efficiency was estimated from rate measurements, which in turn allowed the estimation of the theoretical yield coefficients. Results The model was validated with experimental data collected in a sequencing batch reactor (SBR) operated under varying feeding conditions: feeding of acetic and propionic acid separately (control experiments), and feeding of acetic and propionic acid simultaneously. Two different feast and famine culture enrichment strategies were studied: (i) with acetate or (ii) with propionate as carbon source material. Metabolic flux analysis (MFA) was performed for the different feeding conditions and culture enrichment strategies. Flux balance analysis (FBA) was used to calculate optimal feeding scenarios for the production of high-quality PHA polymers, where it was found that a suitable polymer would be obtained when acetate is fed in excess and the feeding rate of propionate is limited to ~0.17 C-mol/(C-mol.h). The results were compared with published pure culture metabolic studies. Conclusion Acetate was more conducive toward the enrichment of a microbial culture with higher PHA storage fluxes and yields as compared to propionate. The P/O ratio was not only influenced by the selected microbial culture, but also by the carbon substrate fed to each culture, where higher P/O ratio values were consistently observed for acetate than propionate. MFA studies suggest that when mixtures of
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
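The piecewise-exponential "explosion" described in this abstract can be sketched in a few lines. The cut points, sample size and true parameters below are invented for illustration, there is no frailty term, and the Poisson fit with a log-exposure offset is done by a hand-rolled Newton-Raphson rather than the paper's SAS macro.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survival data (invented parameters): constant baseline hazard
# 0.1, one binary covariate with true log hazard ratio beta = 0.7,
# administrative censoring at t = 15.
n = 2000
x = rng.integers(0, 2, n)
t = rng.exponential(1.0 / (0.1 * np.exp(0.7 * x)))
obs = np.minimum(t, 15.0)
event = t <= 15.0

# "Explode" the data set: one row per subject per piece of the hazard.
cuts = [0.0, 5.0, 10.0, 15.0]
rows, y, expo = [], [], []
for i in range(n):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if obs[i] <= lo:
            break
        rows.append([1.0 if k == j else 0.0 for k in range(3)] + [x[i]])
        y.append(1.0 if event[i] and lo < obs[i] <= hi else 0.0)
        expo.append(min(obs[i], hi) - lo)
X, y = np.array(rows), np.array(y)
off = np.log(np.array(expo))  # log exposure time enters as the offset

# Poisson regression with offset via Newton-Raphson: maximum likelihood
# for the piecewise-constant-hazard model.
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(X @ beta + off)
    beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))

print(beta[-1])  # log hazard ratio estimate, close to the true 0.7
```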
Adaptive Active Noise Suppression Using Multiple Model Switching Strategy
Directory of Open Access Journals (Sweden)
Quanzhen Huang
2017-01-01
Full Text Available Active noise suppression is difficult for applications where the system response varies with time. The computational burden of existing control algorithms with online identification is heavy and can easily destabilize the control system. A new active noise control algorithm is proposed in this paper that employs a multiple model switching strategy to handle secondary-path variation, significantly reducing the computation. Firstly, a noise control system modeling method is proposed for duct-like applications. Then a multiple model adaptive control algorithm is proposed with a new multiple model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP TMS320F28335) and real-time experiments were done to compare the proposed algorithm with the FULMS algorithm with online identification. Experimental verification tests show that the proposed algorithm is effective with good noise suppression performance.
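As a rough illustration of adaptive noise control of this kind, here is a minimal single-channel filtered-x LMS loop, the FIR relative of the filtered-u LMS named in the abstract. The primary and secondary paths below are made up, and the secondary path is assumed perfectly known (in practice it is identified offline or online).

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented paths: P carries noise to the error mic, S carries the
# loudspeaker's anti-noise to the error mic. FxLMS adapts controller w.
P = np.array([0.8, 0.4, 0.2])          # primary path
S = np.array([0.6, 0.3])               # secondary path
N, L, mu = 20000, 16, 0.01

x = rng.normal(size=N)                 # reference noise signal
d = np.convolve(x, P)[:N]              # disturbance at the error mic
xs = np.convolve(x, S)[:N]             # reference filtered through S
w = np.zeros(L)
ybuf = np.zeros(len(S))                # recent controller outputs
e_hist = np.zeros(N)

for n in range(N):
    xv = x[max(0, n - L + 1):n + 1][::-1]
    y = w[:len(xv)] @ xv               # controller (anti-noise) output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[n] - S @ ybuf                # residual noise at the error mic
    fx = xs[max(0, n - L + 1):n + 1][::-1]
    w[:len(fx)] += mu * e * fx         # FxLMS weight update
    e_hist[n] = e

early = np.mean(e_hist[:1000] ** 2)
late = np.mean(e_hist[-1000:] ** 2)
print(early, late)  # residual power drops as the controller adapts
```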
Efficient Adoption and Assessment of Multiple Process Improvement Reference Models
Directory of Open Access Journals (Sweden)
Simona Jeners
2013-06-01
Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement efforts, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.
A Multiple-Reception Access Protocol with Interruptions with Mixed Priorities in CDMA Networks
Institute of Scientific and Technical Information of China (English)
Lu Xiaowen; Zhu Jinkang
2003-01-01
A novel access protocol called the Multiple-Reception Access Protocol (MRAP) and its modification MRAP/WI are proposed. In this protocol, all colliding users with a common code can be identified by the base station due to the offsets of their arrival times. Thus they can retransmit access requests under the base station's control. Furthermore, new arrivals with a higher priority level can interrupt the retransmissions of lower-priority users in order to reduce their own access delay, although this increases the lower-priority users' delay. Simulation results of MRAP and MRAP/WI are given to highlight the superior performance of the proposed approach.
Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.
1973-01-01
Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568
A Select Model for Endowment Life Insurance Premiums in the Multiple Decrement Case
Cita, Devi Ramana; Pane, Rolan; ', Harison
2015-01-01
This article discusses a select survival model for the case of multiple decrements in evaluating the endowment life insurance premium for a person currently aged x + h years, who was selected at age x with an h-year selection period. The multiple decrements considered here are limited to two causes. The annual premium is calculated by first evaluating the single premium, with the present value of the annuity computed under the constant force assumption.
Modeling pedestrian gap crossing index under mixed traffic condition.
Naser, Mohamed M; Zulkiple, Adnan; Al Bargi, Walid A; Khalifa, Nasradeen A; Daniel, Basil David
2017-12-01
There are a variety of challenges faced by pedestrians when they walk along and attempt to cross a road, as most recorded accidents occur during this time. Pedestrians of all types, of both sexes and across numerous age groups, are always subjected to risk and are characterized as the most exposed road users. The increased demand for better traffic management strategies to reduce risks at intersections, together with growing traffic volumes and longer cycle times, has further increased concerns over the past decade. This paper aims to develop a sustainable pedestrian gap crossing index model based on traffic flow density. It focuses on the gaps accepted by pedestrians and their decision for street crossing, where the logarithm of accepted gaps (Log-Gap) was used to optimize the result of a model for gap crossing behavior. Through a review of extant literature, 15 influential variables were extracted for further empirical analysis. Subsequently, observational data from an uncontrolled mid-block on Jalan Ampang in Kuala Lumpur, Malaysia were gathered, and Multiple Linear Regression (MLR) and Binary Logit Model (BLM) techniques were employed to analyze the results. From the results, different pedestrian behavioral characteristics were considered for a minimum gap size model, of which only four variables could explain pedestrian road crossing behavior while the remaining variables had an insignificant effect. Among the different variables, age, rolling gap, vehicle type, and crossing were the most influential. The study concludes that pedestrians' decision to cross the street depends on the pedestrian's age, the rolling gap, the vehicle type, and the size of the traffic gap before crossing. The inferences from these models will be useful for increasing pedestrian safety and for performance evaluation of uncontrolled midblock road crossings in developing countries. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
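A binary logit gap-acceptance model of the kind described above can be sketched on synthetic data. The variables, coefficients and units below are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: probability of accepting a gap rises with the log of
# the gap size (Log-Gap) and falls with vehicle speed.
n = 1500
log_gap = rng.normal(1.5, 0.5, n)        # log of offered gap (seconds)
speed = rng.normal(40.0, 8.0, n)         # vehicle speed (km/h)
lin = -4.0 + 3.0 * log_gap - 0.02 * speed
accept = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

# Binary logit model fitted by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), log_gap, speed])
b = np.zeros(3)
for _ in range(30):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ b, -30, 30)))
    w = p * (1.0 - p)
    b += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (accept - p))

print(b)  # roughly recovers the invented (-4.0, 3.0, -0.02)
```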
Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach
Thomas, C.; Lark, R. M.
2013-12-01
Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
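The structure described in this abstract (a periodic fixed-effect mean plus an iid random component and a temporally correlated one) can be imitated directly in simulation. The parameter values below are invented, and an AR(1) process stands in for the exponentially decaying temporal correlation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly significant wave height: periodic mean + AR(1)
# correlated noise + iid noise (all parameters are assumptions, not
# fitted values from the buoy data).
months = np.arange(600)                          # 50 years, monthly
seasonal = 2.0 + 0.8 * np.cos(2 * np.pi * months / 12)
phi, sd_ar, sd_iid = 0.7, 0.3, 0.15
ar = np.zeros(len(months))
for t in range(1, len(months)):
    ar[t] = phi * ar[t - 1] + rng.normal(0.0, sd_ar)
hs = seasonal + ar + rng.normal(0.0, sd_iid, len(months))

# Lag-1 autocorrelation of the residual reflects the AR(1) component.
resid = hs - seasonal
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(hs.mean(), r1)
```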
AgMIP Training in Multiple Crop Models and Tools
Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn
2015-01-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used for the various models. Two activities were followed to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model of least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Dušek, Jiří; Stehlík, M.
2013-01-01
Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : environmental chemistry * ebullition of methane * mixed poisson processes * renewal process * pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.
and supported by quasi-steady upwelling. Remotely sensed chlorophyll pigment concentrations from the Coastal Zone Color Scanner (CZCS) are used to investigate the chlorophyll modulation of ocean mixed layer thermodynamics in a bulk mixed-layer model, embedded...
Mixed-order phase transition of the contact process near multiple junctions.
Juhász, Róbert; Iglói, Ferenc
2017-02-01
We have studied the phase transition of the contact process near a multiple junction of M semi-infinite chains by Monte Carlo simulations. As opposed to the continuous transitions of the translationally invariant (M=2) and semi-infinite (M=1) system, the local order parameter is found to be discontinuous for M>2. Furthermore, the temporal correlation length diverges algebraically as the critical point is approached, but with different exponents on the two sides of the transition. In the active phase, the estimate is compatible with the bulk value, while in the inactive phase it exceeds the bulk value and increases with M. The unusual local critical behavior is explained by a scaling theory with an irrelevant variable, which becomes dangerous in the inactive phase. Quenched spatial disorder is found to make the transition continuous in agreement with earlier renormalization group results.
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data with such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
An improved mixing model providing joint statistics of scalar and scalar dissipation
Energy Technology Data Exchange (ETDEWEB)
Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)
2008-11-15
For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)
A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk
Directory of Open Access Journals (Sweden)
Lewei Duan
2013-01-01
Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.
Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models
DEFF Research Database (Denmark)
Gerhard, Daniel; Bremer, Melanie; Ritz, Christian
2014-01-01
A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed model...
PREDICTION OF THE MIXING ENTHALPIES OF BINARY LIQUID ALLOYS BY MOLECULAR INTERACTION VOLUME MODEL
Institute of Scientific and Technical Information of China (English)
H.W.Yang; D.P.Tao; Z.H.Zhou
2008-01-01
The mixing enthalpies of 23 binary liquid alloys are calculated by the molecular interaction volume model (MIVM), a two-parameter model based on the partial molar infinite-dilution mixing enthalpies. The predicted values are in agreement with the experimental data, indicating that the model is reliable and convenient.
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He
2005-01-01
Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...
[Multiple emissions in organic electroluminescent device using a mixed layer as an emitter].
Zhu, Wen-qing; Wu, You-zhi; Zheng, Xin-you; Jiang, Xue-yin; Zhang, Zhi-lin; Sun, Run-guang; Xu, Shao-hong
2005-04-01
An organic electroluminescent device has been fabricated using a mixed layer as an emitter. The configuration of the device is ITO/TPD/TPD:PBD(equimolar)/PBD/Al, in which TPD (N,N'-diphenyl-N,N'-bis(3-methylphenyl)-1,1'-biphenyl-4,4'-diamine) and PBD (2-(4'-biphenyl)-5-(4''-tert-butylphenyl)-1,3,4-oxadiazole) are used as the hole transport material and electron transport material, respectively. Broad and red-shifted electroluminescent spectra related to the fluorescence of the constituent materials were observed. By comparing the EL with the PL spectra and decomposing the EL spectrum, it is suggested that monomer, exciplex and electroplex emissions are simultaneously involved in the EL spectra. The exciplex arises from the interaction between excited-state TPD (TPD*) and ground-state PBD, while the electroplex is a (D+-A-)* complex formed by cross-recombination of a hole on the charged hole-transport molecule (D+) and an electron on the charged electron-transport molecule (A-). All types of excited states show different formation mechanisms and recombination processes under an electric field. The change in emission strengths from the monomer and the excited complexes leads to a blue-shift of the emissive spectra with increasing electric field. The maximum luminance and external quantum efficiency of this device are 240 cd/(cm2) and 0.49%, respectively. The emissions from exciplex or electroplex formation at the organic solid interface generally present a broad and red-shifted emissive band, providing an effective method for tuning the emission color in organic electroluminescent devices.
Rating the raters in a mixed model: An approach to deciphering the rater reliability
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, the inconsistency of raters is a source of variance in scores, and it is therefore quite natural to verify the trustworthiness of ratings. Accordingly, estimation of rater reliability is of great interest and an appealing issue. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described with fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for their prediction. To quantitatively reveal unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of random effects between the full data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated and two real data sets.
DEFF Research Database (Denmark)
Goutianos, Stergios; Sørensen, Bent F.
beams bonded together with a thermoset adhesive, more delamination cracks could be developed next to the main/primary adhesive/laminate crack. An analytical model, based on the J integral, was developed for multiple delaminations [3]. It is shown that the maximum possible increase (upper limit...
Parametric modeling for damped sinusoids from multiple channels
DEFF Research Database (Denmark)
Zhou, Zhenhua; So, Hing Cheung; Christensen, Mads Græsbøll
2013-01-01
The problem of parametric modeling for noisy damped sinusoidal signals from multiple channels is addressed. Utilizing the shift invariance property of the signal subspace, the number of distinct sinusoidal poles in the multiple channels is first determined. With the estimated number, the distinct frequencies and damping factors are then computed with the multi-channel weighted linear prediction method. The estimated sinusoidal poles are then matched to each channel according to the extreme value theory of distribution of random fields. Simulations are performed to show the performance advantages of the proposed multi-channel sinusoidal modeling methodology compared with existing methods.
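For a single noisy channel, the pole-estimation idea reduces to classical linear prediction: a damped sinusoid obeys a second-order recursion whose roots are the signal poles. The sketch below uses invented signal parameters and plain least squares in place of the paper's weighted multi-channel method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented signal: one damped sinusoid, poles z = exp(-alpha +/- 1j*omega).
N = 200
alpha, omega = 0.02, 0.6
s = np.exp(-alpha * np.arange(N)) * np.cos(omega * np.arange(N))
y = s + 0.01 * rng.normal(size=N)

# Least-squares linear prediction: y[n] = a1*y[n-1] + a2*y[n-2].
A = np.column_stack([y[1:-1], y[:-2]])
a = np.linalg.lstsq(A, y[2:], rcond=None)[0]

# The signal poles are the roots of z^2 - a1*z - a2 = 0; their modulus
# gives the damping factor and their angle the frequency.
poles = np.roots([1.0, -a[0], -a[1]])
est_alpha = -np.log(np.abs(poles[0]))
est_omega = abs(np.angle(poles[0]))
print(est_alpha, est_omega)  # close to the invented 0.02 and 0.6
```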
A Multiple Model Prediction Algorithm for CNC Machine Wear PHM
Directory of Open Access Journals (Sweden)
Huimin Chen
2011-01-01
Full Text Available The 2010 PHM data challenge focuses on remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.
On the thermoluminescent interactive multiple-trap system (IMTS) model: is it a simple model?
International Nuclear Information System (INIS)
Gil T, M. I.; Perez C, L.; Cruz Z, E.; Furetta, C.; Roman L, J.
2016-10-01
In the thermally stimulated luminescence phenomenon, known as thermoluminescence (Tl), the electrons and holes generated by the radiation-matter interaction can be trapped at metastable levels in the band gap of the solid. Subsequently, an electron can be thermally released into the conduction band and recombine radiatively with a hole at a recombination center, emitting the glow curve. However, the mechanisms of trapping and thermal release occurring in the band gap of the solid are complex. Some models, such as first-, second- and general-order kinetics, have been well established to explain the behaviour of glow curves and the underlying defect recombination mechanisms. In this work, expressions for an Interactive Multiple-Trap System (IMTS) model were obtained assuming a set of discrete electron traps (active traps, At), another set of thermally disconnected traps (TDT), and a recombination center (Rc). A numerical analysis based on the Levenberg-Marquardt method in conjunction with an implicit Rosenbrock method was used to simulate the glow curve. The numerical method was tested on synthetic Tl glow curves for a wide range of trap parameters. The activation energy and kinetics order were determined using values from the General Order Kinetics (GOK) model as entry data to the IMTS model. The model was tested using experimental glow curves obtained from Ce- or Eu-doped MgF2(LiF) polycrystal samples. Results show that the IMTS model predicts the behavior of the Tl glow curves more accurately than the GOK model modified by Rasheedy and the Mixed Order Kinetics model. (Author)
Computer modeling of forced mixing in waste storage tanks
International Nuclear Information System (INIS)
Eyler, L.L.; Michener, T.E.
1992-01-01
In this paper, numerical simulation results of fluid dynamic and physical processes in radioactive waste storage tanks are presented. Investigations include simulation of jet mixing pump induced flows intended to mix and maintain particulate material uniformly distributed throughout the liquid volume. Physical effects of solids are included in the code. These are particle size through a settling velocity and mixture properties through density and viscosity. Calculations have been accomplished for a centrally located, rotationally-oscillating, horizontally-directed jet mixing pump for two cases. One case is with low jet velocity and high settling velocity; it results in nonuniform distribution. The other case is with high jet velocity and low settling velocity; it results in uniform conditions. Results are being used to aid in experiment design and to understand mixing in the waste tanks. These results are to be used in conjunction with scaled experiments to define limits of pump operation to maintain uniformity of the mixture in the storage tanks during waste retrieval operations
Computer modeling of forced mixing in waste storage tanks
International Nuclear Information System (INIS)
Eyler, L.L.; Michener, T.E.
1992-04-01
Numerical simulation results of fluid dynamic and physical processes in radioactive waste storage tanks are presented. Investigations include simulation of jet mixing pump induced flows intended to mix and maintain particulate material uniformly distributed throughout the liquid volume. Physical effects of solids are included in the code. These are particle size through a settling velocity and mixture properties through density and viscosity. Calculations have been accomplished for a centrally located, rotationally-oscillating, horizontally-directed jet mixing pump for two cases. One case is with low jet velocity and high settling velocity. It results in nonuniform distribution. The other case is with high jet velocity and low settling velocity. It results in uniform conditions. Results are being used to aid in experiment design and to understand mixing in the waste tanks. These results are to be used in conjunction with scaled experiments to define limits of pump operation to maintain uniformity of the mixture in the storage tanks during waste retrieval operations
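The settling-velocity treatment of solids mentioned above can be illustrated with Stokes' law, the standard laminar-flow limit for small particles; the particle and fluid properties below are illustrative, not taken from the tank simulations.

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in laminar flow:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu). Valid only for Re << 1."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# A 10-micron particle (illustrative density) settling in water
v = stokes_settling_velocity(d=10e-6, rho_p=2500.0, rho_f=1000.0, mu=1.0e-3)
# v is on the order of 1e-4 m/s, i.e. slow enough that a modest jet
# velocity can keep such particles suspended
```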
Modeling of mixing processes: Fluids, particulates, and powders
Energy Technology Data Exchange (ETDEWEB)
Ottino, J.M.; Hansen, S. [Northwestern Univ., Evanston, IL (United States)
1995-12-31
Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent, self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant at long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters depends upon the mixing.
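The well-mixed, mass-independent-capture-radius regime described above can be illustrated with a minimal constant-kernel aggregation simulation (a sketch under those assumptions, not the authors' code): any two clusters merge with equal probability, and the mean cluster size grows algebraically as the cluster count falls.

```python
import random
random.seed(2)

# Constant-kernel aggregation in a well-mixed system: repeatedly merge two
# clusters chosen uniformly at random (mass-independent capture radius).
clusters = [1] * 4096          # start from 4096 monomers
mean_sizes = []
while len(clusters) > 4:
    i, j = random.sample(range(len(clusters)), 2)
    clusters[i] += clusters[j]  # merge j into i, conserving total mass
    clusters.pop(j)
    mean_sizes.append(sum(clusters) / len(clusters))
# Total mass is conserved, so the mean size is 4096/len(clusters) and
# grows steadily (algebraically in time for this kernel).
```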
Short communication: Alteration of priors for random effects in Gaussian linear mixed model
DEFF Research Database (Denmark)
Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas
2014-01-01
Several applications (..., multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed the possibility of the proposed method to alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
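The plot-level random intercept underlying such a one-level LME model can be sketched as follows: simulate plots with random intercepts, fit a pooled OLS line, then shrink each plot's mean residual toward zero (a BLUP-style correction with the variance components assumed known; a real LME fit estimates them by REML). All numbers are illustrative, not the China-fir data.

```python
import random
random.seed(42)

# Simulate crown width ~ a + b*dbh with plot-level random intercepts.
true_a, true_b, sigma_u, sigma_e = 1.0, 0.15, 0.5, 0.3
plots = []
for _ in range(20):
    u = random.gauss(0, sigma_u)  # plot random effect
    rows = [(dbh, true_a + u + true_b * dbh + random.gauss(0, sigma_e))
            for dbh in [random.uniform(8, 30) for _ in range(15)]]
    plots.append(rows)

# Pooled OLS fit of y = a + b*x via the normal equations
xs = [x for p in plots for x, _ in p]
ys = [y for p in plots for _, y in p]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
a = my - b * mx

def blup_intercept(rows):
    """Shrink a plot's mean OLS residual toward zero (BLUP for a random
    intercept, assuming the variance components are known)."""
    resid = [y - (a + b * x) for x, y in rows]
    m = len(resid)
    shrink = sigma_u**2 / (sigma_u**2 + sigma_e**2 / m)
    return shrink * sum(resid) / m

u_hat = [blup_intercept(p) for p in plots]
```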
International Nuclear Information System (INIS)
Yan, Wei; Qu, Junle; Niu, H B
2014-01-01
We perform a time-dependent analysis of the formation and stable propagation of an ultraslow optical soliton pair, and four-wave mixing (FWM) via tunable Fano interference in double-cascade type semiconductor multiple quantum wells (SMQWs). By using the probability amplitude method to describe the interaction of the system, we demonstrate that the electromagnetically induced transparency (EIT) can be controlled by Fano interference in the linear case and the strength of Fano interference has an important effect on the group velocity and amplitude of the soliton pair in the nonlinear case. Then, when the signal field is removed, the dynamic FWM process is analyzed in detail, and we find that the strength of Fano interference also has an important effect on the FWM’s efficiency: the maximum FWM efficiency is ∼28% in appropriate conditions. The investigations are promising for practical applications in optical devices and optical information processing for solid systems. (paper)
Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization
Peng, Chengbin
2014-12-01
Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges, respectively. When certain nodes are highly connected with each other, those nodes form a cluster, called a community in our context. It is usually assumed that each node belongs to one community only, but evidence in biology and social networks reveals that communities often overlap with each other; in other words, one node can belong to multiple communities. In light of this, mixed membership stochastic block models (MMB) have been developed to model networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in and out connections and one probability matrix. When the probability of connections between nodes in different communities is significantly small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithm can detect overlapping communities whether or not the number of communities is known. Experiments show that our algorithm achieves better community detection performance than the traditional NMF algorithm. © 2014 IEEE.
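The NMF side of the MMB/NMF connection can be sketched with standard Lee-Seung multiplicative updates on a toy overlapping-community network (a generic sketch, not the constrained algorithm of the paper): rows of the factor W act as non-negative membership weights, so a node in two communities gets weight in both.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy network: two communities of nodes {0..3} and {3..8}; node 3 overlaps.
A = np.zeros((9, 9))
for grp in ([0, 1, 2, 3], [3, 4, 5, 6, 7, 8]):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0

def nmf(A, k, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates minimising ||A - W @ H||_F^2.
    Rows of W give each node's non-negative membership weights."""
    n, m = A.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(A, k=2)
members = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # normalised memberships
err = float(np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```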
Analysis of oligonucleotide array experiments with repeated measures using mixed models
Directory of Open Access Journals (Sweden)
Getchell Thomas V
2004-12-01
Abstract. Background: Two-or-more-factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain region, either olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results: In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure for controlling the false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions: A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.
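The BH step-up procedure used above is short enough to sketch directly; the p-values below are illustrative, not from the study.

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest
    rank with p_(k) <= q*k/m. Controls FDR at level q for independent tests."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pv, q=0.05)
# Only the two smallest p-values pass their step-up thresholds here.
```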
Keith, Timothy Z
2014-01-01
Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.
Li, Guo; Lv, Fei; Guan, Xu
2014-01-01
This paper investigates a collaborative scheduling model in an assembly system wherein multiple suppliers must deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately; in the other, the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; we therefore propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to separate decisions made by each manufacturer and supplier. Furthermore, we show that the proposed algorithm has good convergence and reliability, and can be applied to more complicated supply chain environments.
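A minimal, non-adaptive DE/rand/1/bin sketch (not the paper's auto-adapted variant) applied to a toy continuous surrogate illustrates the mutation, crossover and greedy-selection loop at the core of differential evolution.

```python
import random
random.seed(1)

def differential_evolution(f, bounds, np_=30, F=0.8, CR=0.9, gens=200):
    """DE/rand/1/bin: mutate with a scaled difference vector, apply binomial
    crossover, keep the trial only if it is no worse than the parent."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy continuous surrogate for a scheduling cost (not the paper's model)
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5, 5)] * 4)
```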
Double-multiple streamtube model for Darrieus wind turbines
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
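The tandem actuator-disk bookkeeping of the double-multiple streamtube model can be sketched as follows. The interference factors are treated as given inputs here, whereas CARDAA obtains them by iterating a blade-element/momentum balance; the numbers in the usage line are illustrative.

```python
def tandem_disk_velocities(V_inf, u_up, u_down):
    """Velocity bookkeeping for two actuator disks in tandem: the upstream
    half-disk sees V_inf and induces V1 = u_up*V_inf; the equilibrium wake
    velocity Ve = (2*u_up - 1)*V_inf then feeds the downstream half-disk,
    which sees V2 = u_down*Ve. u_up, u_down are interference factors in
    (0.5, 1]; here they are inputs, not solved for."""
    V1 = u_up * V_inf
    Ve = (2.0 * u_up - 1.0) * V_inf
    V2 = u_down * Ve
    return V1, Ve, V2

# Illustrative factors: the flow slows progressively through the rotor,
# which is why upstream blade forces exceed downstream ones.
V1, Ve, V2 = tandem_disk_velocities(10.0, 0.9, 0.85)
```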
Mixed models approaches for joint modeling of different types of responses.
Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert
2016-01-01
In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
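An intercept-only zero-inflated Poisson fit by EM illustrates the core of such models (the paper's version adds covariates and random effects on top); all parameter values below are simulated, not the hepatitis data.

```python
import math, random
random.seed(7)

# Simulate ZIP counts: with probability pi the count is a structural zero,
# otherwise it is Poisson(lam). Illustrative true values.
pi_true, lam_true, n = 0.3, 2.5, 5000

def rpois(lam):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

ys = [0 if random.random() < pi_true else rpois(lam_true) for _ in range(n)]

# EM for (pi, lam) in the intercept-only ZIP model
pi_, lam = 0.5, 1.0
for _ in range(200):
    # E-step: probability each observed zero is a structural zero
    z = [pi_ / (pi_ + (1 - pi_) * math.exp(-lam)) if y == 0 else 0.0
         for y in ys]
    # M-step: pi is the mean structural-zero responsibility; lam is the
    # total count divided by the expected number of Poisson observations
    pi_ = sum(z) / n
    lam = sum(ys) / (n - sum(z))
```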
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software tools. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
Multiple commodities in statistical microeconomics: Model and market
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.
Risk Prediction Models for Other Cancers or Multiple Sites
Developing statistical models that estimate the probability of developing other multiple cancers over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
An extension of the multiple-trapping model
International Nuclear Information System (INIS)
Shkilev, V. P.
2012-01-01
The hopping charge transport in disordered semiconductors is considered. Using the concept of the transport energy level, macroscopic equations are derived that extend a multiple-trapping model to the case of semiconductors with both energy and spatial disorders. It is shown that, although both types of disorder can cause dispersive transport, the frequency dependence of conductivity is determined exclusively by the spatial disorder.
Selecting Tools to Model Integer and Binomial Multiplication
Pratt, Sarah Smitherman; Eddy, Colleen M.
2017-01-01
Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
Modeling single versus multiple systems in implicit and explicit memory.
Starns, Jeffrey J; Ratcliff, Roger; McKoon, Gail
2012-04-01
It is currently controversial whether priming on implicit tasks and discrimination on explicit recognition tests are supported by a single memory system or by multiple, independent systems. In a Psychological Review article, Berry and colleagues used mathematical modeling to address this question and provide compelling evidence against the independent-systems approach. Copyright © 2012 Elsevier Ltd. All rights reserved.
Green communication: The enabler to multiple business models
DEFF Research Database (Denmark)
Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv
2010-01-01
Companies stand at the forefront of a new business model reality with new potentials that will change their basic understanding and practice of running their business models radically. One of the drivers of this change is green communication, its strong relation to green business models and its possibility to enable lower energy consumption. This paper shows how green communication enables innovation of green business models and multiple business models running simultaneously in different markets to different customers.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope, and residual autocorrelation was modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
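A cubic regression spline of the kind compared above can be sketched with a truncated power basis and ordinary least squares (the paper embeds such a basis in a linear mixed-effect model with random effects and autocorrelated errors; the growth-like data below are simulated, not the Peruvian cohort).

```python
import numpy as np
rng = np.random.default_rng(3)

def cubic_spline_basis(t, knots):
    """Truncated power basis for a cubic regression spline:
    [1, t, t^2, t^3, (t-k1)^3_+, ...]. One common construction; mixed-model
    software often uses B-splines instead."""
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0, None)**3 for k in knots]
    return np.column_stack(cols)

# Illustrative growth-like curve: fast early gain that flattens with age
t = np.linspace(0, 4, 200)                       # age in years
height = 50 + 25 * np.sqrt(t) + rng.normal(0, 1.0, t.size)

X = cubic_spline_basis(t, knots=[1.0, 2.0, 3.0])  # 4 + 3 basis columns
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
fitted = X @ beta
rmse = float(np.sqrt(np.mean((height - fitted)**2)))  # close to the noise sd
```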
Infinite Multiple Membership Relational Modeling for Complex Networks
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai
Learning latent structure in complex networks has become an important problem, fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show...
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slopes as estimates of the isotope ratios. The finite mixture models are parameterised by: • The number of different ratios. • The number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
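The EM fit of a mixture of regression lines through the origin, where each slope estimates an isotope ratio, can be sketched as follows; the two "ratios", noise level and initialisation below are illustrative, not certified values.

```python
import numpy as np
rng = np.random.default_rng(5)

# Synthetic transient signals from two particle populations whose ratios
# (slopes through the origin) differ; all values are illustrative.
true_slopes = [0.0073, 0.035]
x = rng.uniform(1.0, 10.0, 400)                   # e.g. 238U signal
comp = rng.integers(0, 2, 400)                    # hidden group labels
y = np.where(comp == 0, true_slopes[0], true_slopes[1]) * x \
    + rng.normal(0, 0.01, 400)

# EM for a two-component mixture of regressions through the origin.
# Initialise slopes from quantiles of the pointwise ratios y/x.
slopes = np.quantile(y / x, [0.25, 0.75])
weights, sigma = np.array([0.5, 0.5]), 0.05
for _ in range(100):
    # E-step: responsibilities under Gaussian residuals (the 1/sqrt(2*pi)
    # constant cancels in the normalisation, sigma is shared)
    resid = y[:, None] - x[:, None] * slopes[None, :]
    dens = weights * np.exp(-0.5 * (resid / sigma)**2) / sigma
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least-squares slope per component
    slopes = (r * x[:, None] * y[:, None]).sum(0) / (r * x[:, None]**2).sum(0)
    weights = r.mean(axis=0)
    sigma = float(np.sqrt((r * resid**2).sum() / len(y)))
```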
International Nuclear Information System (INIS)
Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad
2007-01-01
A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t 1 , is of a non-decaying constant-time nature for a duration that depends on the length of t 2 , and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1 H- 15 N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1 H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of H N -H N NOEs in KcsA in SDS micelles at 50 o C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1 H dimension. The method is also demonstrated for a situation in which homonuclear 13 C- 13 C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer
Decesari, S.; Allan, J.; Plass-Duelmer, C.; Williams, B. J.; Paglione, M.; Facchini, M. C.; O'Dowd, C.; Harrison, R. M.; Gietl, J. K.; Coe, H.; Giulianelli, L.; Gobbi, G. P.; Lanconelli, C.; Carbone, C.; Worsnop, D.; Lambe, A. T.; Ahern, A. T.; Moretti, F.; Tagliavini, E.; Elste, T.; Gilge, S.; Zhang, Y.; Dall'Osto, M.
2014-11-01
The use of co-located multiple spectroscopic techniques can provide detailed information on the atmospheric processes regulating aerosol chemical composition and mixing state. So far, field campaigns heavily equipped with aerosol mass spectrometers have been carried out mainly in large conurbations and in areas directly affected by their outflow, whereas lesser efforts have been dedicated to continental areas characterised by less dense urbanisation. We present here the results obtained at a background site in the Po Valley, Italy, in summer 2009. For the first time in Europe, six state-of-the-art spectrometric techniques were used in parallel: aerosol time-of-flight mass spectrometer (ATOFMS), two aerosol mass spectrometers (high-resolution time-of-flight aerosol mass spectrometer - HR-ToF-AMS and soot particle aerosol mass spectrometer - SP-AMS), thermal desorption aerosol gas chromatography (TAG), chemical ionisation mass spectrometry (CIMS) and (offline) proton nuclear magnetic resonance (¹H-NMR) spectroscopy. The results indicate that, under high-pressure conditions, atmospheric stratification at night and in the early morning hours led to the accumulation of aerosols produced by anthropogenic sources distributed over the Po Valley plain. Such aerosols include primary components such as black carbon (BC), secondary semivolatile compounds such as ammonium nitrate and amines, and a class of monocarboxylic acids which corresponds to the AMS cooking organic aerosol (COA) already identified in urban areas. In daytime, the entrainment of aged air masses into the mixing layer is responsible for the accumulation of low-volatility oxygenated organic aerosol (LV-OOA) and also for the recycling of non-volatile primary species such as black carbon. According to organic aerosol source apportionment, anthropogenic aerosols accumulating in the lower layers overnight accounted for 38% of organic aerosol mass on average, another 21% was accounted for by aerosols recirculated in
Directory of Open Access Journals (Sweden)
Naveen Kumar
Full Text Available Successful purification of multiple viruses from mixed infections remains a challenge. In this study, we investigated peste des petits ruminants virus (PPRV) and foot-and-mouth disease virus (FMDV) mixed infection in goats. Rather than in a single cell type, cytopathic effect (CPE) of the virus was observed in cocultured Vero/BHK-21 cells at the 6th blind passage (BP). PPRV, but not FMDV, could be purified from the virus mixture by plaque assay. Viral RNA (mixture) transfection in BHK-21 cells produced FMDV but not PPRV virions, a strategy which we have successfully employed for the first time to eliminate the negative-stranded RNA virus from the virus mixture. FMDV phenotypes, such as replication-competent but noncytolytic, cytolytic but defective in plaque formation, and cytolytic but defective in both plaque formation and the standard FMDV genome, were observed at passage levels BP8, BP15 and BP19, respectively, and hence complicated virus isolation in the cell culture system. Mixed infection was not found to induce any significant antigenic and genetic diversity in either PPRV or FMDV. Further, we demonstrated for the first time the viral interference between PPRV and FMDV. Prior transfection of PPRV RNA, but not Newcastle disease virus (NDV) or rotavirus RNA, resulted in reduced FMDV replication in BHK-21 cells, suggesting that the PPRV RNA-induced interference was specifically directed against FMDV. On long-term coinfection of some acute pathogenic viruses (all possible combinations of PPRV, FMDV, NDV and buffalopox virus) in Vero cells, in most cases, one of the coinfecting viruses was excluded at passage level 5, suggesting that long-term coinfection may modify viral persistence. To the best of our knowledge, this is the first documented evidence describing a natural mixed infection of FMDV and PPRV. The study not only provides simple and reliable methodologies for isolation and purification of two epidemiologically and economically important groups of
Halliwell, George R.
Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.
Best practices for use of stable isotope mixing models in food-web studies
Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...
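In its simplest form, the mixing model referred to here is the classic two-source, one-isotope linear model, in which the consumer's signature is a weighted average of the source signatures. Below is a minimal sketch; the function name, the optional trophic discrimination factor, and the clamping of f to [0, 1] are our choices, and real food-web applications typically also need concentration dependence and uncertainty handling.

```python
def two_source_mixing(delta_mix, delta_a, delta_b, tdf=0.0):
    """Fraction f of source A in the consumer's diet under the linear
    two-source, one-isotope mixing model
        delta_mix - tdf = f * delta_a + (1 - f) * delta_b,
    where tdf is a trophic discrimination factor.  The result is
    clamped to [0, 1], the physically meaningful range."""
    f = (delta_mix - tdf - delta_b) / (delta_a - delta_b)
    return min(1.0, max(0.0, f))
```

For example, a consumer at δ¹³C = -20‰ feeding on sources at -15‰ and -25‰ is estimated to draw half its carbon from each source.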
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
LES of n-Dodecane Spray Combustion Using a Multiple Representative Interactive Flamelets Model
Directory of Open Access Journals (Sweden)
Davidovic Marco
2017-09-01
Full Text Available A single-hole n-dodecane spray flame is studied in a Large-Eddy Simulation (LES) framework under Diesel-relevant conditions using a Multiple Representative Interactive Flamelets (MRIF) combustion model. Diesel spray combustion is strongly affected by the mixture formation process, which is dominated by several physical processes such as the flow within the injector, break-up of the liquid fuel jet, evaporation and turbulent mixing with the surrounding gas. While the effects of nozzle-internal flow and primary breakup are captured within tuned model parameters in traditional Lagrangian spray models, an alternative approach is applied in this study, where the initial droplet conditions and primary fuel jet breakup are modeled based on results from highly resolved multiphase simulations with a resolved interface. A highly reduced chemical mechanism consisting of 57 species and 217 reactions has been developed for n-dodecane, achieving good computational performance in solving the chemical reactions. The MRIF model, which has demonstrated its capability of capturing combustion and pollutant formation under typical Diesel conditions in Reynolds-Averaged Navier-Stokes (RANS) simulations, is extended for application in LES. In the standard RIF combustion model, representative chemistry conditioned on mixture fraction is solved interactively with the flow. Subfilter-scale mixing is modeled by the scalar dissipation rate. While the standard RIF model only includes temporal changes of the scalar dissipation rate, the spatial distribution can be accounted for by extending the model to multiple flamelets, which also enables capturing different fuel residence times. Overall, the model shows good agreement with experimental data regarding both low- and high-temperature combustion characteristics. It is shown that the ignition process and pollutant formation are affected by turbulent mixing. First, a cool flame is initiated at approximately
Directory of Open Access Journals (Sweden)
S. E. Bauer
2008-10-01
Full Text Available A new aerosol microphysical module MATRIX, the Multiconfiguration Aerosol TRacker of mIXing state, and its application in the Goddard Institute for Space Studies (GISS) climate model (ModelE) are described. This module, which is based on the quadrature method of moments (QMOM), represents nucleation, condensation, coagulation, internal and external mixing, and cloud-drop activation and provides aerosol particle mass and number concentration and particle size information for up to 16 mixed-mode aerosol populations. Internal and external mixing among the aerosol components sulfate, nitrate, ammonium, carbonaceous aerosols, dust and sea-salt particles is represented. The solubility of each aerosol population, which is explicitly calculated based on its soluble and insoluble components, enables calculation of the dependence of cloud-drop activation on the microphysical characterization of multiple soluble aerosol populations.
A detailed model description and results of box-model simulations of various aerosol population configurations are presented. The box-model experiments demonstrate the dependence of cloud-activating aerosol number concentration on the aerosol population configuration; comparisons to sectional models are quite favorable. MATRIX is incorporated into the GISS climate model and simulations are carried out primarily to assess its performance and efficiency for global-scale atmospheric model application. Simulation results were compared with aircraft and station measurements of aerosol mass and number concentration and particle size to assess the ability of the new method to yield data suitable for such comparison. The model accurately captures the observed size distributions in the Aitken and accumulation modes up to a particle diameter of 1 μm, in which sulfate, nitrate, black and organic carbon are predominantly located; however, the model underestimates coarse-mode number concentration and size, especially in the marine environment
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or not. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models previously not
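A quick way to see what structural non-identifiability means in practice is a numerical check on a toy model: if two distinct parameter sets produce identical outputs at every observation time, the data cannot determine the parameters uniquely. The one-compartment bolus example below is our own illustration, not a model from the paper.

```python
import math

def concentration(D, V, k, times):
    """Observed output y(t) = (D / V) * exp(-k * t) for a hypothetical
    one-compartment bolus model with dose D, volume V and rate k."""
    return [(D / V) * math.exp(-k * t) for t in times]

times = [0.0, 0.5, 1.0, 2.0, 4.0]
y1 = concentration(D=100.0, V=10.0, k=0.3, times=times)
y2 = concentration(D=50.0, V=5.0, k=0.3, times=times)  # same ratio D/V

# The two parameter sets differ, yet the outputs coincide at every time
# point: D and V are not structurally identifiable individually; only
# the ratio D/V (and k) can be recovered from the observations.
identical = all(abs(a - b) < 1e-12 for a, b in zip(y1, y2))
```

The structural methods in the paper formalise this check symbolically, including the case where parameters carry random effects across individuals.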
How ocean lateral mixing changes Southern Ocean variability in coupled climate models
Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.
2016-02-01
The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m²/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual-mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.
Vehicle coordinated transportation dispatching model base on multiple crisis locations
Tian, Ran; Li, Shanwei; Yang, Guoying
2018-05-01
Unconventional emergencies are often followed by multiple disastrous events, and the needs arising at different disaster sites often differ. It is difficult for a single emergency resource center to satisfy all such requirements at the same time, so delivering the emergency resources stored at multiple emergency resource centers to the various disaster sites requires coordinated transportation by emergency vehicles. In this paper, based on the coordination-scheduling problem of emergency logistics and the constraints of emergency logistics transportation, an emergency resource scheduling model for multiple disaster sites is established.
Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization
International Nuclear Information System (INIS)
Raza, Wasim; Kim, Kwang-Yong
2007-01-01
In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performance of surrogate models. Previously, Ahmad and Kim performed the flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization has not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN) and Kriging (KRG) models. In addition to the three basic models (RSA, RBNN and KRG), the multiple-surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem.
A model for diagnosing and explaining multiple disorders.
Jamieson, P W
1991-08-01
The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypotheses generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model currently is in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.
Experiments and CFD Modelling of Turbulent Mass Transfer in a Mixing Channel
DEFF Research Database (Denmark)
Hjertager Osenbroch, Lene Kristin; Hjertager, Bjørn H.; Solberg, Tron
2006-01-01
Experiments are carried out for passive mixing in order to obtain local mean and turbulent velocities and concentrations. The mixing takes place in a square channel with two inlets separated by a block. A combined PIV/PLIF technique is used to obtain instantaneous velocity and concentration fields. Three different flow cases are studied. The 2D numerical predictions of the mixing channel show that none of the k-ε turbulence models tested is suitable for the flow cases studied here. The turbulent Schmidt number is reduced to obtain a better agreement between measured and predicted mean and fluctuating concentrations. The multi-peak presumed PDF mixing model is tested.
DEFF Research Database (Denmark)
Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik
2004-01-01
The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlmeODE package.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
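A model-based approach of this general kind can be sketched as follows: each received signal strength is inverted through a log-distance path-loss model to a range estimate, and the ranges from all signal sources, regardless of frequency, are fused into one position fit. This is a simplified illustration under free-space assumptions, not the MFAM algorithm itself; the names, defaults, and the grid-search fit are ours.

```python
import math

def distance_from_rssi(rssi_dbm, tx_power_dbm, freq_mhz, n=2.0):
    """Invert a log-distance path-loss model (free-space reference at
    1 m) to estimate range from received signal strength.  n is the
    path-loss exponent (2.0 = free space)."""
    fspl_1m = 20.0 * math.log10(freq_mhz) - 27.55  # free-space loss at 1 m, dB
    path_loss = tx_power_dbm - rssi_dbm
    return 10.0 ** ((path_loss - fspl_1m) / (10.0 * n))

def fuse_position(anchors, grid_step=0.25, size=10.0):
    """Brute-force grid search for the (x, y) position minimising the
    squared range residuals over all anchors; the anchors may come from
    different signal frequencies (Wi-Fi, sub-GHz, ...) simultaneously."""
    best, best_err = (0.0, 0.0), float("inf")
    steps = int(size / grid_step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = i * grid_step, j * grid_step
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for ax, ay, d in anchors)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Each anchor is a tuple (x, y, estimated distance); mixing ranges derived from 2.4 GHz and 868 MHz measurements only changes how the distances are produced, not the fusion step.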
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method
Directory of Open Access Journals (Sweden)
Jure Tuta
2018-03-01
Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
CFD modeling of thermal mixing in a T-junction geometry using LES model
Energy Technology Data Exchange (ETDEWEB)
Ayhan, Hueseyin, E-mail: huseyinayhan@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey); Soekmen, Cemal Niyazi, E-mail: cemalniyazi.sokmen@hacettepe.edu.tr [Hacettepe University, Department of Nuclear Engineering, Beytepe, Ankara 06800 (Turkey)
2012-12-15
Highlights:
• CFD simulations of temperature and velocity fluctuations for thermal mixing cases in a T-junction are performed.
• It is found that the frequency range of 2-5 Hz contains most of the energy and therefore may cause thermal fatigue.
• This study shows that RANS-based calculations fail to predict a realistic mixing between the fluids.
• The LES model can predict instantaneous turbulence behavior.
Abstract: Turbulent mixing of fluids at different temperatures can lead to temperature fluctuations in the pipe material. These fluctuations, or thermal striping, induce cyclical thermal stresses and the resulting thermal fatigue may cause unexpected failure of pipe material. Therefore, an accurate characterization of temperature fluctuations is important in order to estimate the lifetime of pipe material. Thermal fatigue of the coolant circuits of nuclear power plants is one of the major issues in nuclear safety. To investigate thermal fatigue damage, the OECD/NEA has recently organized a blind benchmark study, including some results of the present work, for prediction of temperature and velocity fluctuations in a thermal mixing experiment in a T-junction. This paper aims to estimate the frequency of velocity and temperature fluctuations in the mixing region using Computational Fluid Dynamics (CFD). Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) models were used to simulate turbulence. CFD results were compared with the available experimental results. Predicted LES results, even on a coarse mesh, were found to be in good agreement with the experimental results in terms of amplitude and frequency of temperature and velocity fluctuations. Analysis of the temperature fluctuations and the power spectral densities (PSD) at the locations having the strongest temperature fluctuations in the tee junction shows that the frequency range of 2-5 Hz
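The kind of spectral analysis used to locate the dominant fluctuation band can be illustrated with a direct-DFT periodogram. The sketch below is our own (O(n²), adequate for short records), not the paper's post-processing code.

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Return the frequency (Hz) of the largest periodogram peak of a
    uniformly sampled signal, via a direct DFT.  The mean is removed
    first so the DC bin cannot win."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):           # positive-frequency bins only
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power = abs(X) ** 2 / n          # periodogram ordinate
        if power > best_p:
            best_k, best_p = k, power
    return best_k * fs / n
```

A fluctuation record dominated by a 3 Hz striping component, sampled at 100 Hz, yields a peak at 3 Hz; in a fatigue assessment one would inspect the energy in the whole 2-5 Hz band rather than a single bin.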
Seniors managing multiple medications: using mixed methods to view the home care safety lens.
Lang, Ariella; Macdonald, Marilyn; Marck, Patricia; Toon, Lynn; Griffin, Melissa; Easty, Tony; Fraser, Kimberly; MacKinnon, Neil; Mitchell, Jonathan; Lang, Eddy; Goodwin, Sharon
2015-12-12
There is a need for policy makers, health system leaders, care providers, researchers, and educators to work with home care clients and caregivers on three key messages for improvement: adapt care delivery models to the home care landscape; develop a palette of user-centered tools to support medication safety in the home; and strengthen health systems integration.
Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model
Energy Technology Data Exchange (ETDEWEB)
Rossi, R; Gallagher, B; Neville, J; Henderson, K
2011-11-11
Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model to (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
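One simple way to model behavioral transition patterns of the sort described above is to estimate a row-stochastic role-transition matrix T from soft role memberships at consecutive snapshots, so that the next-step memberships are approximately the previous ones propagated through T. The estimator below is an illustrative sketch, not necessarily the DBMM procedure.

```python
def estimate_transition_matrix(prev_roles, next_roles, r):
    """Estimate a row-stochastic r x r role-transition matrix from soft
    role-membership vectors of the same nodes at two consecutive
    snapshots, by accumulating soft co-membership mass and normalising
    each row to sum to one."""
    T = [[0.0] * r for _ in range(r)]
    for p, q in zip(prev_roles, next_roles):
        for i in range(r):
            for j in range(r):
                T[i][j] += p[i] * q[j]
    for i in range(r):
        s = sum(T[i]) or 1.0
        T[i] = [v / s for v in T[i]]
    return T
```

Applied per time step, the sequence of matrices summarises how nodes move between roles; a node whose observed memberships depart sharply from T's prediction is a candidate for an unusual behavior transition.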
Cold and hot model investigation of flow and mixing in a multi-jet flare
Energy Technology Data Exchange (ETDEWEB)
Pagot, P.R. [Petrobras Petroleo Brasileiro S.A., Rio de Janeiro (Brazil); Sobiesiak, A. [Windsor Univ., ON (Canada); Grandmaison, E.W. [Queen's Univ., Kingston, ON (Canada). Centre for Advanced Gas Combustion Technology
2003-07-01
The oil and gas industry commonly disposes of hydrocarbon wastes by flaring. This study simulated several features of industrial offshore flares in a multi-jet burner. Cold and hot flow experiments were performed. Twenty-four nozzles mounted on radial arms originating from a central fuel plenum were used in the burner design. In an effort to improve the mixing and radiation characteristics of this type of burner, the effect of various mixing-altering devices on the nozzle exit ports was examined. Flow visualization studies of the cold and hot flow systems were presented, along with details concerning temperature, gas composition and radiation levels from the burner models. The complex flow patterns resulting when multiple jets are injected into a cross-flow stream were demonstrated with the flow visualization studies from the cold model. The trajectory followed by the leading-edge jet for the reference case and the ring attachments was higher than, but similar to, that of a simple round jet in a cross flow. The precessing jets and the cone attachments were more strongly deflected by the cross flow, with a higher degree of mixing between the jets in the nozzle region. For different firing rates, flow visualization, gas temperature, gas composition and radiative heat flux measurements were performed in the hot model studies. Flame trajectories, projected side-view areas and volumes increased with firing rate for all nozzle configurations, and the ring-attachment flare had the smallest flame volume. The gas temperatures reached maximum values at close to 30 per cent of the flame length, and the lowest gas temperature was observed for the flare model with precessing jets. For the reference case nozzle, nitrogen oxide (NOx) concentrations were in the 30 to 45 parts per million (ppm) range. The precessing jet model yielded NOx concentrations in the 22 to 24 ppm range, the lowest obtained. There was a linear dependence between the radiative heat flux from the flames
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can, on the one hand, be readily applied to a large class of simplified multiphase models; on the other, the approach can be seen as a generalization of the models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation
Directory of Open Access Journals (Sweden)
Maitraye Sen
2017-04-01
Full Text Available A discrete element model (DEM) has been developed for an industrial batch bin blender in which three different types of materials are mixed. The mixing dynamics have been evaluated from a model-based study with respect to the blend critical quality attributes (CQAs), which are relative standard deviation (RSD) and segregation intensity. In the actual industrial setup, a sensor mounted on the blender lid is used to determine the blend composition in this region. A model-based analysis has been used to understand the mixing efficiency in the other zones inside the blender and to determine if the data obtained near the blender-lid region are able to provide a good representation of the overall blend quality. Sub-optimal mixing zones have been identified and other potential sampling locations have been investigated in order to obtain a good approximation of the blend variability. The model has been used to study how the mixing efficiency can be improved by varying the key processing parameters, i.e., blender RPM/speed, fill level/volume and loading order. Both segregation intensity and RSD reduce at a lower fill level and higher blender RPM and are a function of the mixing time. This work demonstrates the use of a model-based approach to improve process knowledge regarding a pharmaceutical mixing process. The model can be used to acquire qualitative information about the influence of different critical process parameters and equipment geometry on the mixing dynamics.
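The two CQAs named in this abstract are simple statistics of sampled concentrations. A minimal sketch of how they might be computed (the function names and the Lacey-type normalization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rsd(conc):
    """Relative standard deviation of concentration samples (hypothetical helper)."""
    conc = np.asarray(conc, float)
    return conc.std(ddof=1) / conc.mean()

def segregation_intensity(conc):
    """Lacey-type intensity of segregation for a binary mix: ~0 mixed, ~1 segregated."""
    conc = np.asarray(conc, float)
    c = conc.mean()
    return conc.var() / (c * (1.0 - c))
```

RSD near zero and segregation intensity near zero both indicate a well-mixed blend; fully segregated samples push the intensity toward one.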
A PDP model of the simultaneous perception of multiple objects
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
Supersymmetric U(1)' model with multiple dark matters
International Nuclear Information System (INIS)
Hur, Taeil; Lee, Hye-Sung; Nasri, Salah
2008-01-01
We consider a scenario where a supersymmetric model has multiple dark matter particles. Adding a U(1) ' gauge symmetry is a well-motivated extension of the minimal supersymmetric standard model (MSSM). It can cure the problems of the MSSM such as the μ problem or the proton decay problem with high-dimensional lepton number and baryon number violating operators which R parity allows. An extra parity (U parity) may arise as a residual discrete symmetry after U(1) ' gauge symmetry is spontaneously broken. The lightest U-parity particle (LUP) is stable under the new parity becoming a new dark matter candidate. Up to three massive particles can be stable in the presence of the R parity and the U parity. We numerically illustrate that multiple stable particles in our model can satisfy both constraints from the relic density and the direct detection, thus providing a specific scenario where a supersymmetric model has well-motivated multiple dark matters consistent with experimental constraints. The scenario provides new possibilities in the present and upcoming dark matter searches in the direct detection and collider experiments
A Finite Element Model for Mixed Porohyperelasticity with Transport, Swelling, and Growth.
Directory of Open Access Journals (Sweden)
Michelle Hine Armstrong
Full Text Available The purpose of this manuscript is to establish a unified theory of porohyperelasticity with transport and growth and to demonstrate the capability of this theory using a finite element model developed in MATLAB. We combine the theories of volumetric growth and mixed porohyperelasticity with transport and swelling (MPHETS) to derive a new method that models growth of biological soft tissues. The conservation equations and constitutive equations are developed for both solid-only growth and solid/fluid growth. An axisymmetric finite element framework is introduced for the new theory of growing MPHETS (GMPHETS). To illustrate the capabilities of this model, several example finite element test problems are considered using model geometry and material parameters based on experimental data from a porcine coronary artery. Multiple growth laws are considered, including time-driven, concentration-driven, and stress-driven growth. Time-driven growth is compared against an exact analytical solution to validate the model. For concentration-dependent growth, changing the diffusivity (representing a change in drug) fundamentally changes growth behavior. We further demonstrate that for stress-dependent, solid-only growth of an artery, growth of an MPHETS model results in a more uniform hoop stress than growth in a hyperelastic model for the same amount of growth time using the same growth law. This may have implications in the context of developing residual stresses in soft tissues under intraluminal pressure. To our knowledge, this manuscript provides the first full description of an MPHETS model with growth. The developed computational framework can be used in concert with novel in-vitro and in-vivo experimental approaches to identify the governing growth laws for various soft tissues.
A Finite Element Model for Mixed Porohyperelasticity with Transport, Swelling, and Growth.
Armstrong, Michelle Hine; Buganza Tepole, Adrián; Kuhl, Ellen; Simon, Bruce R; Vande Geest, Jonathan P
2016-01-01
The purpose of this manuscript is to establish a unified theory of porohyperelasticity with transport and growth and to demonstrate the capability of this theory using a finite element model developed in MATLAB. We combine the theories of volumetric growth and mixed porohyperelasticity with transport and swelling (MPHETS) to derive a new method that models growth of biological soft tissues. The conservation equations and constitutive equations are developed for both solid-only growth and solid/fluid growth. An axisymmetric finite element framework is introduced for the new theory of growing MPHETS (GMPHETS). To illustrate the capabilities of this model, several example finite element test problems are considered using model geometry and material parameters based on experimental data from a porcine coronary artery. Multiple growth laws are considered, including time-driven, concentration-driven, and stress-driven growth. Time-driven growth is compared against an exact analytical solution to validate the model. For concentration-dependent growth, changing the diffusivity (representing a change in drug) fundamentally changes growth behavior. We further demonstrate that for stress-dependent, solid-only growth of an artery, growth of an MPHETS model results in a more uniform hoop stress than growth in a hyperelastic model for the same amount of growth time using the same growth law. This may have implications in the context of developing residual stresses in soft tissues under intraluminal pressure. To our knowledge, this manuscript provides the first full description of an MPHETS model with growth. The developed computational framework can be used in concert with novel in-vitro and in-vivo experimental approaches to identify the governing growth laws for various soft tissues.
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed, and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments, or splitting the parameter symmetrically between the two treatments, can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
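The direct-versus-indirect inconsistency that side-splitting targets can be illustrated with the classical Bucher comparison through a common comparator (a simplified two-step stand-in for illustration, not the side-splitting model itself):

```python
import math

def bucher_inconsistency(d_ac_direct, se_ac_direct, d_ab, se_ab, d_cb, se_cb):
    """Compare direct A-vs-C evidence with the indirect estimate formed
    through a common comparator B: d_indirect(AC) = d(AB) - d(CB)."""
    d_ind = d_ab - d_cb
    se_ind = math.sqrt(se_ab ** 2 + se_cb ** 2)
    diff = d_ac_direct - d_ind                    # the inconsistency parameter
    se_diff = math.sqrt(se_ac_direct ** 2 + se_ind ** 2)
    return diff, se_diff, diff / se_diff          # z far from 0 flags inconsistency
```

A z-statistic near zero suggests the direct and indirect evidence agree; side-splitting generalizes this idea inside the full network model.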
Mixed Higher Order Variational Model for Image Recovery
Directory of Open Access Journals (Sweden)
Pengfei Liu
2014-01-01
Full Text Available A novel mixed higher order regularizer involving the first and second degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to the equivalent formulation of the proposed regularizer, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme by experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
Advective Mixing in a Nondivergent Barotropic Hurricane Model
2010-01-20
…devoted to the mixing of fluid from different regions of a hurricane, which is considered a fundamental mechanism intimately related to… The stretching over the time range is governed by the Cauchy-Green deformation tensor $\Delta(x_0, t_0) = \left(d_{x_0}\phi_{t_0}^{t_0+T}(x_0)\right)^{*}\left(d_{x_0}\phi_{t_0}^{t_0+T}(x_0)\right)$, and becomes maximal when $\xi_0$ is
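Stretching measures of this kind are commonly evaluated numerically by finite-differencing the flow map on a grid and taking the largest eigenvalue of the Cauchy-Green tensor. A small sketch under an assumed analytic vortex field (the velocity field, step counts, and the Lyapunov-exponent normalization are illustrative, not taken from the report):

```python
import numpy as np

def vel(x, y):
    # hypothetical steady vortex, a crude stand-in for a hurricane wind field
    r2 = x * x + y * y + 1e-9
    return -y / r2, x / r2

def advect(x, y, T, n=200):
    """Flow map phi_{t0}^{t0+T}: midpoint (RK2) particle integration."""
    dt = T / n
    for _ in range(n):
        u, v = vel(x, y)
        um, vm = vel(x + 0.5 * dt * u, y + 0.5 * dt * v)
        x, y = x + dt * um, y + dt * vm
    return x, y

def ftle(x0, y0, T, h=1e-4):
    """Largest finite-time Lyapunov exponent from the Cauchy-Green tensor."""
    xr, yr = advect(x0 + h, y0, T)
    xl, yl = advect(x0 - h, y0, T)
    xu, yu = advect(x0, y0 + h, T)
    xd, yd = advect(x0, y0 - h, T)
    F = np.array([[xr - xl, xu - xd],
                  [yr - yl, yu - yd]]) / (2.0 * h)   # flow-map gradient d_{x0} phi
    C = F.T @ F                                      # Cauchy-Green tensor
    return np.log(np.sqrt(np.linalg.eigvalsh(C).max())) / abs(T)
```

Ridges of this field mark material barriers that organize advective mixing.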
Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements
Energy Technology Data Exchange (ETDEWEB)
Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)
2016-12-13
Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, owing to much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
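The decorrelation idea behind GRAMMAR can be sketched as whitening the phenotype and design by the covariance implied by the random effect, then running ordinary least squares on the whitened data. In this sketch the kinship matrix, variance components, and simulated effect size are toy assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
snp = rng.integers(0, 3, n).astype(float)      # genotypes coded 0/1/2 (synthetic)
y = 0.5 * snp + rng.normal(size=n)             # phenotype with a true SNP effect of 0.5

K = np.eye(n)                                  # kinship matrix (identity for illustration)
sg, se = 0.3, 1.0                              # variance components, assumed known here
V = sg * K + se * np.eye(n)                    # phenotypic covariance implied by the model

L = np.linalg.cholesky(V)                      # whiten: V = L L^T
y_w = np.linalg.solve(L, y)
X_w = np.linalg.solve(L, np.column_stack([np.ones(n), snp]))
beta, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
# beta[1] estimates the SNP effect after decorrelation
```

In a real analysis the variance components are estimated once from the null mixed model, which is what makes genome-wide scans cheap.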
Dealing with Multiple Solutions in Structural Vector Autoregressive Models.
Beltz, Adriene M; Molenaar, Peter C M
2016-01-01
Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek
2008-01-01
Roč. 24, č. 4 (2008), s. 659-678 ISSN 0169-2070 R&D Projects: GA AV ČR 1ET400300513 Institutional research plan: CEZ:AV0Z10300504 Keywords : individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling Subject RIV: JE - Non-nuclear Energetics, Energy Consumption ; Use Impact factor: 1.685, year: 2008
Automatic Generation of 3D Building Models with Multiple Roofs
Institute of Scientific and Technical Information of China (English)
Kenichi Sugihara; Yoshitugu Hayashi
2008-01-01
Based on building footprints (building polygons) on digital maps, we are proposing the GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygon). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we proposed a useful polygon expression in deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.
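One simple way to partition an orthogonal building polygon into rectangles (not necessarily the authors' vertex-based dividing-line scheme) is to cut along every vertex coordinate, keep the grid cells whose centers lie inside the footprint, and merge adjacent cells row by row:

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def partition_rectangles(poly):
    """Split an axis-aligned (orthogonal) footprint into rectangles."""
    xs = sorted({p[0] for p in poly})
    ys = sorted({p[1] for p in poly})
    rects = []
    for j in range(len(ys) - 1):
        run = None                        # merge adjacent interior cells in this row
        for i in range(len(xs) - 1):
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            if point_in_polygon(cx, cy, poly):
                if run is None:
                    run = [xs[i], ys[j], xs[i + 1], ys[j + 1]]
                else:
                    run[2] = xs[i + 1]
            elif run is not None:
                rects.append(tuple(run))
                run = None
        if run is not None:
            rects.append(tuple(run))
    return rects

# example: an L-shaped footprint splits into two rectangles
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
rects = partition_rectangles(l_shape)
```

Each resulting rectangle can then carry a box-shaped body and a rectangular roof, as in the system described above.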
A tactical supply chain planning model with multiple flexibility options
DEFF Research Database (Denmark)
Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph
2016-01-01
Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed...
Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation
Czech Academy of Sciences Publication Activity Database
Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.
2009-01-01
Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf
Mathematical, physical and numerical principles essential for models of turbulent mixing
Energy Technology Data Exchange (ETDEWEB)
Sharp, David Howland [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Yu, Yan [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV
2009-01-01
We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well-studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration-driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.
The Simulation of Financial Markets by Agent-Based Mix-Game Models
Chengling Gou
2006-01-01
This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...
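A minimal simulation in the spirit of the mix-game can be built on the standard minority game, with a subgroup of agents rewarded for joining the majority instead. All parameter values below are illustrative, not the paper's calibrated spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
N, N1, M, S, T = 101, 30, 3, 2, 500   # agents, majority-seekers, memory, strategies, rounds
P = 2 ** M                             # number of distinct histories
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))
history = int(rng.integers(P))
attendance = []
for _ in range(T):
    best = scores.argmax(axis=1)                     # each agent's best strategy so far
    actions = strategies[np.arange(N), best, history]
    A = int(actions.sum())
    attendance.append(A)
    minority = -np.sign(A)                           # minority action wins in a plain MG
    # mix-game twist: the first N1 agents are rewarded for joining the majority
    target = np.where(np.arange(N)[:, None] < N1, -minority, minority)
    scores += strategies[:, :, history] == target
    history = int(((history << 1) | int(minority > 0)) % P)
sigma2 = np.var(attendance)                          # sigma^2 / N gauges efficiency
```

Sweeping N1, M, and T in such a model is how one maps out the parameter spectra that best mimic real-market stylized facts.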
The Simulation of Financial Markets by an Agent-Based Mix-Game Model
Chengling Gou
2006-01-01
This paper studies the simulation of financial markets using an agent-based mix-game model which is a variant of the minority game (MG). It specifies the spectra of parameters of mix-game models that fit financial markets by investigating the dynamic behaviors of mix-game models under a wide range of parameters. The main findings are (a) in order to approach efficiency, agents in a real financial market must be heterogeneous, boundedly rational and subject to asymmetric information; (b) an ac...
Feedback structure based entropy approach for multiple-model estimation
Institute of Scientific and Technical Information of China (English)
Shen-tu Han; Xue Anke; Guo Yunfei
2013-01-01
The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, the MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.
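The minimum-entropy flavor of model-set adaptation can be caricatured as scoring the current model probabilities by Shannon entropy and retaining the smallest subset that carries most of the posterior mass. This is a toy stand-in for the full MEVSMM machinery, with invented function names:

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of the current model-probability vector."""
    p = np.asarray(probs, float)
    p = p[p > 0.0]
    return -(p * np.log(p)).sum()

def prune_model_set(models, probs, keep_mass=0.95):
    """Keep the smallest model subset whose total probability exceeds keep_mass."""
    order = np.argsort(probs)[::-1]
    kept, mass = [], 0.0
    for i in order:
        kept.append(models[i])
        mass += probs[i]
        if mass >= keep_mass:
            break
    return kept
```

Low entropy means the filter is confident in few modes, so a small model set suffices; high entropy argues for enlarging it.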
Bayes factor between Student t and Gaussian mixed models within an animal breeding context
Directory of Open Access Journals (Sweden)
García-Cortés Luis
2008-07-01
Full Text Available Abstract The implementation of Student t mixed models in animal breeding has been suggested as a useful statistical tool to effectively mute the impact of preferential treatment or other sources of outliers in field data. Nevertheless, these additional sources of variation are undeclared and we do not know whether a Student t mixed model is required or if a standard, and less parameterized, Gaussian mixed model would be sufficient to serve the intended purpose. Within this context, our aim was to develop the Bayes factor between two nested models that only differed in a bounded variable in order to easily compare a Student t and a Gaussian mixed model. It is important to highlight that the Student t density converges to a Gaussian process when the degrees of freedom tend to infinity. The two models can then be viewed as nested models that differ in terms of degrees of freedom. The Bayes factor can be easily calculated from the output of a Markov chain Monte Carlo sampling of the complex model (the Student t mixed model). The performance of this Bayes factor was tested under simulation and on a real dataset, using the deviance information criterion (DIC) as the standard reference criterion. The two statistical tools showed similar trends along the parameter space, although the Bayes factor appeared to be the more conservative. There was considerable evidence favoring the Student t mixed model for data sets simulated under Student t processes with limited degrees of freedom, and moderate advantages associated with using the Gaussian mixed model when working with datasets simulated with 50 or more degrees of freedom. For the analysis of real data (weight of Pietrain pigs at six months), both the Bayes factor and DIC slightly favored the Student t mixed model, with a reduced incidence of outlier individuals in this population.
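The practical question (is a Student t error model worth its extra parameter?) can be previewed outside the Bayesian machinery by fitting both densities by maximum likelihood and comparing an information criterion. A sketch on synthetic heavy-tailed data, not the Bayes factor of the paper:

```python
import numpy as np
from scipy import stats

data = stats.t.rvs(df=3, size=500, random_state=2)    # heavy-tailed synthetic "field data"

mu, sd = stats.norm.fit(data)                         # Gaussian fit: 2 parameters
ll_norm = stats.norm.logpdf(data, mu, sd).sum()

df_hat, loc, scale = stats.t.fit(data)                # Student t fit: 3 parameters
ll_t = stats.t.logpdf(data, df_hat, loc, scale).sum()

n = data.size
bic_norm = 2 * np.log(n) - 2 * ll_norm
bic_t = 3 * np.log(n) - 2 * ll_t                      # lower BIC wins
```

For data generated with few degrees of freedom the t model should win despite its penalty, mirroring the Bayes factor's behavior reported above.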
Swell impact on wind stress and atmospheric mixing in a regional coupled atmosphere-wave model
DEFF Research Database (Denmark)
Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik
2016-01-01
Over the ocean, the atmospheric turbulence can be significantly affected by swell waves. Change in the atmospheric turbulence affects the wind stress and atmospheric mixing over swell waves. In this study, the influence of swell on atmospheric mixing and wind stress is introduced into an atmosphere-wave-coupled regional climate model, separately and combined. The swell influence on atmospheric mixing is introduced into the atmospheric mixing length formula by adding a swell-induced contribution to the mixing. The swell influence on the wind stress under wind-following swell, moderate-range wind, and near-neutral and unstable stratification conditions is introduced by changing the roughness length. Five-year simulation results indicate that adding the swell influence on atmospheric mixing has limited influence, only slightly increasing the near-surface wind speed; in contrast, adding the swell influence on wind stress...
Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu
2018-05-01
A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing result are clarified in terms of the characteristics of bottom-blowing plumes, the interaction between plumes and top-blowing jets, and the change of bath flow structure.
Alpha-modeling strategy for LES of turbulent mixing
Geurts, Bernard J.; Holm, Darryl D.; Drikakis, D.; Geurts, B.J.
2002-01-01
The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which
Modelling of diffuse solar fraction with multiple predictors
Energy Technology Data Exchange (ETDEWEB)
Ridley, Barbara; Boland, John [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Lauret, Philippe [Laboratoire de Physique du Batiment et des Systemes, University of La Reunion, Reunion (France)
2010-02-15
For some locations both global and diffuse solar radiation are measured. However, for many locations, only global radiation is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from direct on some other plane using trigonometry, we need to have diffuse radiation on the horizontal plane available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia. Boland et al. developed a validated model for Australian conditions. Boland et al. detailed our recent advances in developing the theoretical framework for the use of the logistic function instead of piecewise linear or simple nonlinear functions and was the first step in identifying the means for developing a generic model for estimating diffuse from global and other predictors. We have developed a multiple predictor model, which is much simpler than previous models, and uses hourly clearness index, daily clearness index, solar altitude, apparent solar time and a measure of persistence of global radiation level as predictors. This model performs marginally better than currently used models for locations in the Northern Hemisphere and substantially better for Southern Hemisphere locations. We suggest it can be used as a universal model. (author)
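The logistic form referred to above maps predictors to a diffuse fraction between 0 and 1. A single-predictor sketch fit to synthetic hourly data (the coefficients and noise level are invented for illustration; the full model adds daily clearness index, solar altitude, apparent solar time, and persistence as predictors):

```python
import numpy as np
from scipy.optimize import curve_fit

def diffuse_fraction(kt, b0, b1):
    # logistic in the hourly clearness index kt
    return 1.0 / (1.0 + np.exp(b0 + b1 * kt))

rng = np.random.default_rng(3)
kt = rng.uniform(0.05, 0.85, 300)                      # synthetic hourly clearness index
d_obs = np.clip(diffuse_fraction(kt, -5.0, 8.6)
                + rng.normal(0.0, 0.05, kt.size), 0.0, 1.0)
(b0, b1), _ = curve_fit(diffuse_fraction, kt, d_obs, p0=(-5.0, 8.0))
```

The logistic shape captures the smooth transition from overcast hours (high diffuse fraction at low clearness) to clear hours, without the discontinuities of piecewise-linear models.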
Eliciting mixed emotions: A meta-analysis comparing models, types and measures.
Directory of Open Access Journals (Sweden)
Raul eBerrios
2015-04-01
Full Text Available The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model – dimensional or discrete – as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = .77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
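Random-effects pooling of the kind used in such a meta-analysis is commonly done with the DerSimonian-Laird estimator; a compact sketch (the paper's actual estimator, moderator analysis, and data are not reproduced here):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2."""
    e = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    theta_fe = (w * e).sum() / w.sum()
    q = (w * (e - theta_fe) ** 2).sum()          # Cochran's Q heterogeneity statistic
    k = e.size
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    theta = (w_re * e).sum() / w_re.sum()
    return theta, np.sqrt(1.0 / w_re.sum()), tau2
```

When the studies are homogeneous, tau^2 collapses to zero and the estimate coincides with the fixed-effect pooled mean.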
Unit physics performance of a mix model in Eulerian fluid computations
Energy Technology Data Exchange (ETDEWEB)
Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory
2011-01-25
In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo
2015-10-15
For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2013-01-01
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…
Review and comparison of bi-fluid interpenetration mixing models
International Nuclear Information System (INIS)
Enaux, C.
2006-01-01
Today there are many two-speed bi-fluid models: Baer-Nunziato models, Godunov-Romensky models, coupled Euler equations, and so on. In this report, one compares the most used models in the fields of physics and mathematics, basing this study on the literature. From the point of view of physics, for each model one reviews: -) the type of mixture considered and the modeling assumptions, -) the technique of construction, -) some properties, such as the respect of thermodynamical principles, the respect of the Galilean invariance principle, or the equilibrium conservation. From the point of view of mathematics, for each model one looks at: -) the possibility of writing the equations in conservative form, -) hyperbolicity, -) the existence of a mathematical entropy. Finally, a unified review of the models is proposed. It is shown that under certain closure assumptions, or for certain flow types, some of the models become equivalent. (author)
Rapidity correlations at fixed multiplicity in cluster emission models
Berger, M C
1975-01-01
Rapidity correlations in the central region among hadrons produced in proton-proton collisions of fixed final state multiplicity n at NAL and ISR energies are investigated in a two-step framework in which clusters of hadrons are emitted essentially independently, via a multiperipheral-like model, and decay isotropically. For n ≳ (1/2)⟨n⟩, these semi-inclusive distributions are controlled by the reaction mechanism which dominates production in the central region. Thus, data offer cleaner insight into the properties of this mechanism than can be obtained from fully inclusive spectra. A method of experimental analysis is suggested to facilitate the extraction of new dynamical information. It is shown that the n independence of the magnitude of semi-inclusive correlation functions reflects directly the structure of the internal cluster multiplicity distribution. This conclusion is independent of certain assumptions concerning the form of the single cluster density in rapidity space. (23 r...
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
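The linear dependency at the heart of the identification problem can be seen directly in a design matrix. A minimal numpy sketch with made-up ages and periods (not data from the paper): because cohort = period - age, adding a cohort column to an intercept-age-period design does not increase its rank.

```python
import numpy as np

# Hypothetical illustration: cohort = period - age, so the three linear
# predictors are collinear and the fixed-effects design matrix is rank-deficient.
age = np.array([20, 30, 40, 20, 30, 40], dtype=float)
period = np.array([1990, 1990, 1990, 2000, 2000, 2000], dtype=float)
cohort = period - age  # exact linear dependency

X_full = np.column_stack([np.ones_like(age), age, period, cohort])
X_drop = np.column_stack([np.ones_like(age), age, period])

rank_full = np.linalg.matrix_rank(X_full)
rank_drop = np.linalg.matrix_rank(X_drop)
print(rank_full, rank_drop)  # the 4-column matrix still has rank 3
```

As the abstract notes, moving to a mixed model does not remove this algebraic dependency; identification instead comes from treating some of the effects as random.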
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
International Nuclear Information System (INIS)
Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo
2014-01-01
A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e., soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE is 0.2 K between the observed and modelled brightness temperatures, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of the retrieved component temperatures to fractional cover, the linear mixing model gives more accurate retrievals for both soil and vegetation temperatures under intermediate fractional cover conditions.
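The inversion step can be illustrated with the linear skeleton of such a model: brightness temperature at each view angle is a cover-weighted combination of the two component temperatures, which least squares can then recover. A hedged sketch with invented fractional covers and temperatures (the paper's actual model and its directional weighting may differ):

```python
import numpy as np

# Two-component linear mixing: T_b(theta) = f_veg(theta)*T_veg + (1 - f_veg(theta))*T_soil.
# With multi-angular observations and known per-angle vegetation fractions,
# the component temperatures follow from least squares. All numbers are hypothetical.
f_veg = np.array([0.3, 0.5, 0.7, 0.8])   # vegetation fraction seen at each view angle
T_veg_true, T_soil_true = 295.0, 310.0   # assumed component temperatures (K)
T_b = f_veg * T_veg_true + (1 - f_veg) * T_soil_true  # synthetic observations

A = np.column_stack([f_veg, 1 - f_veg])
T_veg_hat, T_soil_hat = np.linalg.lstsq(A, T_b, rcond=None)[0]
print(round(T_veg_hat, 1), round(T_soil_hat, 1))  # recovers 295.0 310.0
```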
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
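The one-to-one relation between the F statistic and the model R² can be sketched as below. The mapping shown, R² = (ν1/ν2)F / (1 + (ν1/ν2)F) with numerator df ν1 and denominator df ν2, is one plausible form; in a real mixed model the denominator degrees of freedom depend on the approximation method used, so treat the numbers as illustrative.

```python
# Sketch of a one-to-one mapping between an F statistic for all fixed effects
# and a model R^2. nu1/nu2 are assumed degrees of freedom, not from the paper.
def r2_from_f(f_stat, nu1, nu2):
    ratio = (nu1 / nu2) * f_stat
    return ratio / (1.0 + ratio)

def f_from_r2(r2, nu1, nu2):
    return (r2 / (1.0 - r2)) * (nu2 / nu1)

f = 8.5
r2 = r2_from_f(f, nu1=3, nu2=60)
print(round(r2, 4))                      # 0.2982
print(round(f_from_r2(r2, 3, 60), 4))    # round-trips back to 8.5
```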
The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model
Energy Technology Data Exchange (ETDEWEB)
Bouchard, Christopher Michael [Univ. of Illinois, Urbana-Champaign, IL (United States)
2011-01-01
Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.
Multiplicative Attribute Graph Model of Real-World Networks
Energy Technology Data Exchange (ETDEWEB)
Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)
2010-10-20
Large scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as the Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as to fitting to real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either lognormal or power-law degree distribution depending on certain conditions.
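The multiplicative edge-probability rule can be sketched in a few lines. A hedged toy example with binary attributes and an invented 2x2 affinity matrix shared across attributes (the paper allows a different matrix per attribute; none of these parameter values come from it):

```python
import random

# MAG-style sketch: each node carries binary attributes, and the probability
# of an edge is the product over attributes of an affinity theta[a_u][a_v].
random.seed(7)
n_nodes, n_attrs = 50, 3
theta = [[0.9, 0.5], [0.5, 0.2]]  # hypothetical symmetric affinity matrix
attrs = [[random.randint(0, 1) for _ in range(n_attrs)] for _ in range(n_nodes)]

def edge_prob(u, v):
    p = 1.0
    for a_u, a_v in zip(attrs[u], attrs[v]):
        p *= theta[a_u][a_v]
    return p

edges = [(u, v) for u in range(n_nodes) for v in range(u + 1, n_nodes)
         if random.random() < edge_prob(u, v)]
print(0.0 < edge_prob(0, 1) <= 1.0, len(edges) > 0)
```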
Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame
Energy Technology Data Exchange (ETDEWEB)
Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)
2010-09-15
Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value C_φ = 12. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice for C_φ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)
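For concreteness, the IEM model named above relaxes each notional PDF particle toward the ensemble mean at a rate set by C_φ and the mixing time scale, which preserves the mean and decays the variance. A minimal sketch, not the authors' solver (the time scale τ and time step are invented):

```python
import random

# IEM (interaction by exchange with the mean) update for an ensemble of
# notional PDF particles: d(phi_i)/dt = -(C_phi/2) * (phi_i - <phi>) / tau.
random.seed(1)
phi = [random.random() for _ in range(1000)]
c_phi, tau, dt = 12.0, 1.0, 0.001  # tau and dt are hypothetical

mean0 = sum(phi) / len(phi)
var0 = sum((p - mean0) ** 2 for p in phi) / len(phi)
for _ in range(200):
    m = sum(phi) / len(phi)
    phi = [p - 0.5 * c_phi * (p - m) / tau * dt for p in phi]
mean1 = sum(phi) / len(phi)
var1 = sum((p - mean1) ** 2 for p in phi) / len(phi)
print(abs(mean1 - mean0) < 1e-9, var1 < var0)  # mean preserved, variance decays: True True
```

The unrealistic PDF shape criticized in the abstract follows from this rule: every particle drifts toward the mean, so the distribution never develops the bimodal burnt/unburnt structure of a premixed flame.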
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Consequences of observed Bd-anti Bd mixing in standard and nonstandard models
International Nuclear Information System (INIS)
Datta, A.; Paschos, E.A.; Tuerke, U.
1987-01-01
Implications of the B_d-anti-B_d mixing reported by the ARGUS group are investigated. We show that in order for the standard model to accommodate the result, the B → anti-B hadronic matrix element must satisfy lower bounds as a function of the top quark mass. In this case B_s-anti-B_s mixing is necessarily large (r_s ≳ 0.74) irrespective of m_t. This conclusion remains valid in several popular extensions of the standard model with three generations. In contrast to these models, four-generation models can accommodate simultaneously the observed B_d-anti-B_d mixing and a relatively small B_s-anti-B_s mixing. (orig.)
On the use of the Prandtl mixing length model in the cutting torch modeling
Energy Technology Data Exchange (ETDEWEB)
Mancinelli, B [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina); Minotti, F O; Kelly, H, E-mail: bmancinelli@arnet.com.ar [Instituto de Fisica del Plasma (CONICET), Departamento de Fisica, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)
2011-05-01
The Prandtl mixing length model has been used to take into account the turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c, corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties, it was also found that the value c = 0.08 reproduces both the experimental velocity and temperature data.
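The closure itself is a one-liner: the eddy viscosity is ν_t = l_m² |du/dy| with mixing length l_m = c·L for some characteristic width L. A hedged sketch with invented numbers (only c = 0.08 comes from the abstract; the shear and length scale are placeholders):

```python
# Prandtl mixing-length closure: nu_t = (c * L)^2 * |du/dy|.
# c is the single adjustable coefficient; L and du/dy below are hypothetical.
def eddy_viscosity(du_dy, c, length_scale):
    l_m = c * length_scale
    return l_m ** 2 * abs(du_dy)

# e.g. a shear of 1e5 1/s across a 1 mm characteristic width
print(eddy_viscosity(du_dy=1e5, c=0.08, length_scale=1e-3))  # about 6.4e-4 m^2/s
```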
On the use of the Prandtl mixing length model in the cutting torch modeling
International Nuclear Information System (INIS)
Mancinelli, B; Minotti, F O; Kelly, H
2011-01-01
The Prandtl mixing length model has been used to take into account the turbulent effects in a 30 A high-energy density cutting torch model. In particular, the model requires the introduction of only one adjustable coefficient c, corresponding to the length of action of the turbulence. It is shown that the c value has little effect on the plasma temperature profiles outside the nozzle (the differences being less than 10%), but severely affects the plasma velocity distribution, with differences reaching about 100% at the middle of the nozzle-anode gap. Within the experimental uncertainties, it was also found that the value c = 0.08 reproduces both the experimental velocity and temperature data.
Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA
Directory of Open Access Journals (Sweden)
Lara Dutra Silva
2017-12-01
Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world's ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species in the face of their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (invasive and native trees, respectively), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presences data set. The addition of the GRF increased model adjustment (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.
A brief introduction to mixed effects modelling and multi-model inference in ecology.
Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard
2018-01-01
The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
Mixed-Initiative Control of Autonomous Unmanned Units Under Uncertainty
National Research Council Canada - National Science Library
Thrun, Sebvastian; Gordon, Geoffrey; Burstein, Mark; Diller, David; Fox, Dieter
2006-01-01
.... We developed this control model using Partially Observable Markov Decision Processes. The mixed-initiative interactions enabled users to describe constraints at multiple levels of the planning hierarchy...
Dynamic coordinated control laws in multiple agent models
International Nuclear Information System (INIS)
Morgan, David S.; Schwartz, Ira B.
2005-01-01
We present an active control scheme of a kinetic model of swarming. It has been shown previously that the global control scheme for the model, presented in [Systems Control Lett. 52 (2004) 25], gives rise to spontaneous collective organization of agents into a unified coherent swarm, via steering controls and utilizing long-range attractive and short-range repulsive interactions. We extend these results by presenting control laws whereby a single swarm is broken into independently functioning subswarm clusters. The transition between one coordinated swarm and multiple clustered subswarms is managed simply with a homotopy parameter. Additionally, we present as an alternate formulation, a local control law for the same model, which implements dynamic barrier avoidance behavior, and in which swarm coherence emerges spontaneously.
Laplace transform analysis of a multiplicative asset transfer model
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
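The model itself is easy to state as an agent-based simulation, which is the costly approach the abstract argues the Laplace-transform analysis improves on. A minimal sketch (agent count, step count, and f are invented):

```python
import random

# Multiplicative asset transfer: at each step a random giver transfers a
# fixed fraction f of its own wealth to a random receiver.
# Total wealth is conserved by construction.
random.seed(42)
f, n_agents, n_steps = 0.25, 500, 20000  # hypothetical parameters
wealth = [1.0] * n_agents

total0 = sum(wealth)
for _ in range(n_steps):
    i, j = random.randrange(n_agents), random.randrange(n_agents)
    if i == j:
        continue
    amount = f * wealth[i]
    wealth[i] -= amount
    wealth[j] += amount

print(abs(sum(wealth) - total0) < 1e-6)  # conservation holds: True
```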
The Mixed Quark-Gluon Condensate from the Global Color Symmetry Model
Institute of Scientific and Technical Information of China (English)
ZONG Hong-Shi; PING Jia-Lun; LU Xiao-Fu; WANG Fan; ZHAO En-Guang
2002-01-01
The mixed quark-gluon condensate from the global color symmetry model is derived. It is shown that the mixed quark-gluon condensate depends explicitly on the gluon propagator. This interesting feature may be regarded as an additional constraint on the model of the gluon propagator. The values of the mixed quark-gluon condensate from some ansatz for the gluon propagator are compared with those determined from QCD sum rules.
Development of two phase turbulent mixing model for subchannel analysis relevant to BWR
International Nuclear Information System (INIS)
Sharma, M.P.; Nayak, A.K.; Kannan, Umasankari
2014-01-01
A two-phase flow model is presented which predicts both liquid- and gas-phase turbulent mixing rates between adjacent subchannels of reactor rod bundles. The model presented here is for the slug-churn flow regime, which is dominant compared to the other regimes such as bubbly flow and annular flow, since the turbulent mixing rate is highest in slug-churn flow. In this paper, we define new dimensionless parameters, i.e., a liquid mixing number and a gas mixing number, for two-phase turbulent mixing. The liquid mixing number is a function of the mixture Reynolds number, whereas the gas mixing number is a function of both the mixture Reynolds number and the volumetric fraction of gas. The effects of pressure and of subchannel geometry are also included in this model. The present model has been tested against low-pressure, low-temperature air-water and high-pressure, high-temperature steam-water experimental data and was found to show good agreement with the available experimental data. (author)
Probability distributions in conservative energy exchange models of multiple interacting agents
International Nuclear Information System (INIS)
Scafetta, Nicola; West, Bruce J
2007-01-01
Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ regarding the rules that regulate the energy exchange and boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types and interaction rules that yield not only the exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena including those to be found in complex chemical reactions.
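One standard member of this family, the uniform random-splitting rule whose equilibrium is the exponential (Boltzmann-Gibbs-like) distribution, can be sketched directly; the rule below is a textbook example and not necessarily one of the paper's specific variants:

```python
import random

# Conservative pairwise exchange: two agents pool their energies and split
# the pool uniformly at random. Each interaction conserves total energy.
random.seed(0)
n, steps = 2000, 100000
energy = [1.0] * n

total0 = sum(energy)
for _ in range(steps):
    i, j = random.randrange(n), random.randrange(n)
    if i == j:
        continue
    pool = energy[i] + energy[j]
    r = random.random()
    energy[i], energy[j] = r * pool, (1 - r) * pool

print(abs(sum(energy) - total0) < 1e-6)  # energy conserved: True
```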
Energy Technology Data Exchange (ETDEWEB)
Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh
2009-05-01
Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions
Najibi, Seyed Morteza
2017-02-08
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
A multiple relevance feedback strategy with positive and negative models.
Directory of Open Access Journals (Sweden)
Yunlong Ma
A commonly used strategy to improve search accuracy is through feedback techniques. Most existing work on feedback relies on positive information and has been extensively studied in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from a few positive documents. Therefore, the positive feedback strategy is incapable of improving retrieval in this situation. Conversely, there is a relatively large number of negative documents at the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful way of handling this scenario. In this paper, we consider a scenario where the search results are so poor that there are at most three relevant documents in the top twenty. We then conduct a novel study of multiple strategies for relevance feedback, using both positive and negative examples from the first-pass retrieval, to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple model feedback method is generally more effective than both the baseline method and the methods using only a positive or negative model.
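The positive-plus-negative update can be illustrated with the classical Rocchio vector formulation rather than the authors' language-model method; the weights below are textbook defaults, and all documents and terms are invented:

```python
# Rocchio-style feedback: move the query toward positive documents and away
# from negative ones. This is a generic illustration, not the paper's model.
def rocchio(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    terms = set(query) | {t for d in positives for t in d} | {t for d in negatives for t in d}
    new_q = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in positives) / max(len(positives), 1)
        neg = sum(d.get(t, 0.0) for d in negatives) / max(len(negatives), 1)
        w = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        if w > 0:
            new_q[t] = w
    return new_q

q = {"mixing": 1.0}
pos_docs = [{"mixing": 0.5, "control": 0.8}]
neg_docs = [{"mixing": 0.2, "noise": 0.9}]
new_query = rocchio(q, pos_docs, neg_docs)
print(new_query)  # "noise" receives a negative weight and is dropped
```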
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
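The random-effects pooling step mentioned above can be sketched with the DerSimonian-Laird estimator, a standard choice for combining study effect sizes; the study effects and variances below are invented for illustration.

```python
# Sketch of DerSimonian-Laird random-effects meta-analysis.
# All numbers are invented, not taken from the 63 studies in the abstract.

def random_effects_pool(effects, variances):
    """Return (pooled effect, tau^2) via the DerSimonian-Laird estimator."""
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

effects = [0.9, 0.6, 1.1, 0.4]        # hypothetical per-study effect sizes (d)
variances = [0.04, 0.02, 0.09, 0.03]  # hypothetical sampling variances
pooled, tau2 = random_effects_pool(effects, variances)
```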
Data on copula modeling of mixed discrete and continuous neural time series.
Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou
2016-06-01
Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
Many-electron model for multiple ionization in atomic collisions
International Nuclear Information System (INIS)
Archubi, C D; Montanari, C C; Miraglia, J E
2007-01-01
We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented showing a very good agreement with experimental data
Many-electron model for multiple ionization in atomic collisions
Energy Technology Data Exchange (ETDEWEB)
Archubi, C D [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Montanari, C C [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Miraglia, J E [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina)
2007-03-14
We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented showing a very good agreement with experimental data.
Model selection in Bayesian segmentation of multiple DNA alignments.
Oldmeadow, Christopher; Keith, Jonathan M
2011-03-01
The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional ones. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show that our information criterion approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
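A minimal sketch of how a DIC value is computed from MCMC output may help: the criterion combines the deviance at the posterior mean with an effective-parameter penalty. The paper uses a more robust DIC approximation; the deviance values below are invented.

```python
# Textbook DIC from MCMC output: DIC = D(theta_bar) + 2 * p_D, where
# p_D = (mean posterior deviance) - (deviance at the posterior mean).
# Illustrative numbers only, not from the alignment analysis above.

def dic(deviance_samples, deviance_at_mean):
    mean_dev = sum(deviance_samples) / len(deviance_samples)
    p_d = mean_dev - deviance_at_mean      # effective number of parameters
    return deviance_at_mean + 2 * p_d, p_d

# Hypothetical deviances recorded along an MCMC chain:
samples = [210.4, 208.9, 212.1, 209.5, 211.0]
value, p_d = dic(samples, 207.8)
```

Competing model dimensions would each produce such a value, and the smallest DIC indicates the preferred number of modes.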
Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing
Jeffery, C.; Reisner, J.
2005-12-01
Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud-resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process, in which ice crystals grow by vapor deposition at the expense of super-cooled droplets, is expected to be inhomogeneous in nature: some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]. The process is unresolved at even cloud-resolving scales. Despite the large body of observational evidence that the inhomogeneous mixing process affects cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low-probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as
A multiple-location model for natural gas forward curves
International Nuclear Information System (INIS)
Buffington, J.C.
1999-06-01
This thesis presents an approach for financial modelling of natural gas in which connections between locations are incorporated and the complexities of forward curves in natural gas are considered. Apart from electricity, natural gas is the most volatile commodity traded. Its price is often dependent on the weather and price shocks can be felt across several geographic locations. This modelling approach incorporates multiple risk factors that correspond to various locations. One of the objectives was to determine if the model could be used for closed-form option prices. It was suggested that an adequate model for natural gas must consider 3 statistical properties: volatility term structure, backwardation and contango, and stochastic basis. Data from gas forward prices at Chicago, NYMEX and AECO were empirically tested to better understand these 3 statistical properties at each location and to verify if the proposed model truly incorporates these properties. In addition, this study examined the time series property of the difference of two locations (the basis) and determines that these empirical properties are consistent with the model properties. Closed-form option solutions were also developed for call options of forward contracts and call options on forward basis. The options were calibrated and compared to other models. The proposed model is capable of pricing options, but the prices derived did not pass the test of economic reasonableness. However, the model was able to capture the effect of transportation as well as aspects of seasonality which is a benefit over other existing models. It was determined that modifications will be needed regarding the estimation of the convenience yields. 57 refs., 2 tabs., 7 figs., 1 append
Estimating the numerical diapycnal mixing in an eddy-permitting ocean model
Megann, Alex
2018-01-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of 1/f noise is Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis relates to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
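One way to sketch such a multiplicative point process is an iteration in which each interevent time receives a small multiplicative drift and multiplicative noise, confined by reflecting bounds. The recursion form and every parameter value below are illustrative assumptions, not the paper's fitted model.

```python
# Toy multiplicative recursion for interevent times: each interval gets a
# drift and noise term that both scale with a power of the current interval,
# and reflecting bounds keep the intervals positive and finite.
import random

def simulate_intervals(n, tau0=1.0, gamma=0.0005, mu=0.5, sigma=0.02,
                       tau_min=0.1, tau_max=10.0, seed=42):
    random.seed(seed)
    tau, out = tau0, []
    for _ in range(n):
        tau += gamma * tau ** (2 * mu - 1) + sigma * tau ** mu * random.gauss(0, 1)
        # reflect at the boundaries so intervals stay in [tau_min, tau_max]
        if tau < tau_min:
            tau = 2 * tau_min - tau
        elif tau > tau_max:
            tau = 2 * tau_max - tau
        out.append(tau)
    return out

intervals = simulate_intervals(5000)
```

Summing the simulated intervals gives event times whose counting statistics could then be compared with empirical trading-activity data.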
Dowsley, Martha
2010-01-01
The polar bear (Ursus maritimus) is a common pool resource that contributes to both the subsistence and monetary aspects of the Nunavut mixed economy through its use as food, the sale of hides in the fur trade, and sport hunt outfitting. Sport hunting is more financially profitable than subsistence hunting; however, the proportion of the polar bear quota devoted to the sport hunt has become relatively stable at approximately 20% across Nunavut. This ratio suggests local Inuit organizations are not using a neoclassical economic model based on profit maximization. This paper examines local-level hunting organizations and their institutions (as sets of rules) governing the sport and Inuit subsistence hunts from both formalist and substantivist economic perspectives. It concludes that profit maximization is used within the sport hunting sphere, which fits a neoclassical model of economic rationality. A second and parallel system, better viewed through the substantivist perspective, demonstrates that the communities focus on longer-term goals to maintain and reproduce the socio-economic system of the subsistence economy, which is predicated on maintaining social, human-environment, and human-polar bear relations.
Guideline validation in multiple trauma care through business process modeling.
Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen
2003-07-01
Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version, represented with a standardized meta-model, is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed its structure with respect to formal errors. Several errors were detected across seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools that check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure the sustainability of guideline development, a representation independent of specific applications or providers is necessary. Clinical guidelines could then additionally be used for eLearning, process optimization and workflow management.
Rank-based model selection for multiple ions quantum tomography
International Nuclear Information System (INIS)
Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian
2012-01-01
The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large-dimensional quantum systems, one needs to exploit prior information and the 'sparsity' properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described by a model whose rank is between 7 and 9. Additionally, we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.
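The AIC/BIC selection step can be sketched generically: each candidate rank reports a maximized log-likelihood and a free-parameter count, and the rank minimizing the criterion is kept. The likelihoods and parameter counts below are invented, not taken from the ion-trap data.

```python
# Generic AIC/BIC rank selection over candidate models.
# Candidate tuples are (rank, maximized log-likelihood, free parameters);
# all values here are hypothetical.
import math

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

candidates = [(1, -520.0, 7), (2, -488.0, 14), (3, -486.5, 21)]
n_obs = 400  # hypothetical number of measurement repetitions

best_aic = min(candidates, key=lambda c: aic(c[1], c[2]))[0]
best_bic = min(candidates, key=lambda c: bic(c[1], c[2], n_obs))[0]
```

In this toy example the jump from rank 1 to rank 2 buys a large likelihood gain, while rank 3 does not justify its extra parameters, so both criteria settle on rank 2.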
Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error
Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro
2018-05-01
The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.
Mathematical modeling of the mixing zone for getting bimetallic compound
Energy Technology Data Exchange (ETDEWEB)
Kim, Stanislav L. [Institute of Applied Mechanics, Ural Branch, Izhevsk (Russian Federation)
2011-07-01
A mathematical model is presented of the formation of interatomic bonds in metals and alloys, based on the electrostatic interaction between the outer electron shells of the atoms of chemical elements. Key words: mathematical model, interatomic bonds, electron shells of atoms, potential, electron density, bimetallic compound.
A Thermodynamic Mixed-Solid Asphaltene Precipitation Model
DEFF Research Database (Denmark)
Lindeloff, Niels; Heidemann, R.A.; Andersen, Simon Ivar
1998-01-01
A simple model for the prediction of asphaltene precipitation is proposed. The model is based on an equation of state and uses standard thermodynamics, thus assuming that the precipitation phenomenon is a reversible process. The solid phase is treated as an ideal multicomponent mixture. An activity...
Thermal hydraulic model validation for HOR mixed core fuel management
International Nuclear Information System (INIS)
Gibcus, H.P.M.; Vries, J.W. de; Leege, P.F.A. de
1997-01-01
A thermal-hydraulic core management model has been developed for the Hoger Onderwijsreactor (HOR), a 2 MW pool-type university research reactor. The model was adopted for safety analysis purposes in the framework of HEU/LEU core conversion studies. It is applied in the thermal-hydraulic computer code SHORT (Steady-state HOR Thermal-hydraulics), which is presently in use in designing core configurations and for in-core fuel management. An elaborate measurement program was performed to establish the core hydraulic characteristics for a variety of conditions. The hydraulic data were obtained with a dummy fuel element carrying special equipment allowing, among other things, direct measurement of the true core flow rate. Using these data the thermal-hydraulic model was validated experimentally. The model, experimental tests, and model validation are discussed. (author)
Optical model with multiple band couplings using soft rotator structure
Martyanov, Dmitry; Soukhovitskii, Efrem; Capote, Roberto; Quesada, Jose Manuel; Chiba, Satoshi
2017-09-01
A new dispersive coupled-channel optical model (DCCOM) is derived that describes nucleon scattering on 238U and 232Th targets using a soft-rotator-model (SRM) description of the collective levels of the target nucleus. SRM Hamiltonian parameters are adjusted to the observed collective levels of the target nucleus. SRM nuclear wave functions (mixed in the K quantum number) have been used to calculate coupling matrix elements of the generalized optical model. Five rotational bands are coupled: the ground-state band, the β-, γ-, and non-axial bands, and a negative-parity band. Such a coupling scheme includes almost all levels below 1.2 MeV of excitation energy of the targets. The "effective" deformations that define inter-band couplings are derived from SRM Hamiltonian parameters. Conservation of nuclear volume is enforced by introducing a monopolar deformed potential leading to additional couplings between rotational bands. The present DCCOM describes the total cross section differences between 238U and 232Th targets within experimental uncertainty from 50 keV up to 200 MeV of neutron incident energy. SRM couplings and volume conservation allow a precise calculation of the compound-nucleus (CN) formation cross sections, which is significantly different from the one calculated with rigid-rotor potentials with any number of coupled levels.
Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model
Megann, A.; Nurser, G.
2014-12-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from an analysis of the GO5.0 model, based on the isopycnal watermass analysis of Lee et al (2002), which indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Directory of Open Access Journals (Sweden)
Chung-Min Wu
2013-01-01
Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select an optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The combination of the fuzzy Delphi method, ANP, and TOPSIS into an MCDM model for supplier selection, and its application to a real case, are the unique features of this study.
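The final TOPSIS ranking step can be sketched as follows, assuming benefit-type criteria and externally supplied weights (e.g. from ANP). The decision matrix and weights are invented for illustration.

```python
# TOPSIS sketch: score alternatives by relative closeness to the ideal
# solution. All criteria here are assumed benefit-type (higher is better);
# the supplier scores and weights are invented.
import math

def topsis(matrix, weights):
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

suppliers = [[7, 9, 8], [9, 7, 6], [6, 8, 9]]  # rows: suppliers, cols: criteria
weights = [0.5, 0.3, 0.2]                       # hypothetical ANP weights
scores = topsis(suppliers, weights)
```

With the heaviest weight on the first criterion, the second supplier (strongest on that criterion) comes out on top in this toy example.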
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-04-24
Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of lateral resolution directly results in a defocusing effect outside the confocal region and restricts current OCT probes to small numerical apertures (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckle, due to multiple scatterers within the coherence length, and other random noise. Motivated by these two challenges, a multiple scattering model based on the Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous papers have adopted the first Born approximation, with its assumption of small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order, with a smooth-phase-perturbation assumption, benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.
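The depth-dependent lateral resolution discussed above follows directly from Gaussian beam optics: the spot radius grows away from the focus on the scale of the Rayleigh range. The waist and wavelength below are plausible but invented values.

```python
# Gaussian-beam lateral resolution vs depth: w(z) = w0*sqrt(1 + (z/zR)^2),
# with Rayleigh range zR = pi*w0^2/lambda. Illustrative parameters only.
import math

def spot_radius(z, w0, wavelength):
    """1/e^2 beam radius at axial distance z from the focal plane (SI units)."""
    z_r = math.pi * w0 ** 2 / wavelength  # Rayleigh range
    return w0 * math.sqrt(1 + (z / z_r) ** 2)

w0 = 5e-6            # hypothetical 5 um waist (lateral resolution at focus)
wavelength = 1.3e-6  # typical near-infrared OCT band, assumed here
z_r = math.pi * w0 ** 2 / wavelength
```

At one Rayleigh range from the focus the spot has already widened by a factor of sqrt(2), which is the defocusing penalty the deconvolution approach aims to reduce.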
Resveratrol Neuroprotection in a Chronic Mouse Model of Multiple Sclerosis
Directory of Open Access Journals (Sweden)
Zoe eFonseca-Kelly
2012-05-01
Resveratrol is a naturally occurring polyphenol that activates SIRT1, an NAD-dependent deacetylase. SRT501, a pharmaceutical formulation of resveratrol with enhanced systemic absorption, prevents neuronal loss without suppressing inflammation in mice with relapsing experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis. In contrast, resveratrol has been reported to suppress inflammation in chronic EAE, although neuroprotective effects were not evaluated. The current studies examine potential neuroprotective and immunomodulatory effects of resveratrol in chronic EAE induced by immunization with myelin oligodendroglial glycoprotein peptide in C57/Bl6 mice. The effects of two distinct formulations of resveratrol, administered orally each day, were compared. Resveratrol delayed the onset of EAE compared with vehicle-treated EAE mice, but did not prevent or alter the phenotype of inflammation in spinal cords or optic nerves. Significant neuroprotective effects were observed, with higher numbers of retinal ganglion cells found in the eyes of resveratrol-treated EAE mice with optic nerve inflammation. The results demonstrate that resveratrol prevents neuronal loss in this chronic demyelinating disease model, similar to its effects in relapsing EAE. Differences in immunosuppression compared with prior studies suggest that immunomodulatory effects may be limited and may depend on specific immunization parameters or the timing of treatment. Importantly, neuroprotective effects can occur without immunosuppression, suggesting a potential additive benefit of resveratrol in combination with anti-inflammatory therapies for multiple sclerosis.
Model for CO2 leakage including multiple geological layers and multiple leaky wells.
Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan
2009-02-01
Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Previously, abandoned wells from over a century of oil and gas exploration and production have been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) to oil and gas producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.
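The elementary building block of such semi-analytical leakage estimates is Darcy flow along a leaky well segment connecting two formations. The single-segment sketch below, with invented properties, is far simpler than the multi-well, multi-layer framework the paper develops.

```python
# Darcy flow through one leaky well segment: Q = k * A * dp / (mu * L).
# A toy building block only; all property values are invented, in SI units.

def darcy_leakage_rate(k_well, area, dp, mu, length):
    """Volumetric leakage rate (m^3/s) along a permeable well segment."""
    return k_well * area * dp / (mu * length)

q = darcy_leakage_rate(
    k_well=1e-14,  # hypothetical effective well-cement permeability, m^2
    area=0.05,     # cross-sectional flow area of the segment, m^2
    dp=2.0e5,      # overpressure between the two formations, Pa
    mu=6.0e-5,     # CO2 viscosity at reservoir conditions, Pa*s
    length=100.0,  # vertical separation between formations, m
)
```

A full semi-analytical model chains many such segments across wells and formations and couples them to analytical pressure solutions in each aquifer.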
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko; Steyn, Douw G.
2011-01-01
formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution-the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate
Selecting an optimal mixed products using grey relationship model
Directory of Open Access Journals (Sweden)
Farshad Faezy Razi
2013-06-01
This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) together with a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum inventory level by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
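The grey relational ranking at the heart of such a model can be sketched as follows, assuming benefit-type criteria normalized to [0, 1] and compared with the ideal reference series; the supplier scores below are invented.

```python
# Grey relational analysis sketch: normalize benefit criteria, compare each
# alternative with the all-ones ideal series, and average the grey relational
# coefficients into a grade. Assumes each criterion column is non-constant.

def grey_relational_grades(matrix, zeta=0.5):
    n = len(matrix[0])
    cols = list(zip(*matrix))
    # min-max normalize each benefit criterion to [0, 1]
    norm = [[(x - min(c)) / (max(c) - min(c)) for x, c in zip(row, cols)]
            for row in matrix]
    # absolute deviations from the ideal reference series (all ones)
    deltas = [[abs(1.0 - x) for x in row] for row in norm]
    d_max = max(max(row) for row in deltas)
    d_min = min(min(row) for row in deltas)
    coeff = [[(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
             for row in deltas]
    return [sum(row) / n for row in coeff]

suppliers = [[80, 7, 95], [92, 6, 88], [85, 9, 90]]  # invented criterion scores
grades = grey_relational_grades(suppliers)
```

The supplier with the highest grade is closest overall to the ideal series; an inventory-optimization stage could then be run over the top-ranked suppliers.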
A Situative Space Model for Mobile Mixed-Reality Computing
DEFF Research Database (Denmark)
Pederson, Thomas; Janlert, Lars-Erik; Surie, Dipak
2011-01-01
This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch, at any given point in time.
A Multiple Indicators Multiple Causes (MIMIC) model of internal barriers to drug treatment in China.
Qi, Chang; Kelly, Brian C; Liao, Yanhui; He, Haoyu; Luo, Tao; Deng, Huiqiong; Liu, Tieqiao; Hao, Wei; Wang, Jichuan
2015-03-01
Although evidence exists for distinct barriers to drug abuse treatment (BDATs), investigations of their inter-relationships and of the effect of individual characteristics on the barrier factors have been sparse, especially in China. A Multiple Indicators Multiple Causes (MIMIC) model is applied to address this gap. A sample of 262 drug users was recruited from three drug rehabilitation centers in Hunan Province, China. We applied a MIMIC approach to investigate the effect of gender, age, marital status, education, primary substance use, duration of primary drug use, and drug treatment experience on the internal barrier factors: absence of problem (AP), negative social support (NSS), fear of treatment (FT), and privacy concerns (PC). Drug users with different characteristics were found to report different internal barrier factors. Younger participants were more likely to report NSS (-0.19, p=0.038) and PC (-0.31, p<0.001). Compared to other drug users, ice users were more likely to report AP (0.44, p<0.001) and NSS (0.25, p=0.010). Drug treatment experience was related to AP (0.20, p=0.012). In addition, differential item functioning (DIF) occurred in three items when participants were grouped by duration of drug use, ice use, or marital status. Individual characteristics had significant effects on internal barriers to drug treatment. On this basis, the BDATs perceived by different individuals could be assessed before tactics are deployed to remove those barriers.
Multiple-relaxation-time lattice Boltzmann model for compressible fluids
International Nuclear Information System (INIS)
Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun
2011-01-01
We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to the group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments show that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.
Optimal Retail Price Model for Partial Consignment to Multiple Retailers
Directory of Open Access Journals (Sweden)
Po-Yu Chen
2017-01-01
Full Text Available This paper investigates the product pricing decision-making problem under a consignment stock policy in a two-level supply chain composed of one supplier and multiple retailers. The effects of the supplier’s wholesale prices and of its partial absorption of inventory costs on the retail prices of retailers with different market shares are investigated. In the partial product consignment model this paper proposes, the seller and the retailers each absorb part of the inventory costs. This model also provides general solutions for complete product consignment and for the traditional policy that adopts no product consignment. In other words, both the complete consignment and nonconsignment models are extensions of the proposed model (i.e., special cases). Research results indicated that the optimal retail price must be between 1/2 (50%) and 2/3 (66.67%) times the upper limit of the gross profit. This study also explored the results and influence of parameter variations on the optimal retail price in the model.
Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice
International Nuclear Information System (INIS)
Liu, Xiaojing; Ren, Shuo; Cheng, Xu
2017-01-01
Highlights: • Experiment data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to simulate the large scale impulsion phenomenon for a tight-lattice bundle. • A mixing model to simulate the large scale impulsion phenomenon is proposed based on fitting of CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large scale impulsion of cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model to describe the wave length, amplitude and frequency of the mixing coefficient is still missing. This research work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experiment data with simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model to simulate the large scale impulsion phenomenon is proposed and adopted in the current subchannel code. When the new mixing model is applied to fuel assembly analysis by subchannel calculation, it reduces the hot channel factor and contributes to a more uniform distribution of outlet temperature.
Mixed Emotions: An Incentive Motivational Model of Sexual Deviance.
Smid, Wineke J; Wever, Edwin C
2018-05-01
Sexual offending behavior is a complex and multifaceted phenomenon. Most existing etiological models describe sexual offending behavior as a variant of offending behavior and mostly include factors referring to disinhibition and sexual deviance. In this article, we argue that there is additional value in describing sexual offending behavior as sexual behavior in terms of an incentive model of sexual motivation. The model describes sexual arousal as an emotion, triggered by a competent stimulus signaling potential reward, and comparable to other emotions coupled with strong bodily reactions. Consequently, we describe sexual offending behavior in terms of this new model with emphasis on the development of deviant sexual interests and preferences. Summarized, the model states that because sexual arousal itself is an emotion, there is a bidirectional relationship between sexual self-regulation and emotional self-regulation. Not only can sex be used to regulate emotional states (i.e., sexual coping), emotions can also be used, consciously or automatically, to regulate sexual arousal (i.e., sexual deviance). Preliminary support for the model is drawn from studies in the field of sex offender research as well as sexology and motivation research.
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.
-chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....
Hellemans, V; De Baerdemacker, S; Heyde, K
2008-01-01
The case of U(5)--$\\hat{Q}(\\chi)\\cdot\\hat{Q}(\\chi)$ mixing in the configuration-mixed Interacting Boson Model is studied in its mean-field approximation. Phase diagrams with analytical and numerical solutions are constructed and discussed. Indications for first-order and second-order shape phase transitions can be obtained from binding energies and from critical exponents, respectively.
An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames
Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.
2016-11-01
Transported probability density function (TPDF) methods offer generality across all combustion regimes, which is attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for TPDF methods, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparison with results from both direct numerical simulation (DNS) and the conventional constant mechanical-to-scalar mixing rate model. This work was supported by NSFC grants 51476087 and 91441202.
Schemel, L.E.; Cox, M.H.; Runkel, R.L.; Kimball, B.A.
2006-01-01
The acidic discharge from Cement Creek, containing elevated concentrations of dissolved metals and sulphate, mixed with the circumneutral-pH Animas River over a several hundred metre reach (mixing zone) near Silverton, CO, during this study. Differences in concentrations of Ca, Mg, Si, Sr, and SO42- between the creek and the river were sufficiently large for these analytes to be used as natural tracers in the mixing zone. In addition, a sodium chloride (NaCl) tracer was injected into Cement Creek, which provided a Cl- 'reference' tracer in the mixing zone. Conservative transport of the dissolved metals and sulphate through the mixing zone was verified by mass balances and by linear mixing plots relative to the injected reference tracer. At each of seven sites in the mixing zone, five samples were collected at evenly spaced increments of the observed across-channel gradients, as determined by specific conductance. This created sets of samples that adequately covered the ranges of mixtures (mixing ratios, in terms of the fraction of Animas River water, %AR). Concentrations measured in each mixing zone sample and in the upstream Animas River and Cement Creek were used to compute %AR for the reference and natural tracers. Values of %AR from natural tracers generally showed good agreement with values from the reference tracer, but variability in discharge and end-member concentrations and analytical errors contributed to unexpected outlier values for both injected and natural tracers. The median value (MV) %AR (calculated from all of the tracers) reduced scatter in the mixing plots for the dissolved metals, indicating that the MV estimate reduced the effects of various potential errors that could affect any tracer.
Schemel, Laurence E.; Cox, Marisa H.; Runkel, Robert L.; Kimball, Briant A.
2006-08-01
The acidic discharge from Cement Creek, containing elevated concentrations of dissolved metals and sulphate, mixed with the circumneutral-pH Animas River over a several hundred metre reach (mixing zone) near Silverton, CO, during this study. Differences in concentrations of Ca, Mg, Si, Sr, and SO42- between the creek and the river were sufficiently large for these analytes to be used as natural tracers in the mixing zone. In addition, a sodium chloride (NaCl) tracer was injected into Cement Creek, which provided a Cl- reference tracer in the mixing zone. Conservative transport of the dissolved metals and sulphate through the mixing zone was verified by mass balances and by linear mixing plots relative to the injected reference tracer. At each of seven sites in the mixing zone, five samples were collected at evenly spaced increments of the observed across-channel gradients, as determined by specific conductance. This created sets of samples that adequately covered the ranges of mixtures (mixing ratios, in terms of the fraction of Animas River water, %AR). Concentrations measured in each mixing zone sample and in the upstream Animas River and Cement Creek were used to compute %AR for the reference and natural tracers. Values of %AR from natural tracers generally showed good agreement with values from the reference tracer, but variability in discharge and end-member concentrations and analytical errors contributed to unexpected outlier values for both injected and natural tracers. The median value (MV) %AR (calculated from all of the tracers) reduced scatter in the mixing plots for the dissolved metals, indicating that the MV estimate reduced the effects of various potential errors that could affect any tracer.
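The mixing-ratio calculation described above reduces, per conservative tracer, to a two-end-member linear mixing formula, with the median then taken across tracers. A minimal sketch, using invented end-member and sample concentrations rather than the Animas River/Cement Creek field values:

```python
# Hypothetical illustration of the two-end-member mixing calculation:
# %AR is the fraction of river water implied by one conservative tracer.

def mixing_fraction(c_mix, c_creek, c_river):
    """Percent river water (%AR) from one conservative tracer."""
    return 100.0 * (c_mix - c_creek) / (c_river - c_creek)

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# Invented end-member and mixing-zone concentrations for three tracers.
creek = {"Cl": 30.0, "Ca": 200.0, "SO4": 900.0}
river = {"Cl": 5.0, "Ca": 60.0, "SO4": 100.0}
sample = {"Cl": 12.0, "Ca": 103.0, "SO4": 350.0}

per_tracer = [mixing_fraction(sample[t], creek[t], river[t]) for t in creek]
mv_ar = median(per_tracer)  # the median value (MV) %AR across tracers
```

Taking the median across tracers, as in the study, damps the effect of an outlier estimate from any single tracer.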
Modelling Field Bus Communications in Mixed-Signal Embedded Systems
Directory of Open Access Journals (Sweden)
Alassir Mohamad
2008-01-01
Full Text Available We present a modelling platform using the SystemC-AMS language to simulate field bus communications for embedded systems. Our platform includes the model of an I/O controller IP (in this specific case an I2C controller) that interfaces a master microprocessor with its peripherals on the field bus. Our platform shows the execution of the embedded software and its analog response on the lines of the bus. Moreover, it also takes into account the influence of the circuits' I/O by including their IBIS models in the SystemC-AMS description, as well as the bus lines' imperfections. Finally, we present simulation results to validate our platform and measure the overhead introduced by SystemC-AMS over a pure digital SystemC simulation.
Modelling Field Bus Communications in Mixed-Signal Embedded Systems
Directory of Open Access Journals (Sweden)
Patrick Garda
2008-08-01
Full Text Available We present a modelling platform using the SystemC-AMS language to simulate field bus communications for embedded systems. Our platform includes the model of an I/O controller IP (in this specific case an I2C controller) that interfaces a master microprocessor with its peripherals on the field bus. Our platform shows the execution of the embedded software and its analog response on the lines of the bus. Moreover, it also takes into account the influence of the circuits' I/O by including their IBIS models in the SystemC-AMS description, as well as the bus lines' imperfections. Finally, we present simulation results to validate our platform and measure the overhead introduced by SystemC-AMS over a pure digital SystemC simulation.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is desired.
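The algorithm evaluated above can be sketched compactly. This is a generic preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner, applied to a toy 3x3 symmetric positive-definite system rather than a mixed-model equation system from animal breeding, and it uses plain dense lists instead of the iteration-on-data scheme the paper describes.

```python
# Minimal preconditioned conjugate gradient (PCG) with a diagonal
# preconditioner; the 3x3 system below is a toy example.

def pcg_diagonal(A, b, tol=1e-12, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (dense lists)."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                                # residual b - A x, with x = 0
    z = [r[i] / A[i][i] for i in range(n)]  # apply diagonal preconditioner
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg_diagonal(A, b)  # exact solution is (2/9, 1/9, 13/9)
```

In exact arithmetic CG terminates in at most n iterations; the diagonal preconditioner costs one division per unknown per iteration, which is why it pairs well with iteration on data for very large, sparse mixed-model systems.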
Mathematical modeling of a mixed flow spray dryer
International Nuclear Information System (INIS)
Kasiri, N.; Delkhan, F.
2001-01-01
In this paper a mathematical model has been developed to simulate the behavior of spray dryers with an up-flowing spray. The model is based on mass, energy and momentum balances on a single droplet, and mass and energy balances on the drying gas. The system of nonlinear differential equations thus obtained is solved to predict the changes in temperature, humidity, diameter, velocity components and density of the droplets, as well as the temperature and humidity changes of the drying gas. The predicted results were then compared with an available set of industrial results, and good agreement between the two is reported.
Constraints on the mixing angle between ordinary and heavy leptons in a (V - A) model
International Nuclear Information System (INIS)
Hioki, Zenro
1977-01-01
The possibility of mixing between ordinary and heavy leptons in a pure (V-A) model with SU(2) x U(1) gauge group is investigated. It is shown that, to be consistent with the present experimental data on various neutral current reactions, this mixing must be small for any choice of the Weinberg angle in the case M_W = M_Z cos θ_W. Tri-muon production from the leptonic vertex through this mixing is also discussed. (auth.)
Right-handed quark mixings in minimal left-right symmetric model with general CP violation
International Nuclear Information System (INIS)
Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.
2007-01-01
We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model, which generally has both explicit and spontaneous CP violation. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W_R and the CP phase of the Higgs vacuum expectation values.
2016-06-01
This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
International Nuclear Information System (INIS)
Vold, Erik L.; Scannapieco, Tony J.
2007-01-01
A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
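The species drift flux used above, ρ_i u_di = ρ_i (u_i - u), can be checked numerically. A minimal sketch with an invented two-fluid state, taking u as the mass-averaged mixture velocity (an assumption for illustration; the paper's closure options may define u differently), under which the drift fluxes sum to zero by construction:

```python
# Hypothetical two-fluid state illustrating the species drift flux
# rho_i * u_di = rho_i * (u_i - u), with u the mass-averaged velocity.

# Partial densities (mass per mixture volume) and species velocities.
rho = [0.8, 0.2]
u_i = [1.0, 3.0]

# Mass-averaged mixture velocity.
u = sum(r * v for r, v in zip(rho, u_i)) / sum(rho)

# Drift fluxes rho_i * (u_i - u); with this choice of u they sum to zero,
# so the drift terms redistribute momentum without creating any net flux.
drift_flux = [r * (v - u) for r, v in zip(rho, u_i)]
```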
Safety of Mixed Model Access Control in a Multilevel System
2014-06-01
Conflicts Management Model in School: A Mixed Design Study
Dogan, Soner
2016-01-01
The object of this study is to evaluate the reasons for conflicts occurring in school according to perceptions and views of teachers and resolution strategies used for conflicts and to build a model based on the results obtained. In the research, explanatory design including quantitative and qualitative methods has been used. The quantitative part…
Using hidden Markov models to align multiple sequences.
Mount, David W
2009-07-01
A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
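A toy numeric sketch of the scoring idea above, multiplying transition and emission probabilities along a path, using a match-state-only profile (no insert/delete states, for brevity) with invented column frequencies:

```python
# Hypothetical 3-column DNA profile: each match state carries a symbol
# frequency distribution, as in the description above.
import math

states = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.6, "G": 0.2, "T": 0.1},
    {"A": 0.2, "C": 0.2, "G": 0.5, "T": 0.1},
]
# Transition probabilities into each match state (begin->M1, M1->M2, M2->M3).
transitions = [1.0, 0.9, 0.9]

def log_odds_score(seq, background=0.25):
    """Log2 probability of seq along the match path, vs a uniform background.

    Summing logs is the numerically stable way to 'multiply' the state and
    transition probabilities described in the abstract.
    """
    logp = 0.0
    for trans, state, symbol in zip(transitions, states, seq):
        logp += math.log2(trans) + math.log2(state[symbol] / background)
    return logp

score = log_odds_score("ACG")  # positive: "ACG" fits the profile
```

A real profile HMM (e.g. as built by HMMER) adds insert and delete states and sums over all paths with the forward algorithm; this sketch keeps only the single match path to show the probability bookkeeping.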
Analysis and application of opinion model with multiple topic interactions.
Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng
2017-08-01
To reveal heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact for a topic, their actions are introduced to opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods. Meanwhile, our model has much smaller computational complexity. This paper provides a demonstration for possible practical applications of theoretical opinion dynamics.
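A minimal sketch of one pairwise interaction in the spirit of the model above: opinions and topic features are vectors, an agent's action on a topic is their inner product, and two agents move toward each other only when their actions fall within a tolerance. The specific update rule, threshold, and rate below are assumptions for illustration, not the paper's exact equations.

```python
# Hedged sketch of a bounded-confidence opinion update with topic
# interactions; epsilon (tolerance) and mu (convergence rate) are invented.

def action(opinion, topic):
    """An agent's action toward a topic: product of opinion and feature."""
    return sum(o * t for o, t in zip(opinion, topic))

def interact(op_a, op_b, topic, epsilon=0.5, mu=0.3):
    """If the agents' actions on the topic differ by less than epsilon,
    move each opinion vector toward the other by rate mu."""
    if abs(action(op_a, topic) - action(op_b, topic)) < epsilon:
        new_a = [a + mu * (b - a) for a, b in zip(op_a, op_b)]
        new_b = [b + mu * (a - b) for a, b in zip(op_a, op_b)]
        return new_a, new_b
    return op_a, op_b   # outside the confidence bound: no change

topic = [1.0, 0.0]               # this topic loads only on dimension 0
a, b = [0.2, 0.9], [0.5, -0.4]   # actions 0.2 and 0.5 differ by 0.3 < 0.5
a2, b2 = interact(a, b, topic)
```

Iterating such updates over many agent pairs and topics is what drives the disorder-to-consensus transition the abstract describes.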
Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models
Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.
2016-02-01
Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focuses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient A_Redi, which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m2/s. Observations, however, suggest values that are much larger. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying A_Redi on the amplitude of El Nino. We examine the effect of varying a spatially constant A_Redi over a range of values similar to that seen in the IPCC AR5 models, as well as looking at two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of A_Redi should damp anomalies is borne out in the model, it is more than compensated by a weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the warm pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.
Decision-case mix model for analyzing variation in cesarean rates.
Eldenburg, L; Waller, W S
2001-01-01
This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.
International Nuclear Information System (INIS)
Grossman, Y.
1997-10-01
In supersymmetric models with nonvanishing Majorana neutrino masses, the sneutrino and antisneutrino mix. The conditions under which this mixing is experimentally observable are studied, and mass-splitting of the sneutrino mass eigenstates and sneutrino oscillation phenomena are analyzed
Modelling of Wheat-Flour Dough Mixing as an Open-Loop Hysteretic Process
Czech Academy of Sciences Publication Activity Database
Anderssen, R.; Kružík, Martin
2013-01-01
Vol. 18, No. 2 (2013), pp. 283-293. ISSN 1531-3492. R&D Projects: GA AV ČR IAA100750802. Keywords: Dissipation * Dough mixing * Rate-independent systems. Subject RIV: BA - General Mathematics. Impact factor: 0.628, year: 2013. http://library.utia.cas.cz/separaty/2013/MTR/kruzik-modelling of wheat-flour dough mixing as an open-loop hysteretic process.pdf
Suwa , Misako; Fujimoto , Katsuhito
2006-01-01
Color mixing occurs between background and foreground colors when a pattern is post-printed on a colored area, because ink is not completely opaque. This paper proposes a new method for the correction of color mixing in line patterns such as characters and stamps, using a modified particle density model. Parameters of the color correction can be calculated from two sets of foreground and background colors. By employing this method, the colors of foreground patterns o...
Direction of Effects in Multiple Linear Regression Models.
Wiedermann, Wolfgang; von Eye, Alexander
2015-01-01
Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
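The core mechanic above, comparing third central moments of residuals from the two competing regressions, can be sketched on simulated data. The data-generating process (a skewed predictor with symmetric errors) and the sample size are assumptions for illustration; the formal inference procedures the article discusses (D'Agostino, skewness difference, and bootstrap tests) are omitted.

```python
# Sketch of the direction-of-dependence idea: under the causal direction
# x -> y with symmetric errors, residuals of y ~ x stay (nearly) symmetric,
# while residuals of the reverse regression x ~ y inherit skewness from x.
import random

random.seed(7)
n = 5000
# True direction x -> y; x is skewed (a squared standard normal).
x = [random.gauss(0, 1) ** 2 for _ in range(n)]
y = [0.8 * xi + random.gauss(0, 1) for xi in x]

def ols_residuals(pred, resp):
    """Residuals of a simple OLS regression of resp on pred."""
    mp = sum(pred) / len(pred)
    mr = sum(resp) / len(resp)
    beta = (sum((p - mp) * (r - mr) for p, r in zip(pred, resp))
            / sum((p - mp) ** 2 for p in pred))
    alpha = mr - beta * mp
    return [r - (alpha + beta * p) for p, r in zip(pred, resp)]

def third_central_moment(e):
    m = sum(e) / len(e)
    return sum((v - m) ** 3 for v in e) / len(e)

skew_xy = abs(third_central_moment(ols_residuals(x, y)))  # y ~ x (true)
skew_yx = abs(third_central_moment(ols_residuals(y, x)))  # x ~ y (reverse)
# skew_xy should be near zero; skew_yx should be clearly nonzero.
```

Deciding the direction then amounts to testing which residual set is closer to symmetric, which is where the article's three inference procedures come in.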
Investigating multiple solutions in the constrained minimal supersymmetric standard model
Energy Technology Data Exchange (ETDEWEB)
Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)
2014-02-07
Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution
Directory of Open Access Journals (Sweden)
Weitiao Wu
2013-01-01
Urban arterials in China exhibit mixed traffic flow owing to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
The effect of turbulent mixing models on the predictions of subchannel codes
International Nuclear Information System (INIS)
Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.
1994-01-01
In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))
Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method
Directory of Open Access Journals (Sweden)
Mohd Izhan Mohd Yusoff
2013-01-01
Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. First, we look at a mechanism for determining the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Second, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
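The order-selection step can be sketched with a plain EM fit and an AIC comparison. The following numpy-only sketch uses a simple quantile initialisation in place of the paper's kernel method, on hypothetical two-cluster data; candidate orders are limited to 1 and 2 for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters (hypothetical data, not telecom records).
data = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(8.0, 1.0, 500)])

def em_gmm_1d(x, k, iters=200):
    """Plain EM for a 1-D Gaussian mixture; returns the final log-likelihood."""
    n = len(x)
    # Quantile-based initialisation (a stand-in for the paper's kernel method).
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        total = dens.sum(axis=1, keepdims=True)
        r = dens / total
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return np.log(total).sum()

def aic(loglik, k):
    # Free parameters: (k - 1) weights + k means + k variances = 3k - 1
    return 2 * (3 * k - 1) - 2 * loglik

best_k = min((1, 2), key=lambda k: aic(em_gmm_1d(data, k), k))
print(best_k)
```

On clearly bimodal data the two-component model wins the AIC comparison; the paper additionally uses the log-likelihood curve and a kernel-based choice of initial values.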
Characterising and modelling regolith stratigraphy using multiple geophysical techniques
Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.
2013-12-01
Regolith is the weathered, typically mineral-rich layer from fresh bedrock to land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding composition and characteristics - especially deeper where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental scales. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo
International Nuclear Information System (INIS)
Rupšys, P.
2015-01-01
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
Energy Technology Data Exchange (ETDEWEB)
Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)
2015-10-28
A model of radiative neutrino masses. Mixing and a possible fourth generation
International Nuclear Information System (INIS)
Babu, K.S.; Ma, E.; Pantaleone, J.
1989-01-01
We consider the phenomenological consequences of a recently proposed model with four lepton generations such that the three known neutrinos have radiatively induced Majorana masses. Mixing among generations in the presence of a heavy fourth neutrino necessitates a reevaluation of the usual experimental tests of the standard model. One interesting possibility is to have a τ lifetime longer than predicted by the standard three-generation model. Another is to have neutrino masses and mixing angles in the range needed for a natural explanation of the solar-neutrino puzzle in terms of the Mikheyev-Smirnov-Wolfenstein effect. (orig.)
An applied model for the height of the daytime mixed layer and the entrainment zone
DEFF Research Database (Denmark)
Batchvarova, E.; Gryning, Sven-Erik
1994-01-01
A model is presented for the height of the mixed layer and the depth of the entrainment zone under near-neutral and unstable atmospheric conditions. It is based on the zero-order mixed layer height model of Batchvarova and Gryning (1991) and the parameterization of the entrainment zone depth… mixed-layer height: friction velocity, kinematic heat flux near the ground, and potential temperature gradient in the free atmosphere above the entrainment zone. When information is available on the horizontal divergence of the large-scale flow field, the model also takes into account the effect of subsidence…
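A strongly simplified numerical sketch of this kind of mixed-layer growth model: neglecting the friction-velocity (mechanical) production and subsidence terms, the zero-order budget reduces to h·dh/dt = (1 + 2A)·H/γ, with A the entrainment coefficient, H the kinematic surface heat flux, and γ the temperature gradient aloft. All constants below are typical illustrative values, not taken from the paper:

```python
import numpy as np

A = 0.2            # entrainment coefficient (typical value)
gamma = 0.005      # potential temperature gradient aloft, K/m
dt = 60.0          # time step, s

def kinematic_heat_flux(t):
    # Idealised diurnal surface heat flux (K m/s), peaking at midday; assumed.
    return max(0.0, 0.15 * np.sin(np.pi * t / (12 * 3600.0)))

h = 100.0          # initial mixed-layer height, m
for step in range(int(12 * 3600 / dt)):   # integrate over 12 daylight hours
    H = kinematic_heat_flux(step * dt)
    # Simplified zero-order budget: h * dh/dt = (1 + 2A) * H / gamma
    h += dt * (1 + 2 * A) * H / (gamma * h)

print(h)   # mixed-layer height of order 1 km by the end of the day
```

The full Batchvarova-Gryning model adds the mechanical production term (via friction velocity) and, when divergence data exist, a subsidence velocity acting on dh/dt.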
Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model
Directory of Open Access Journals (Sweden)
Xinjun Lai
2015-01-01
A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit, providing a flexible form for correlation among alternatives and a tractable expression; moreover, heterogeneity in alternative variance can also be addressed. Paths are compared in pairs, where the superiority of the binary Probit can be fully exploited. The Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution for incorporating the normal distribution in route choice with an analytical expression.
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
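The unit-hyper-sphere constraint on the mixing weights can be illustrated numerically: mixing two independent fields with weights (cos θ, sin θ) satisfies w₁² + w₂² = 1 and therefore leaves the variance (and, more generally, the covariance structure) of the mixture unchanged. A minimal numpy sketch under the simplifying assumption of white (uncorrelated) fields:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent Gaussian random fields with the same (here: white)
# covariance structure, flattened to vectors for simplicity.
field_a = rng.standard_normal(200_000)
field_b = rng.standard_normal(200_000)

# Weights on the unit circle: cos(t)^2 + sin(t)^2 = 1 preserves the
# covariance of the mixture, the key constraint in Random Mixing.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
variances = []
for theta in angles:
    mixture = np.cos(theta) * field_a + np.sin(theta) * field_b
    variances.append(mixture.var())

print(min(variances), max(variances))  # all close to the original variance 1
```

In the actual method the fields are conditional simulations honouring observed conductivities, and θ is optimised so that the mixture minimises the head/concentration misfit; the paper's contribution is interpolating the forward-model response around the circle rather than the objective values.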
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
The aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate; the precision of the mixed distribution reliability model is thus greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
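The mixed Weibull reliability function itself is a weighted sum of two-parameter Weibull survival functions, one per failure mode. A hedged sketch with purely illustrative weights and parameters (not fitted engine data):

```python
import numpy as np

def mixed_weibull_reliability(t, weights, shapes, scales):
    """R(t) for a mixture of two-parameter Weibull failure modes.

    weights: mixing proportions (failure-mode weight coefficients), sum to 1.
    shapes:  Weibull shape parameters (beta) per failure mode.
    scales:  Weibull scale parameters (eta) per failure mode.
    """
    t = np.asarray(t, dtype=float)
    r = np.zeros_like(t)
    for w, beta, eta in zip(weights, shapes, scales):
        # Survival function of one Weibull mode: exp(-(t/eta)^beta)
        r += w * np.exp(-(t / eta) ** beta)
    return r

# Hypothetical engine with an infant-mortality mode (beta < 1) and a
# wear-out mode (beta > 1); all numbers are illustrative only.
t = np.array([0.0, 100.0, 1000.0])
R = mixed_weibull_reliability(t, weights=[0.3, 0.7],
                              shapes=[0.8, 3.0], scales=[200.0, 1500.0])
print(R)  # R(0) = 1 and R(t) is non-increasing
```

Estimating the weights, shapes, and scales from failure data is the hard part the paper addresses; the mixture form above is what those estimates feed into.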
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
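The model class dx/dt = A(t)x + B(t) can be sketched in a scalar one-compartment setting, with a piecewise-linear B(t) standing in for the penalized-spline term. All rates, knots, and coefficients below are assumptions for illustration, not the cefamandole analysis:

```python
import numpy as np

# One-compartment sketch: dx/dt = -k*x + B(t), where B(t) is a flexible
# input function standing in for the paper's nonparametric spline term.
k = 0.5                                      # elimination rate (assumed)
knots = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
coefs = np.array([1.0, 0.6, 0.3, 0.1, 0.0])  # hypothetical spline coefficients

def B(t):
    # Piecewise-linear basis expansion as a minimal stand-in for P-splines.
    return np.interp(t, knots, coefs)

def solve(x0=0.0, t_end=8.0, dt=1e-3):
    """Forward-Euler integration of dx/dt = -k*x + B(t)."""
    n = int(t_end / dt)
    x = x0
    for i in range(n):
        t = i * dt
        x += dt * (-k * x + B(t))
    return x

x_final = solve()
print(x_final)   # state after the input function has tapered off
```

In the paper, B(t) is estimated from the data with a roughness penalty inside a nonlinear mixed-effects fit; a nonzero estimated B(t) flags structural misspecification of the parametric PK model.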
Interaction of multiple biomimetic antimicrobial polymers with model bacterial membranes
Energy Technology Data Exchange (ETDEWEB)
Baul, Upayan, E-mail: upayanb@imsc.res.in; Vemparala, Satyavani, E-mail: vani@imsc.res.in [The Institute of Mathematical Sciences, C.I.T. Campus, Taramani, Chennai 600113 (India); Kuroda, Kenichi, E-mail: kkuroda@umich.edu [Department of Biologic and Materials Sciences, University of Michigan School of Dentistry, Ann Arbor, Michigan 48109 (United States)
2014-08-28
Using atomistic molecular dynamics simulations, the interaction of multiple synthetic random methacrylate-based copolymers with prototypical bacterial membranes is investigated. The simulations show that the cationic polymers form a micellar aggregate in the water phase and that the aggregate, when interacting with the bacterial membrane, induces oppositely charged anionic lipid molecules to form clusters and enhances the ordering of lipid chains. The model bacterial membrane consequently develops lateral inhomogeneity in its thickness profile compared to the polymer-free system. The individual polymers in the aggregate are released into the bacterial membrane in a phased manner, and the simulations suggest that the most probable location of the partitioned polymers is near the 1-palmitoyl-2-oleoyl-phosphatidylglycerol (POPG) clusters. The partitioned polymers preferentially adopt facially amphiphilic conformations at the lipid-water interface, despite lacking intrinsic secondary structures such as the α-helix or β-sheet found in naturally occurring antimicrobial peptides.
A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems
Directory of Open Access Journals (Sweden)
R. Dimitri
2014-07-01
Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
The intergenerational multiple deficit model and the case of dyslexia
Directory of Open Access Journals (Sweden)
Elsje van Bergen
2014-06-01
Which children go on to develop dyslexia? Since dyslexia has a multifactorial aetiology, this question can be restated as: what are the factors that put children at high risk of developing dyslexia? It is argued that a useful theoretical framework for addressing this question is Pennington's (2006) multiple deficit model (MDM). This model replaces models that attribute dyslexia to a single underlying cause. Subsequently, the generalist genes hypothesis for learning (dis)abilities (Plomin & Kovas, 2005) is described and integrated with the MDM. Finally, findings are presented from a longitudinal study of children at family risk of dyslexia. Such studies can contribute to testing and specifying the MDM. In this study, risk factors at both the child and family level were investigated. This led to the proposed intergenerational MDM, in which both parents confer liability via intertwined genetic and environmental pathways. Future scientific directions are discussed for investigating parent-offspring resemblance and transmission patterns, which will shed new light on disorder aetiology.
An Advanced N -body Model for Interacting Multiple Stellar Systems
Energy Technology Data Exchange (ETDEWEB)
Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)
2017-06-01
We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one’s disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).
Negative binomial models for abundance estimation of multiple closed populations
Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.
2001-01-01
Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
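The overdispersion that motivates the NBD can be quantified with simple method-of-moments estimates. This is a sketch only, on hypothetical sighting counts; the paper fits NBD models by maximum likelihood and compares them with AIC:

```python
import numpy as np

def nbd_moment_estimates(counts):
    """Method-of-moments estimates for a negative binomial distribution.

    Returns (mean, k): a smaller dispersion parameter k means more
    heterogeneity in detection probability; k -> infinity recovers
    the homogeneous Poisson case.
    """
    counts = np.asarray(counts, dtype=float)
    m = counts.mean()
    v = counts.var(ddof=1)
    if v <= m:
        return m, np.inf        # no overdispersion: Poisson-like
    k = m * m / (v - m)         # from Var = m + m^2 / k
    return m, k

# Hypothetical sighting frequencies of uniquely identified females in one year.
sightings = np.array([1, 1, 2, 2, 3, 3, 4, 5, 7, 12])
m, k = nbd_moment_estimates(sightings)
print(m, k)   # variance exceeds the mean, so k is finite (overdispersed)
```

A finite, small k here is the empirical signature of detection heterogeneity that makes the NBD preferable to a Poisson model for such counts.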
A diagnostic tree model for polytomous responses with multiple strategies.
Ma, Wenchao
2018-04-23
Constructed-response items have been shown to be appropriate for cognitively diagnostic assessments because students' problem-solving procedures can be observed, providing direct evidence for making inferences about their proficiency. However, multiple strategies used by students make item scoring and psychometric analyses challenging. This study introduces the so-called two-digit scoring scheme into diagnostic assessments to record both students' partial credits and their strategies. This study also proposes a diagnostic tree model (DTM) by integrating the cognitive diagnosis models with the tree model to analyse the items scored using the two-digit rubrics. Both convergent and divergent tree structures are considered to accommodate various scoring rules. The MMLE/EM algorithm is used for item parameter estimation of the DTM, and has been shown to provide good parameter recovery under varied conditions in a simulation study. A set of data from TIMSS 2007 mathematics assessment is analysed to illustrate the use of the two-digit scoring scheme and the DTM. © 2018 The British Psychological Society.
A minimal model for multiple epidemics and immunity spreading.
Directory of Open Access Journals (Sweden)
Kim Sneppen
Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
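The model's core rules (mutual exclusion of infections within a host, immunization trails left by recovery, new unrelated diseases injected at a mutation rate) can be sketched as a well-mixed stochastic simulation. Parameters are illustrative, and the published model may differ in contact structure and update rules:

```python
import random

random.seed(3)

N = 200          # hosts
MU = 0.02        # per-step probability that a brand-new disease emerges
BETA = 0.3       # transmission probability per contact
GAMMA = 0.1      # recovery probability per step

current = [None] * N              # at most one active infection per host
immune = [set() for _ in range(N)]
next_disease_id = 0

for step in range(2000):
    # Emergence: a new, unrelated pathogen appears in a random host.
    if random.random() < MU:
        h = random.randrange(N)
        if current[h] is None:
            current[h] = next_disease_id
            next_disease_id += 1
    # Transmission: random contacts; exclusion and immunity block takeover.
    for _ in range(N):
        a, b = random.randrange(N), random.randrange(N)
        d = current[a]
        if d is not None and current[b] is None and d not in immune[b] \
                and random.random() < BETA:
            current[b] = d
    # Recovery leaves a trail of immunization.
    for h in range(N):
        if current[h] is not None and random.random() < GAMMA:
            immune[h].add(current[h])
            current[h] = None

print(next_disease_id, sum(len(s) for s in immune))
```

Even this minimal version reproduces the qualitative picture: waves of distinct diseases sweep the population and deposit growing, host-specific immunity sets behind them.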
A minimal model for multiple epidemics and immunity spreading.
Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan
2010-10-18
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data on temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. A multiple linear regression technique is used with a variable selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed, in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
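The transform-then-regress procedure can be sketched on synthetic data: raise temperature to an assumed power, apply an exponential in humidity, and fit by ordinary least squares. The exponents, coefficients, and data below are illustrative only, not the fitted Kuwait model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily records (illustrative, not the Kuwait measurements):
temp = rng.uniform(15, 45, 300)        # air temperature, deg C
rh = rng.uniform(10, 90, 300)          # relative humidity, %
wind = rng.uniform(0, 10, 300)         # wind speed, m/s

# Assumed generating relation mimicking the paper's transforms: a power
# function of temperature and an exponential function of humidity.
evap = (0.05 * temp**1.3 + 2.0 * np.exp(-0.01 * rh) + 0.2 * wind
        + rng.normal(0, 0.3, 300))     # pan evaporation, mm/day

# Linearise via the same transforms, then fit by ordinary least squares.
X = np.column_stack([np.ones_like(temp), temp**1.3, np.exp(-0.01 * rh), wind])
coef, *_ = np.linalg.lstsq(X, evap, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((evap - pred)**2) / np.sum((evap - np.mean(evap))**2)
print(r2)
```

Because the curvilinear predictors enter the design matrix already transformed, the multiple linear regression recovers the relation well, which mirrors the paper's linearisation strategy.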
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.
2014-01-01
A simple linear mixing model of heterogeneous soil-vegetation system and retrieval of component temperatures from directional remote sensing measurements by inverting this model is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR
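The inversion of such a linear mixing model can be sketched in T⁴ (radiance) space: with the vegetation fraction known at two or more view angles, the component temperatures follow from a small least-squares problem. All numbers below are illustrative, and the noise-free case is shown:

```python
import numpy as np

# Assumed vegetation fractional cover seen at each view angle (a nadir
# view sees more soil; oblique views see more vegetation).
f = np.array([0.4, 0.6, 0.8])          # vegetation fraction per view angle
T_veg_true, T_soil_true = 300.0, 320.0 # component temperatures, K

# Forward linear mixing model in radiance (T^4) space:
radiance = f * T_veg_true**4 + (1 - f) * T_soil_true**4

# Inversion: least-squares solution of the linear system for the two
# component radiances, then back to temperatures.
A = np.column_stack([f, 1 - f])
comp_rad, *_ = np.linalg.lstsq(A, radiance, rcond=None)
T_veg, T_soil = comp_rad ** 0.25
print(T_veg, T_soil)  # recovers the two component temperatures exactly here
```

With real thermal-camera data the system is solved in a least-squares sense over many angles, and measurement noise and emissivity effects limit the retrieval accuracy the paper evaluates.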
Testing the family replication-model through B^0–B̄^0 mixing
International Nuclear Information System (INIS)
Datta, A.; Pati, J.C.
1985-07-01
It is observed that the family-replication idea, proposed in the context of a minimal preon model, necessarily implies maximal mixing (i.e. ΔM >> Γ) either in the B_s^0–B̄_s^0 or the B_d^0–B̄_d^0 system, in contrast to the standard model. (author)
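The "maximal mixing" condition ΔM >> Γ can be made concrete with the standard time-integrated mixing probability χ = x² / (2(1 + x²)), where x = ΔM/Γ (CP violation and the width difference are neglected in this textbook form, which is an assumption of the sketch, not a statement from the paper):

```python
# Time-integrated mixing probability chi as a function of x = DeltaM / Gamma.
# "Maximal mixing" (DeltaM >> Gamma) drives chi to its limiting value of 1/2:
# the meson oscillates many times before decaying, so half the decays occur
# from the mixed state.
def chi(x):
    return x * x / (2.0 * (1.0 + x * x))

for x in (0.1, 1.0, 25.0):
    print(x, chi(x))
# chi(25) is already within about 0.001 of the 0.5 limit
```

This is why a measurement of χ near its saturation value cannot resolve ΔM, only bound it from below, which is relevant to distinguishing the preon-model prediction from the standard-model expectation.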
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Examples of mixed-effects modeling with crossed random effects and with binomial data
Quené, H.; van den Bergh, H.
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not