Geometrical model of multiple production
International Nuclear Information System (INIS)
Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.
1988-01-01
The relation between geometrical scaling and KNO scaling, and their violation, is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given.
Multiple Indicator Stationary Time Series Models.
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Modeling Multiple Causes of Carcinogenesis
Energy Technology Data Exchange (ETDEWEB)
Jones, T D
1999-01-24
An array of epidemiological results and databases on test animals indicate that risk of cancer and atherosclerosis can be up- or down-regulated by diet through a range of 200%. Other factors contribute incrementally and include the natural terrestrial environment and various human activities that jointly produce complex exposures to endotoxin-producing microorganisms, ionizing radiations, and chemicals. Ordinary personal habits and simple physical irritants have been demonstrated to affect the immune response and risk of disease. There tends to be poor statistical correlation of long-term risk with single-agent exposures incurred throughout working careers. However, Agency recommendations for control of hazardous exposures to humans have been substance-specific instead of contextually realistic, even though there is consistent evidence for common mechanisms of toxicological and carcinogenic action. That behavior seems to be best explained by molecular stresses from cellular oxygen metabolism and phagocytosis of antigenic invasion, as well as breakdown of normal metabolic compounds associated with homeostatic- and injury-related renewal of cells. There is continually mounting evidence that marrow stroma, composed largely of monocyte-macrophages and fibroblasts, is important to phagocytic and cytokinetic response, but the complex action of the immune process is difficult to infer from first-principle logic or biomarkers of toxic injury. The many diverse database studies all seem to implicate two important processes, i.e., the univalent reduction of molecular oxygen and the breakdown of arginine, an amino acid, by hydrolysis or digestion of protein, which is attendant to normal antigen-antibody action. This behavior indicates that protection guidelines and risk coefficients should be context dependent to include reference considerations of the composite action of parameters that mediate oxygen metabolism. A logic of this type permits the realistic common-scale modeling of
Multiplicity Control in Structural Equation Modeling
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
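As a rough illustration of the two error-control families compared in this abstract, here is a minimal sketch (not the study's code; the p-values are hypothetical) of Bonferroni familywise control versus Benjamini-Hochberg false discovery rate control over a set of parameter tests:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Familywise error control: reject p_i only if p_i <= alpha / m."""
    p = np.asarray(pvals)
    return p <= alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """False discovery rate control: reject the k smallest p-values,
    where k is the largest i with p_(i) <= alpha * i / m."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

# hypothetical p-values for eight model parameters
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(bonferroni(pvals).sum())          # familywise control: 1 rejection
print(benjamini_hochberg(pvals).sum())  # FDR control: 2 rejections
```

As the counts show, FDR control is less conservative than familywise control on the same tests, which is the power trade-off the study evaluates.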
Design of Xen Hybrid Multiple Policy Model
Sun, Lei; Lin, Renhao; Zhu, Xianwei
2017-10-01
Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security model, has also received widespread attention. XSM does not establish a safety-status classification, and it treats the virtual machine as a managed object, making Dom0 a unique administrative domain that does not satisfy the principle of least privilege. To address these problems, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve BLP's practicability, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC, and to avoid oversized privileges, we limit the privileges of Dom0.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating… …and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods… …to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning…
Multiple model cardinalized probability hypothesis density filter
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Predictive performance models and multiple task performance
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Application of Multiple Evaluation Models in Brazil
Directory of Open Access Journals (Sweden)
Rafael Victal Saliba
2008-07-01
Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-section regression models that estimate the parameters associated with each Value Driver, denominated Market Multiples. We diagnose the behavior of several multiples over the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impact on performance), through a subsequent analysis with segregation of the sample companies by sector. Extrapolating the simple multiples evaluation standards of analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
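The comparison described above can be sketched on synthetic data (all numbers hypothetical, not the article's samples): pricing a cross-section with a simple multiple, price = β·driver, versus a regression that allows an intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical cross-section: value driver x (e.g. earnings) and price P
x = rng.uniform(1.0, 10.0, 60)
P = 8.0 * x + rng.normal(0.0, 2.0, 60)   # prices roughly proportional to the driver

# (a) simple market multiple: price = beta * x, beta as the median of P/x
beta_multiple = np.median(P / x)

# (b) cross-section regression with intercept: price = alpha + beta * x
A = np.column_stack([np.ones_like(x), x])
alpha, beta = np.linalg.lstsq(A, P, rcond=None)[0]

# mean absolute pricing error, relative to observed price
err_a = np.mean(np.abs(P - beta_multiple * x) / P)
err_b = np.mean(np.abs(P - (alpha + beta * x)) / P)
print(f"mean pricing error, simple multiple: {err_a:.3f}")
print(f"mean pricing error, with intercept:  {err_b:.3f}")
```

With data generated this way the two errors are close, echoing the article's finding that adding an intercept need not reduce pricing errors.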
Multiple model adaptive control with mixing
Kuipers, Matthew
Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely-tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full-suite of linear time invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real-time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
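The supervisor's certainty-equivalence mixing can be sketched as follows; the Gaussian weighting rule, candidate gains, and parameter regions are all hypothetical illustrations, not the dissertation's exact scheme:

```python
import numpy as np

def mixing_weights(theta_hat, centers, width):
    """Weight each candidate controller by how close the current parameter
    estimate is to the region it was tuned for; the weights vary smoothly,
    avoiding the switching and chattering of hard supervisory logic."""
    d = np.abs(np.asarray(centers) - theta_hat)
    w = np.exp(-(d / width) ** 2)
    return w / w.sum()

# three candidate gains, each tuned off line for a subset of parameter space
centers = np.array([-1.0, 0.0, 1.0])   # hypothetical region centers
gains = np.array([2.0, 1.0, 0.5])      # hypothetical candidate feedback gains

theta_hat = 0.8                        # current on-line parameter estimate
w = mixing_weights(theta_hat, centers, width=0.5)
u_gain = w @ gains                     # mixed feedback gain fed into the loop
print(w, u_gain)
```

As the estimate drifts, the weights shift smoothly between the off-line-designed candidates, which is the key property the abstract contrasts with switching-based multiple-model schemes.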
Model Pembelajaran Berbasis Penstimulasian Multiple Intelligences Siswa
Directory of Open Access Journals (Sweden)
Edy Legowo
2017-03-01
This article discusses the application of multiple intelligences theory to learning in schools. The discussion begins by outlining the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The next section describes the implementation of multiple intelligences theory in classroom practice, namely how teacher-facilitated learning experiences can stimulate students' multiple intelligences. From the standpoint of multiple intelligences theory, evaluation of student learning outcomes should be carried out using authentic assessment and portfolios, which better allow students to express or actualize their learning outcomes in a variety of ways according to the strengths of their particular intelligences.
Multiple Temperature Model for Near Continuum Flows
International Nuclear Information System (INIS)
XU, Kun; Liu, Hongwei; Jiang, Jianzheng
2007-01-01
In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulations, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between stress and strain is replaced by a temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime.
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-01-01
of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous
Testing for Nonuniform Differential Item Functioning with Multiple Indicator Multiple Cause Models
Woods, Carol M.; Grimm, Kevin J.
2011-01-01
In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…
Multiplicity distributions in the dual parton model
International Nuclear Information System (INIS)
Batunin, A.V.; Tolstenkov, A.N.
1985-01-01
Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account
Evolving Four Part Harmony Using a Multiple Worlds Model
DEFF Research Database (Denmark)
Scirea, Marco; Brown, Joseph Alexander
2015-01-01
This application of the Multiple Worlds Model examines a collaborative fitness model for generating four part harmonies. In this model we have multiple populations and the fitness of the individuals is based on the ability of a member from each population to work with the members of other...
Explaining clinical behaviors using multiple theoretical models
Directory of Open Access Journals (Sweden)
Eccles Martin P
2012-10-01
the five surveys. For the predictor variables, the mean construct scores were above the mid-point on the scale with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. Conclusions We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors, better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.
Integration of multiple, excess, backup, and expected covering models
M S Daskin; K Hogan; C ReVelle
1988-01-01
The concepts of multiple, excess, backup, and expected coverage are defined. Model formulations using these constructs are reviewed and contrasted to illustrate the relationships between them. Several new formulations are presented as is a new derivation of the expected covering model which indicates more clearly the relationship of the model to other multi-state covering models. An expected covering model with multiple time standards is also presented.
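The expected covering construct reviewed above can be illustrated under the standard assumption that each covering server is independently busy with probability q; the demand weights and coverage counts here are hypothetical:

```python
import numpy as np

def expected_coverage(demand, cover_counts, busy_prob):
    """Expected covered demand when node i is covered by k_i servers,
    each independently busy with probability q:
    E = sum_i d_i * (1 - q**k_i)."""
    d = np.asarray(demand, float)
    k = np.asarray(cover_counts)
    return float(np.sum(d * (1.0 - busy_prob ** k)))

# three demand nodes covered by 0, 1, and 2 sited servers, with q = 0.3
E = expected_coverage([10, 20, 30], [0, 1, 2], 0.3)
print(E)  # 10*(1 - 1) + 20*(1 - 0.3) + 30*(1 - 0.09) = 41.3
```

Backup and excess coverage formulations reward the second and later covers; the expected model above values the k-th cover by the marginal probability that it is the first available server.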
A test for the parameters of multiple linear regression models ...
African Journals Online (AJOL)
A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...
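The classical F-test that the abstract's robust procedure is compared against can be sketched on synthetic data (the data and coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + k regressors
beta_true = np.array([1.0, 0.5, 0.0, -0.7])
y = X @ beta_true + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta_hat) ** 2)   # unrestricted residual sum of squares
tss = np.sum((y - y.mean()) ** 2)       # restricted (intercept-only) sum of squares

# classical F-test of H0: all slope parameters are simultaneously zero
F = ((tss - rss) / k) / (rss / (n - k - 1))
print(f"F({k}, {n - k - 1}) = {F:.2f}")  # compare against the F critical value
```

This is the test whose homogeneity-of-variance and no-serial-correlation assumptions the article's robust alternative relaxes.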
Medicare capitation model, functional status, and multiple comorbidities: model accuracy
Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena
2012-01-01
Objective: This study examined financial implications of the CMS-Hierarchical Condition Categories (HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design: The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on prior evidence about possible synergy between these conditions and activities of daily living (ADL) deficiencies, and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, and CHF and dementia. Methods: For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual's annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results: The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less than optimal adjustment for these chronic conditions. Conclusions: Information about beneficiary functional status should be incorporated in reimbursement models, since underpaying providers for caring for a population with multiple comorbidities may provide severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646
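The actual Medicare cost ratio defined in the Methods is a simple normalization; a sketch with hypothetical annual costs:

```python
import numpy as np

def cost_ratios(annual_costs):
    """Actual Medicare cost ratio: each individual's annualized cost divided
    by the mean annual cost over everyone in the study."""
    c = np.asarray(annual_costs, float)
    return c / c.mean()

# four hypothetical beneficiaries' annualized costs in dollars
costs = [4000.0, 8000.0, 12000.0, 16000.0]
r = cost_ratios(costs)
print(r)         # mean cost is 10000, so ratios are 0.4, 0.8, 1.2, 1.6
print(r.mean())  # ratios average to 1 by construction
```

Comparing these ratios against the model's HCC risk ratios, stratified by ADL deficiencies, is what reveals the underprediction the Results describe.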
Multiple Scenario Generation of Subsurface Models
DEFF Research Database (Denmark)
Cordua, Knud Skou
of information is obeyed such that no unknown assumptions and biases influence the solution to the inverse problem. This involves a definition of the probabilistically formulated inverse problem, a discussion about how prior models can be established based on statistical information from sample models… …of the probabilistic formulation of the inverse problem. This function is based on an uncertainty model that describes the uncertainties related to the observed data. In a similar way, a formulation of the prior probability distribution that takes into account uncertainties related to the sample model statistics… …similar to observation uncertainties. We refer to the effect of these approximations as modeling errors. Examples that show how the modeling error is estimated are provided. Moreover, it is shown how these effects can be taken into account in the formulation of the posterior probability distribution…
Multiple system modelling of waste management
International Nuclear Information System (INIS)
Eriksson, Ola; Bisaillon, Mattias
2011-01-01
Highlights: → Linking of models provides a more complete, correct and credible picture of the systems. → The linking procedure is easy to perform and also leads to activation of project partners. → The simulation procedure is a bit more complicated and calls for the ability to run both models. - Abstract: Due to increased environmental awareness, planning and performance of waste management has become more and more complex. Waste management has therefore long been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there have been very few attempts to combine them. Waste management systems can be linked to energy systems through incineration plants. Waste management can be modelled at a quite detailed level, whereas surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions about the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed in comparison with simplistic assumptions about effects in the energy and waste management systems. It is concluded that linking the models provides a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is more complicated and calls for the ability to run both models.
Mean multiplicity in the Regge models with rising cross sections
International Nuclear Information System (INIS)
Chikovani, Z.E.; Kobylisky, N.A.; Martynov, E.S.
1979-01-01
Behaviour of the mean multiplicity and the total cross section σsub(t) of hadron-hadron interactions is considered in the framework of Regge models at high energies. The generating function is constructed for the dipole and froissaron models, and the mean multiplicity and multiplicity moments are calculated. It is shown that the mean multiplicity grows approximately as ln²S in the dipole model, which is in good agreement with experiment. It is also found that in various Regge models the mean multiplicity is approximately proportional to σsub(t)lnS.
Discrete choice models with multiplicative error terms
DEFF Research Database (Denmark)
Fosgerau, Mogens; Bierlaire, Michel
2009-01-01
The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...
Structural model analysis of multiple quantitative traits.
Directory of Open Access Journals (Sweden)
Renhua Li
2006-07-01
We introduce a method for the analysis of multilocus, multitrait genetic data that provides an intuitive and precise characterization of genetic architecture. We show that it is possible to infer the magnitude and direction of causal relationships among multiple correlated phenotypes and illustrate the technique using body composition and bone density data from mouse intercross populations. Using these techniques we are able to distinguish genetic loci that affect adiposity from those that affect overall body size and thus reveal a shortcoming of standardized measures such as body mass index that are widely used in obesity research. The identification of causal networks sheds light on the nature of genetic heterogeneity and pleiotropy in complex genetic systems.
Multiple-lesion track-structure model
International Nuclear Information System (INIS)
Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.
1992-03-01
A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track-structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x-ray and heavy-ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.
Affine LIBOR Models with Multiple Curves
DEFF Research Database (Denmark)
Grbac, Zorana; Papapantoleon, Antonis; Schoenmakers, John
2015-01-01
are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows us to derive Fourier pricing formulas for caps, swaptions, and basis swaptions. A model specification with dependent LIBOR rates is developed that allows for an efficient and accurate calibration to a system of caplet prices.
SDG and qualitative trend based model multiple scale validation
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend based multiple-scale validation is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.
New experimental model of multiple myeloma.
Telegin, G B; Kalinina, A R; Ponomarenko, N A; Ovsepyan, A A; Smirnov, S V; Tsybenko, V V; Homeriki, S G
2001-06-01
NSO/1 (P3x63Ay 8Ut) and SP20 myeloma cells were inoculated into BALB/c OlaHsd mice. NSO/1 cells allowed adequate stage-by-stage monitoring of tumor development. The adequacy of this model was confirmed in experiments with conventional cytostatics: prospidium and cytarabine caused necrosis of tumor cells and reduced animal mortality.
Animal model of human disease. Multiple myeloma
Radl, J.; Croese, J.W.; Zurcher, C.; Enden-Vieveen, M.H.M. van den; Leeuw, A.M. de
1988-01-01
Animal models of spontaneous and induced plasmacytomas in some inbred strains of mice have proven to be useful tools for different studies on tumorigenesis and immunoregulation. Their wide applicability and the fact that after their intravenous transplantation, the recipient mice developed bone
Data Models and Measures for Multiple Social Networks
DEFF Research Database (Denmark)
Magnani, Matteo; Rossi, Luca
2017-01-01
Multiple Social Network Analysis is a discipline defining models, measures, methodologies, and algorithms to study multiple social networks together as a single social system. It is particularly valuable when the networks are interconnected, e.g., the same actors are present in more than one...
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specified dose-response functions and the time-to-death (TTD) distribution. Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with the models developed in this paper. None of the alternative models assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
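The baseline model's two ingredients, an exponential dose-response and a Weibull time-to-death distribution, can be sketched as follows; the parameter values are hypothetical illustrations, not the paper's fitted values:

```python
import numpy as np

def p_death(dose, k):
    """Exponential dose-response: P(death | dose) = 1 - exp(-k * dose)."""
    return 1.0 - np.exp(-k * np.asarray(dose, float))

def sample_ttd(n, shape, scale, rng):
    """Weibull-distributed time-to-death (in days) for responders."""
    return scale * rng.weibull(shape, size=n)

rng = np.random.default_rng(0)
k = 1e-6                         # hypothetical per-spore potency
doses = np.array([1e5, 1e6, 1e7])
print(p_death(doses, k))         # response probability rises with dose

ttd = sample_ttd(10000, shape=2.0, scale=6.0, rng=rng)  # hypothetical parameters
print(ttd.mean())                # near scale * Gamma(1 + 1/shape)
```

Fitting k and the Weibull parameters to a multiple-dose dataset by maximum likelihood is the empirical step the paper's baseline model performs.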
Explaining clinical behaviors using multiple theoretical models
Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie
2012-01-01
Abstract Background In the field of implementation research, there is an increased interest in use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, overlapping behavioral theories and models. We reflect on the strengths and weaknesses of...
Airport choice model in multiple airport regions
Directory of Open Access Journals (Sweden)
Claudia Muñoz
2017-02-01
Full Text Available Purpose: This study aims to analyze travel choices made by air transportation users in multi-airport regions, a crucial component when planning passenger redistribution policies. The purpose of this study is to find a utility function that identifies the variables influencing users' choice of airport on routes to the main cities in the Colombian territory. Design/methodology/approach: This research generates a Multinomial Logit Model (MNL), based on the theory of utility maximization and on data obtained from revealed and stated preference surveys applied to users who reside in the metropolitan area of the Aburrá Valley (Colombia). This zone is the only one in the Colombian territory with two neighboring airports for domestic flights. The airports included in the modeling process were Enrique Olaya Herrera (EOH) Airport and José María Córdova (JMC) Airport. Several model structures were tested, and the MNL proved to be the most significant, revealing that the common variables affecting passenger airport choice include the airfare, the cost of traveling to the airport, and the access time to the airport. Findings and Originality/value: The calibrated airport choice model is a valid and powerful tool to calculate the probability of each analyzed airport being chosen for domestic flights in the Colombian territory, bearing in mind the specific characteristics of each of the attributes contained in the utility function. In addition, these probabilities will be used to calculate future market shares of the two airports considered in this study, generating a support tool for airport and airline marketing policies.
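As a rough illustration of the multinomial logit mechanics described above, the following sketch computes choice probabilities from a utility over airfare and access time; the coefficients and attribute values are hypothetical, not the calibrated Colombian model:

```python
import math

def utility(fare, access_time, b_fare=-0.02, b_time=-0.05):
    """Systematic utility V = b_fare*fare + b_time*access_time (coefficients hypothetical)."""
    return b_fare * fare + b_time * access_time

def mnl_probs(utilities):
    """Multinomial logit: P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical attributes: EOH as the in-town airport (short access), JMC the distant one
V_eoh = utility(fare=100.0, access_time=20.0)
V_jmc = utility(fare=90.0, access_time=60.0)
p_eoh, p_jmc = mnl_probs([V_eoh, V_jmc])
```

With these invented numbers the shorter access time outweighs the higher fare, so the in-town airport gets the larger share.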
Multiple simultaneous event model for radiation carcinogenesis
International Nuclear Information System (INIS)
Baum, J.W.
1976-01-01
A mathematical model is proposed which postulates that cancer induction is a multi-event process, that these events occur naturally, usually one at a time in any cell, and that radiation frequently causes two of these events to occur simultaneously. Microdosimetric considerations dictate that for high-LET radiations the simultaneous events are associated with a single particle or track. The model predicts: (a) linear dose-effect relations for early times after irradiation with small doses, (b) approximate power functions of dose (i.e. D^x) with exponent less than one for populations of mixed age examined at short times after irradiation with small doses, (c) saturation of effect at either long times after irradiation with small doses or for all times after irradiation with large doses, and (d) a net increase in incidence which is dependent on age of observation but independent of age at irradiation. Data of Vogel, for neutron-induced mammary tumors in rats, are used to illustrate the validity of the formulation. This model provides a quantitative framework to explain several unexpected results obtained by Vogel. It also provides a logical framework to explain the dose-effect relations observed in the Japanese survivors of the atomic bombs. (author)
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The
Entrepreneurial intention modeling using hierarchical multiple regression
Directory of Open Access Journals (Sweden)
Marina Jeger
2014-12-01
Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
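The hierarchical (blockwise) regression procedure described above amounts to comparing the R² of nested OLS models as predictor blocks are entered in order. A minimal sketch on synthetic data follows; the block names and effect sizes are invented for illustration:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 200
tpb = rng.normal(size=(n, 3))           # block 1: e.g. theory-of-planned-behavior predictors
effectuation = rng.normal(size=(n, 2))  # block 2: effectuation dimensions
y = tpb @ np.array([0.5, 0.3, 0.2]) + 0.1 * effectuation[:, 0] + rng.normal(size=n)

r2_block1 = r_squared(tpb, y)
r2_block2 = r_squared(np.column_stack([tpb, effectuation]), y)
delta_r2 = r2_block2 - r2_block1  # incremental variance explained by the second block
```

The quantity of interest is `delta_r2`: for nested OLS models it is non-negative, and its size reflects what the later block adds over and above the earlier one, exactly the question the study poses.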
Multiple Time Series Ising Model for Financial Market Simulations
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2015-01-01
In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins of one system to those of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
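A minimal sketch of such a coupled Ising simulation follows, with a simple mean-field coupling between chains standing in for the paper's interaction term; all parameter values are illustrative:

```python
import math
import random

def simulate_coupled_ising(n_spins=50, n_systems=2, steps=300, J=1.0, g=0.3, seed=1):
    """Metropolis simulation of several 1-D Ising chains; g couples each spin to
    the magnetization of the other chains (an illustrative coupling, not
    necessarily the paper's exact interaction)."""
    rng = random.Random(seed)
    spins = [[rng.choice([-1, 1]) for _ in range(n_spins)] for _ in range(n_systems)]
    mags = []
    for _ in range(steps):
        for s in range(n_systems):
            i = rng.randrange(n_spins)
            other_mag = sum(sum(spins[t]) for t in range(n_systems) if t != s) / n_spins
            # local field: nearest neighbours in the same chain + inter-system coupling
            local = J * (spins[s][(i - 1) % n_spins] + spins[s][(i + 1) % n_spins]) + g * other_mag
            dE = 2 * spins[s][i] * local
            if dE <= 0 or rng.random() < math.exp(-dE):  # Metropolis accept (beta = 1)
                spins[s][i] *= -1
        mags.append([sum(chain) / n_spins for chain in spins])
    return mags

mags = simulate_coupled_ising()
```

In the paper's setting the magnetization trace of each system is mapped to a return series; the cross-system coupling `g` is what induces correlated volatilities.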
Correlations in multiple production on nuclei and Glauber model of multiple scattering
International Nuclear Information System (INIS)
Zoller, V.R.; Nikolaev, N.N.
1982-01-01
A critical analysis is performed of the possibility of describing correlation phenomena in multiple production on nuclei within the framework of the Glauber multiple-scattering model, as generalized to particle-production processes by Capella, Krzywicki and Shabelsky. The main conclusion is that the suggested generalization of the Glauber model gives dependences on N_g (N_p) (where N_g is the number of ''grey'' tracks and N_p the number of protons emitted from the nucleus) and, consequently, on ν (the number of intranuclear interactions) that contradict experiment. Independently of the choice of relation between ν and N_g (N_p) in the model, the rapidity correlator R_η is overestimated in the central region and underestimated in the region of nucleus fragmentation. In mean multiplicities these two contradictions with experiment are masked by accidental compensation, so agreement with experiment in N_s as a function of N_g cannot be an argument in favour of the model. It is concluded that the eikonal model does not permit a quantitative description of correlation phenomena in multiple production on nuclei.
Multiple Response Regression for Gaussian Mixture Models with Known Labels.
Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng
2012-12-01
Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.
Adaptive Active Noise Suppression Using Multiple Model Switching Strategy
Directory of Open Access Journals (Sweden)
Quanzhen Huang
2017-01-01
Full Text Available Active noise suppression for applications where the system response varies with time is a difficult problem. The computational burden of existing control algorithms with online identification is heavy and can easily cause control-system instability. A new active noise control algorithm is proposed in this paper by employing a multiple-model switching strategy for secondary path variation, which significantly reduces the computation. Firstly, a noise control system modeling method is proposed for duct-like applications. Then a multiple-model adaptive control algorithm is proposed with a new multiple-model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP TMS320F28335) and real-time experiments were performed to compare the proposed algorithm with the FULMS algorithm with online identification. Experimental verification tests show that the proposed algorithm is effective with good noise suppression performance.
Efficient Adoption and Assessment of Multiple Process Improvement Reference Models
Directory of Open Access Journals (Sweden)
Simona Jeners
2013-06-01
Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes during improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.
Model Seleksi Premi Asuransi Jiwa Dwiguna untuk Kasus Multiple Decrement
Cita, Devi Ramana; Pane, Rolan; ', Harison
2015-01-01
This article discusses a select survival model for the case of multiple decrements in evaluating the endowment life insurance premium for a person currently aged [x] + t years, who was selected at age [x], with an h-year selection period. The case of multiple decrements is here limited to two causes. The annual premium is calculated by first evaluating the single premium and the present value of the annuity under the constant-force assumption.
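Under the constant-force assumption the single premium and annuity value have closed forms; a sketch with two decrement causes follows (forces of decrement and interest are illustrative, and the select-period notation is omitted):

```python
import math

def endowment_premium(mu1, mu2, delta, n):
    """Net single premium A and continuous annual premium P = A / a for an
    n-year endowment insurance under the constant-force assumption with two
    decrement causes (all parameter values illustrative)."""
    mu = mu1 + mu2                                             # total force of decrement
    s = mu + delta
    A = mu / s * (1.0 - math.exp(-s * n)) + math.exp(-s * n)   # insurance + pure endowment
    a = (1.0 - math.exp(-s * n)) / s                           # continuous annuity value
    return A, A / a

A, P = endowment_premium(mu1=0.01, mu2=0.005, delta=0.04, n=10)
```

The first term of `A` pays on decrement within n years from either cause; the second is the pure endowment paid on survival to time n.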
AgMIP Training in Multiple Crop Models and Tools
Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn
2015-01-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used by the various models. Two activities were undertaken to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model with which they had the least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.
A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk
Directory of Open Access Journals (Sweden)
Lewei Duan
2013-01-01
Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.
Parametric modeling for damped sinusoids from multiple channels
DEFF Research Database (Denmark)
Zhou, Zhenhua; So, Hing Cheung; Christensen, Mads Græsbøll
2013-01-01
The problem of parametric modeling for noisy damped sinusoidal signals from multiple channels is addressed. Utilizing the shift invariance property of the signal subspace, the number of distinct sinusoidal poles in the multiple channels is first determined. With the estimated number, the distinct frequencies and damping factors are then computed with the multi-channel weighted linear prediction method. The estimated sinusoidal poles are then matched to each channel according to the extreme value theory of distribution of random fields. Simulations are performed to show the performance advantages of the proposed multi-channel sinusoidal modeling methodology compared with existing methods.
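For a single damped sinusoid, the linear-prediction idea reduces to a one-step least-squares pole estimate; the following is a minimal single-channel sketch (not the paper's multi-channel weighted method), with all signal parameters invented for illustration:

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), illustrative
n = np.arange(200)
f_true, d_true = 50.0, 20.0      # frequency (Hz) and damping factor (1/s)
z_true = np.exp((-d_true + 2j * np.pi * f_true) / fs)   # signal pole

rng = np.random.default_rng(0)
x = z_true ** n + 0.005 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

def estimate_pole(x):
    """Least-squares one-step linear predictor x[n+1] ≈ z·x[n] for a single
    complex damped exponential (np.vdot conjugates its first argument)."""
    return np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])

z_hat = estimate_pole(x)
freq_hat = np.angle(z_hat) * fs / (2 * np.pi)   # estimated frequency (Hz)
damp_hat = -np.log(np.abs(z_hat)) * fs          # estimated damping factor (1/s)
```

The pole's angle encodes the frequency and its magnitude the damping; the multi-channel method in the paper estimates a common pole set and then assigns poles to channels.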
A Multiple Model Prediction Algorithm for CNC Machine Wear PHM
Directory of Open Access Journals (Sweden)
Huimin Chen
2011-01-01
Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple-model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple-model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data-driven PHM problems.
Multiple Regression and Beyond
Keith, Timothy Z
2014-01-01
Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM, and more likely to use the methods wisely. The book covers both MR and SEM, while explaining their relevance to one another; it also includes path analysis, confirmatory factor analysis, and latent growth modeling. Figures and tables throughout provide examples and illustrate key concepts and techniques. For additional resources, please visit: http://tzkeith.com/.
Li, Guo; Lv, Fei; Guan, Xu
2014-01-01
This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately; in the other, the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, which makes it applicable to more complicated supply chain environments.
Double-multiple streamtube model for Darrieus wind turbines
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces of Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model divided into two parts: one modeling the upstream half-cycle of the rotor and the other the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the significant improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires much less computer time than the three-dimensional vortex model and is more accurate than the conventional multiple-streamtube model in predicting the aerodynamic blade loads.
Multiple commodities in statistical microeconomics: Model and market
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities; to accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.
Risk Prediction Models for Other Cancers or Multiple Sites
Developing statistical models that estimate the probability of developing cancer at other or multiple sites over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.
An extension of the multiple-trapping model
International Nuclear Information System (INIS)
Shkilev, V. P.
2012-01-01
The hopping charge transport in disordered semiconductors is considered. Using the concept of the transport energy level, macroscopic equations are derived that extend a multiple-trapping model to the case of semiconductors with both energy and spatial disorders. It is shown that, although both types of disorder can cause dispersive transport, the frequency dependence of conductivity is determined exclusively by the spatial disorder.
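The dispersive-transport behaviour attributed to energetic disorder can be illustrated with a toy Monte Carlo of trap-release events drawn from an exponential density of states; the parameters below are illustrative, and the sketch covers only the energy-disorder side of the model, not the spatial disorder:

```python
import random

def multiple_trapping_walk(n_carriers=500, n_traps=200, T=300.0, T0=600.0, seed=2):
    """Monte Carlo of trap-controlled transport with an exponential trap-depth
    distribution of width kT0. For T < T0 the release times are power-law
    distributed and transport becomes dispersive (parameters illustrative)."""
    rng = random.Random(seed)
    alpha = T / T0                     # dispersion parameter (0 < alpha < 1 here)
    arrival_times = []
    for _ in range(n_carriers):
        t = 0.0
        for _ in range(n_traps):
            u = 1.0 - rng.random()     # u in (0, 1]
            # tau = u**(-1/alpha): P(tau > s) = s**(-alpha), heavy-tailed for alpha < 1
            t += u ** (-1.0 / alpha)
        arrival_times.append(t)
    return arrival_times

times = multiple_trapping_walk()
```

Because the release-time distribution has a diverging mean for `alpha < 1`, the carrier arrival-time distribution broadens without limit, the hallmark of dispersive transport.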
Selecting Tools to Model Integer and Binomial Multiplication
Pratt, Sarah Smitherman; Eddy, Colleen M.
2017-01-01
Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
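The area model's partial-products grid is easy to mimic in code; a small sketch for integer multiplication split by place value follows (the same grid of cell products applies when the parts are binomial terms evaluated at a number):

```python
def area_model(a_parts, b_parts):
    """Partial-products grid for the area model: each cell is one partial product
    of a part of the first factor with a part of the second."""
    return [[a * b for b in b_parts] for a in a_parts]

def total(grid):
    """The product is the sum of all cells in the grid."""
    return sum(sum(row) for row in grid)

# 23 * 14 split by place value: (20 + 3)(10 + 4) -> four partial products
grid = area_model([20, 3], [10, 4])
product = total(grid)  # 200 + 80 + 30 + 12
```

Negative parts model integer operations such as (20 - 3)(10 - 4), which is the link to the signed area models used for binomial multiplication.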
Modeling single versus multiple systems in implicit and explicit memory.
Starns, Jeffrey J; Ratcliff, Roger; McKoon, Gail
2012-04-01
It is currently controversial whether priming on implicit tasks and discrimination on explicit recognition tests are supported by a single memory system or by multiple, independent systems. In a Psychological Review article, Berry and colleagues used mathematical modeling to address this question and provide compelling evidence against the independent-systems approach.
Green communication: The enabler to multiple business models
DEFF Research Database (Denmark)
Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv
2010-01-01
Companies stand at the forefront of a new business model reality with new potentials that will change their basic understanding and practice of running their business models radically. One of the drivers of this change is green communication, its strong relation to green business models, and its possibility to enable lower energy consumption. This paper shows how green communication enables innovation of green business models and multiple business models running simultaneously in different markets to different customers.
Infinite Multiple Membership Relational Modeling for Complex Networks
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai
Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show...
Vehicle coordinated transportation dispatching model base on multiple crisis locations
Tian, Ran; Li, Shanwei; Yang, Guoying
2018-05-01
After unconventional emergencies occur, many disastrous events often follow, and the requirements of different disaster sites often differ; it is difficult for a single emergency resource center to satisfy such requirements at the same time. Coordinating the emergency resources stored by multiple emergency resource centers across the various disaster sites therefore requires the coordinated transportation of emergency vehicles. In this paper, addressing the emergency logistics coordination scheduling problem and based on the related constraints of emergency logistics transportation, an emergency resource scheduling model for multiple disaster sites is established.
Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization
International Nuclear Information System (INIS)
Raza, Wasim; Kim, Kwang-Yong
2007-01-01
In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performance of surrogate models. Previously, Ahmad and Kim performed the flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization has not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN) and Kriging (KRG) models. In addition to the three basic models (RSA, RBNN and KRG), the multiple surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem.
A model for diagnosing and explaining multiple disorders.
Jamieson, P W
1991-08-01
The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypotheses generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model currently is in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach; additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
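A common building block for model-based RF localization is a log-distance path-loss model inverted to estimate distance from RSSI. The sketch below (not MFAM's actual adaptive model) combines estimates at two frequencies naively; transmit powers, RSSI values, and the path-loss exponent are hypothetical:

```python
import math

def distance_from_rssi(rssi_dbm, tx_power_dbm, freq_mhz, n=2.5):
    """Invert a log-distance path-loss model to estimate distance in meters.
    Reference loss at d0 = 1 m is free-space: 20*log10(f_MHz) - 27.55 dB;
    n is the indoor path-loss exponent (illustrative value)."""
    pl_d0 = 20.0 * math.log10(freq_mhz) - 27.55
    path_loss = tx_power_dbm - rssi_dbm
    return 10.0 ** ((path_loss - pl_d0) / (10.0 * n))

# hypothetical readings for the two signal types used in the evaluation
d_wifi = distance_from_rssi(-60.0, tx_power_dbm=0.0, freq_mhz=2400.0)  # 2.4 GHz Wi-Fi
d_868 = distance_from_rssi(-50.0, tx_power_dbm=0.0, freq_mhz=868.0)    # 868 MHz HomeMatic
d_fused = (d_wifi + d_868) / 2.0   # naive average; MFAM's fusion is model-based
```

The frequency term in the reference loss is why two bands carry partly independent information: the same distance produces different RSSI at 2.4 GHz and 868 MHz.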
A PDP model of the simultaneous perception of multiple objects
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
Supersymmetric U(1)' model with multiple dark matters
International Nuclear Information System (INIS)
Hur, Taeil; Lee, Hye-Sung; Nasri, Salah
2008-01-01
We consider a scenario where a supersymmetric model has multiple dark matter particles. Adding a U(1)' gauge symmetry is a well-motivated extension of the minimal supersymmetric standard model (MSSM). It can cure problems of the MSSM such as the μ problem or the proton decay problem caused by the high-dimensional lepton-number and baryon-number violating operators which R parity allows. An extra parity (U parity) may arise as a residual discrete symmetry after the U(1)' gauge symmetry is spontaneously broken. The lightest U-parity particle (LUP) is stable under the new parity, becoming a new dark matter candidate. Up to three massive particles can be stable in the presence of the R parity and the U parity. We numerically illustrate that multiple stable particles in our model can satisfy both the relic density and direct detection constraints, thus providing a specific scenario where a supersymmetric model has well-motivated multiple dark matter candidates consistent with experimental constraints. The scenario provides new possibilities in present and upcoming dark matter searches in direct detection and collider experiments.
Challenges in LCA modelling of multiple loops for aluminium cans
DEFF Research Database (Denmark)
Niero, Monia; Olsen, Stig Irving
This study considered the case of closed-loop recycling for aluminium cans, where body and lid are different alloys, and discussed the abovementioned challenge. The Life Cycle Inventory (LCI) modelling of aluminium processes is traditionally based on a pure aluminium flow, therefore neglecting the presence of alloying elements. We included the effect of alloying elements in the LCA modelling of aluminium can recycling. First, we performed a mass balance of the main alloying elements (Mn, Fe, Si, Cu) in aluminium can recycling at increasing levels of recycling rate. The analysis distinguished between different aluminium packaging scrap sources (i.e. used beverage cans and mixed aluminium packaging) to understand the limiting factors for multiple-loop aluminium can recycling. Secondly, we performed a comparative LCA of aluminium can production and recycling in multiple loops considering the two aluminium packaging scrap sources.
Dealing with Multiple Solutions in Structural Vector Autoregressive Models.
Beltz, Adriene M; Molenaar, Peter C M
2016-01-01
Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
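As a rough sketch of the kind of model and selection step involved, here is a plain lag-1 VAR fit with a held-out segment for comparing candidate solutions (not the uSEM/GIMME machinery itself):

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of a lag-1 VAR: x_t = A x_{t-1} + e_t.
    X has shape (T, k); returns the (k, k) coefficient matrix A."""
    Y, Z = X[1:], X[:-1]
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A.T

def holdout_error(X, A, split=0.8):
    """One-step-ahead mean squared prediction error on a held-out tail,
    usable as a cross-validation score for competing solutions."""
    k = int(len(X) * split)
    Y, Z = X[k+1:], X[k:-1]
    return float(np.mean((Y - Z @ A.T) ** 2))
```

When several solutions fit the training segment equally well, the one with the lowest held-out error is preferred, mirroring the cross-validation criterion mentioned in the abstract.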
Automatic Generation of 3D Building Models with Multiple Roofs
Institute of Scientific and Technical Information of China (English)
Kenichi Sugihara; Yoshitugu Hayashi
2008-01-01
Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we proposed a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.
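The final placement step can be sketched as follows for a single rectangle from the partition (a hypothetical simplification; the paper's partitioning scheme itself is not reproduced here):

```python
def building_from_rectangle(x0, y0, x1, y1, body_h, ridge_h):
    """Return 3D vertices for a box-shaped body plus a gabled roof
    whose ridge runs along the rectangle's longer axis."""
    base = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    verts = [(x, y, 0.0) for x, y in base]          # footprint corners
    verts += [(x, y, body_h) for x, y in base]      # eaves level
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    if (x1 - x0) >= (y1 - y0):                      # ridge along x axis
        ridge = [(x0, cy, body_h + ridge_h), (x1, cy, body_h + ridge_h)]
    else:                                           # ridge along y axis
        ridge = [(cx, y0, body_h + ridge_h), (cx, y1, body_h + ridge_h)]
    return verts + ridge
```

Repeating this for every rectangle in the partition yields the multi-roof model the abstract describes.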
A tactical supply chain planning model with multiple flexibility options
DEFF Research Database (Denmark)
Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph
2016-01-01
Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources, such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed…
Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation
Czech Academy of Sciences Publication Activity Database
Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.
2009-01-01
Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf
Feedback structure based entropy approach for multiple-model estimation
Institute of Scientific and Technical Information of China (English)
Shen-tu Han; Xue Anke; Guo Yunfei
2013-01-01
The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.
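A toy illustration of the entropy-based idea (not the MEVSMM algorithm itself): score a model-probability vector by its Shannon entropy and shrink the active model set by dropping unlikely models.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a model-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def prune_model_set(models, probs, threshold=0.05):
    """Drop unlikely models and renormalise the remaining probabilities,
    shrinking the active set as in adaptive variable-structure MM estimation."""
    kept = [(m, p) for m, p in zip(models, probs) if p >= threshold]
    total = sum(p for _, p in kept)
    return [m for m, _ in kept], [p / total for _, p in kept]
```

Low entropy signals that the filter has effectively committed to a few models, which is when pruning the sequence set is cheapest.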
Modelling of diffuse solar fraction with multiple predictors
Energy Technology Data Exchange (ETDEWEB)
Ridley, Barbara; Boland, John [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Lauret, Philippe [Laboratoire de Physique du Batiment et des Systemes, University of La Reunion, Reunion (France)
2010-02-15
For some locations both global and diffuse solar radiation are measured. However, for many locations, only global radiation is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from direct on some other plane using trigonometry, we need to have diffuse radiation on the horizontal plane available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia. Boland et al. developed a validated model for Australian conditions, and detailed our recent advances in developing the theoretical framework for the use of the logistic function instead of piecewise linear or simple nonlinear functions; this was the first step in identifying the means for developing a generic model for estimating diffuse from global and other predictors. We have developed a multiple predictor model, which is much simpler than previous models, and uses hourly clearness index, daily clearness index, solar altitude, apparent solar time and a measure of persistence of global radiation level as predictors. This model performs marginally better than currently used models for locations in the Northern Hemisphere and substantially better for Southern Hemisphere locations. We suggest it can be used as a universal model. (author)
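The logistic multi-predictor form described above can be sketched as follows; the coefficient values below are placeholders for illustration, not the fitted values from the paper:

```python
import math

def diffuse_fraction(kt, Kt, alt_deg, ast_hours, persistence,
                     b=(-5.38, 6.63, 0.006, -0.007, 1.75, 1.31)):
    """Logistic model of the diffuse fraction d from multiple predictors:
    d = 1 / (1 + exp(b0 + b1*kt + b2*ast + b3*alt + b4*Kt + b5*psi)),
    where kt = hourly clearness index, Kt = daily clearness index,
    alt_deg = solar altitude, ast_hours = apparent solar time, and
    psi = persistence. Coefficients here are illustrative only."""
    z = (b[0] + b[1] * kt + b[2] * ast_hours + b[3] * alt_deg
         + b[4] * Kt + b[5] * persistence)
    return 1.0 / (1.0 + math.exp(z))
```

The logistic form guarantees the predicted fraction stays in (0, 1) and falls smoothly as the sky clears (rising clearness index), which piecewise-linear models do not enforce.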
Rapidity correlations at fixed multiplicity in cluster emission models
Berger, M C
1975-01-01
Rapidity correlations in the central region among hadrons produced in proton-proton collisions of fixed final-state multiplicity n at NAL and ISR energies are investigated in a two-step framework in which clusters of hadrons are emitted essentially independently, via a multiperipheral-like model, and decay isotropically. For n ≳ ½⟨n⟩, these semi-inclusive distributions are controlled by the reaction mechanism which dominates production in the central region. Thus, data offer cleaner insight into the properties of this mechanism than can be obtained from fully inclusive spectra. A method of experimental analysis is suggested to facilitate the extraction of new dynamical information. It is shown that the n independence of the magnitude of semi-inclusive correlation functions reflects directly the structure of the internal cluster multiplicity distribution. This conclusion is independent of certain assumptions concerning the form of the single-cluster density in rapidity space. (23 refs.)
Multiplicative Attribute Graph Model of Real-World Networks
Energy Technology Data Exchange (ETDEWEB)
Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)
2010-10-20
Large-scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as fitting to real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either log-normal or power-law degree distribution depending on certain conditions.
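A minimal sampler for the MAG construction described above: attribute vectors are categorical, and the edge probability for a node pair is the product of per-attribute affinities (a sketch of the generative rule, not the authors' code):

```python
import random

def mag_graph(attrs, affinity, seed=0):
    """Sample an undirected Multiplicative Attribute Graph.

    attrs[i]    : tuple of categorical attribute values for node i
    affinity[k] : matrix; affinity[k][a][b] is the edge-affinity
                  contribution when the two nodes take values a, b
                  on attribute k. Edge prob = product over k."""
    rng = random.Random(seed)
    n = len(attrs)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = 1.0
            for k in range(len(attrs[i])):
                p *= affinity[k][attrs[i][k]][attrs[j][k]]
            if rng.random() < p:
                edges.add((i, j))
    return edges
```

Because probabilities multiply across attributes, a single low-affinity attribute pair is enough to make an edge unlikely, which is how attribute structure shapes the topology.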
Dynamic coordinated control laws in multiple agent models
International Nuclear Information System (INIS)
Morgan, David S.; Schwartz, Ira B.
2005-01-01
We present an active control scheme for a kinetic model of swarming. It has been shown previously that the global control scheme for the model, presented in [Systems Control Lett. 52 (2004) 25], gives rise to spontaneous collective organization of agents into a unified coherent swarm, via steering controls and long-range attractive and short-range repulsive interactions. We extend these results by presenting control laws whereby a single swarm is broken into independently functioning subswarm clusters. The transition between one coordinated swarm and multiple clustered subswarms is managed simply with a homotopy parameter. Additionally, we present, as an alternate formulation, a local control law for the same model, which implements dynamic barrier-avoidance behavior and in which swarm coherence emerges spontaneously.
Laplace transform analysis of a multiplicative asset transfer model
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
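The transfer rule itself is simple to simulate; here is an agent-based sketch (precisely the approach the Laplace-transform analysis avoids, but useful for intuition about the model):

```python
import random

def simulate_transfers(wealth, f, steps, seed=0):
    """Repeatedly move a fraction f of a randomly chosen giver's
    wealth to a randomly chosen receiver. Total wealth is conserved."""
    rng = random.Random(seed)
    w = list(wealth)
    n = len(w)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            amount = f * w[i]   # multiplicative: proportional to giver's wealth
            w[i] -= amount
            w[j] += amount
    return w
```

Because the amount moved is proportional to the giver's wealth, the process is multiplicative, and its steady state depends on f; the paper studies that dependence analytically, without the N-agent cost paid here.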
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions
Najibi, Seyed Morteza
2017-02-08
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
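A common way to respect the circular nature of backbone angles is a von Mises kernel density estimate; here is a one-dimensional sketch (the paper's collective trigonometric-spline estimator is more elaborate):

```python
import numpy as np

def vonmises_kde(samples, grid, kappa=20.0):
    """Circular kernel density estimate for angular data (radians).
    A von Mises kernel is used so the density wraps correctly at +/- pi,
    unlike a Gaussian kernel on the line."""
    samples = np.asarray(samples)[:, None]
    grid = np.asarray(grid)[None, :]
    kernels = np.exp(kappa * np.cos(grid - samples))
    kernels /= 2.0 * np.pi * np.i0(kappa)   # von Mises normalisation
    return kernels.mean(axis=0)
```

For Ramachandran data the same idea extends to a bivariate kernel over the (phi, psi) torus.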
A multiple relevance feedback strategy with positive and negative models.
Directory of Open Access Journals (Sweden)
Yunlong Ma
Full Text Available A commonly used strategy to improve search accuracy is through feedback techniques. Most existing work on feedback relies on positive information and has been extensively studied in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from a few positive documents. Therefore, the positive feedback strategy is incapable of improving retrieval in this situation. Conversely, there is a relatively large number of negative documents at the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful strategy for adapting to this scenario. In this paper, we consider a scenario where the search results are so poor that there are at most three relevant documents in the top twenty documents. We then conduct a novel study of multiple strategies for relevance feedback, using both positive and negative examples from the first-pass retrieval, to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple-model feedback method is generally more effective than both the baseline method and methods using only a positive or negative model.
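For intuition, a classic Rocchio-style update captures the positive-plus-negative idea in vector form (the paper works with language models rather than this sketch):

```python
def rocchio(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style feedback: move the query vector toward the centroid
    of positive documents and away from the centroid of negative ones.
    Vectors are dicts mapping term -> weight; negative weights are dropped."""
    terms = set(query)
    for d in positives + negatives:
        terms |= set(d)
    new = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in positives) / max(len(positives), 1)
        neg = sum(d.get(t, 0.0) for d in negatives) / max(len(negatives), 1)
        w = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        if w > 0:
            new[t] = w
    return new
```

The negative term is what helps on difficult queries: terms that dominate the (mostly irrelevant) top results get pushed out of the expanded query even when few positive documents exist.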
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
Many-electron model for multiple ionization in atomic collisions
International Nuclear Information System (INIS)
Archubi, C D; Montanari, C C; Miraglia, J E
2007-01-01
We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented showing a very good agreement with experimental data.
Many-electron model for multiple ionization in atomic collisions
Energy Technology Data Exchange (ETDEWEB)
Archubi, C D [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Montanari, C C [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Miraglia, J E [Instituto de AstronomIa y Fisica del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina)
2007-03-14
We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented showing a very good agreement with experimental data.
Model selection in Bayesian segmentation of multiple DNA alignments.
Oldmeadow, Christopher; Keith, Jonathan M
2011-03-01
The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
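The DIC approximation used for choosing the number of modes can be sketched from posterior samples as follows (generic form, not the paper's specific changepoint model):

```python
import numpy as np

def dic(log_lik_fn, samples, data):
    """Deviance information criterion from posterior samples:
    DIC = Dbar + pD, where Dbar is the posterior mean deviance and
    pD = Dbar - D(theta_bar) is the effective number of parameters."""
    devs = np.array([-2.0 * log_lik_fn(th, data) for th in samples])
    d_bar = devs.mean()
    d_hat = -2.0 * log_lik_fn(np.mean(samples, axis=0), data)
    return d_bar + (d_bar - d_hat)
```

Models with more mixture components fit better (lower deviance) but pay a larger pD penalty; the selected number of modes minimizes the total.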
A multiple-location model for natural gas forward curves
International Nuclear Information System (INIS)
Buffington, J.C.
1999-06-01
This thesis presents an approach to financial modelling of natural gas in which connections between locations are incorporated and the complexities of forward curves in natural gas are considered. Apart from electricity, natural gas is the most volatile commodity traded. Its price is often dependent on the weather, and price shocks can be felt across several geographic locations. The modelling approach incorporates multiple risk factors that correspond to various locations. One of the objectives was to determine whether the model could be used for closed-form option prices. It was suggested that an adequate model for natural gas must consider 3 statistical properties: volatility term structure, backwardation and contango, and stochastic basis. Data from gas forward prices at Chicago, NYMEX and AECO were empirically tested to better understand these 3 statistical properties at each location and to verify whether the proposed model truly incorporates these properties. In addition, this study examined the time series properties of the difference between two locations (the basis) and determined that these empirical properties are consistent with the model properties. Closed-form option solutions were also developed for call options on forward contracts and call options on the forward basis. The options were calibrated and compared to other models. The proposed model is capable of pricing options, but the prices derived did not pass the test of economic reasonableness. However, the model was able to capture the effect of transportation as well as aspects of seasonality, which is a benefit over other existing models. It was determined that modifications will be needed regarding the estimation of the convenience yields. 57 refs., 2 tabs., 7 figs., 1 appendix
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
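A multiplicative interevent-time process of this kind has roughly the following shape; the exact form follows the generic structure tau_{k+1} = tau_k + gamma*tau_k^(2*mu-1) + sigma*tau_k^mu*eps_k, and all parameter values here are illustrative, not taken from the paper:

```python
import random

def interevent_times(n, gamma=0.0004, sigma=0.02, mu=0.5,
                     tau_min=1e-6, tau_max=1.0, seed=0):
    """Simulate a multiplicative stochastic process for the interevent
    times tau_k of a point process, with bounds keeping tau positive
    and finite (a stand-in for reflecting boundary conditions)."""
    rng = random.Random(seed)
    tau, out = 0.01, []
    for _ in range(n):
        drift = gamma * tau ** (2 * mu - 1)
        noise = sigma * tau ** mu * rng.gauss(0.0, 1.0)
        tau = min(max(tau + drift + noise, tau_min), tau_max)
        out.append(tau)
    return out
```

Because the noise amplitude scales with a power of tau itself, short interevent times cluster, producing the bursty event counts and power-law spectra the abstract describes.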
Guideline validation in multiple trauma care through business process modeling.
Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen
2003-07-01
Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available paper-based as a set of flowcharts. This format is appropriate for the use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based into an electronic format and analyzed the structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process modeling tools, which check the content in comparison to a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development a representation independent of specific applications or specific provider is necessary. Then, clinical guidelines could be used for eLearning, process optimization and workflow management additionally.
Rank-based model selection for multiple ions quantum tomography
International Nuclear Information System (INIS)
Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian
2012-01-01
The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
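The AIC/BIC comparison across fixed-rank models reduces to a simple penalized-likelihood computation; a generic sketch with hypothetical fit values (the per-rank log-likelihoods and parameter counts would come from the tomography fits):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L,
    where n is the number of observations."""
    return k * math.log(n) - 2 * log_lik

def select_rank(fits, n):
    """Pick the rank minimising BIC, given (rank, log_lik, n_params) tuples."""
    return min(fits, key=lambda f: bic(f[1], f[2], n))[0]
```

Higher-rank models always fit at least as well, so the penalty term is what lets the criterion stop at a parsimonious rank.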
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Directory of Open Access Journals (Sweden)
Chung-Min Wu
2013-01-01
Full Text Available Sustainable supplier selection is a vital part of managing a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the extensive calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The combination of the fuzzy Delphi method, ANP, and TOPSIS, the resulting MCDM model for supplier selection, and its application to a real case are the unique features of this study.
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-04-24
Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of lateral resolution directly results in the defocusing effect outside the confocal region and restricts current OCT probes to small numerical aperture (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous papers have adopted the first Born approximation with the assumption of small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order with smooth phase perturbation assumption benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.
Resveratrol Neuroprotection in a Chronic Mouse Model of Multiple Sclerosis
Directory of Open Access Journals (Sweden)
Zoe Fonseca-Kelly
2012-05-01
Full Text Available Resveratrol is a naturally occurring polyphenol that activates SIRT1, an NAD-dependent deacetylase. SRT501, a pharmaceutical formulation of resveratrol with enhanced systemic absorption, prevents neuronal loss without suppressing inflammation in mice with relapsing experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis. In contrast, resveratrol has been reported to suppress inflammation in chronic EAE, although neuroprotective effects were not evaluated. The current studies examine potential neuroprotective and immunomodulatory effects of resveratrol in chronic EAE induced by immunization with myelin oligodendroglial glycoprotein peptide in C57BL/6 mice. Effects of two distinct formulations of resveratrol administered orally each day were compared. Resveratrol delayed the onset of EAE compared to vehicle-treated EAE mice, but did not prevent or alter the phenotype of inflammation in spinal cords or optic nerves. Significant neuroprotective effects were observed, with higher numbers of retinal ganglion cells found in eyes of resveratrol-treated EAE mice with optic nerve inflammation. Results demonstrate that resveratrol prevents neuronal loss in this chronic demyelinating disease model, similar to its effects in relapsing EAE. Differences in immunosuppression compared with prior studies suggest that immunomodulatory effects may be limited and may depend on specific immunization parameters or timing of treatment. Importantly, neuroprotective effects can occur without immunosuppression, suggesting a potential additive benefit of resveratrol in combination with anti-inflammatory therapies for multiple sclerosis.
Model for CO2 leakage including multiple geological layers and multiple leaky wells.
Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan
2009-02-01
Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Abandoned wells from over a century of oil and gas exploration and production have previously been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) with oil- and gas-producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.
A Multiple Indicators Multiple Causes (MIMIC) model of internal barriers to drug treatment in China.
Qi, Chang; Kelly, Brian C; Liao, Yanhui; He, Haoyu; Luo, Tao; Deng, Huiqiong; Liu, Tieqiao; Hao, Wei; Wang, Jichuan
2015-03-01
Although evidence exists for distinct barriers to drug abuse treatment (BDATs), investigations of their inter-relationships and of the effect of individual characteristics on the barrier factors have been sparse, especially in China. A Multiple Indicators Multiple Causes (MIMIC) model is applied for this purpose. A sample of 262 drug users was recruited from three drug rehabilitation centers in Hunan Province, China. We applied a MIMIC approach to investigate the effect of gender, age, marital status, education, primary substance use, duration of primary drug use, and drug treatment experience on the internal barrier factors: absence of problem (AP), negative social support (NSS), fear of treatment (FT), and privacy concerns (PC). Drug users with different characteristics were found to report different internal barrier factors. Younger participants were more likely to report NSS (-0.19, p=0.038) and PC (-0.31, p<0.001). Compared to other drug users, ice users were more likely to report AP (0.44, p<0.001) and NSS (0.25, p=0.010). Drug treatment experience was related to AP (0.20, p=0.012). In addition, differential item functioning (DIF) occurred in three items when participants were grouped by duration of drug use, ice use, or marital status. Individual characteristics had significant effects on internal barriers to drug treatment. On this basis, the BDATs perceived by different individuals could be assessed before tactics are employed to remove perceived barriers to drug treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Multiple-relaxation-time lattice Boltzmann model for compressible fluids
International Nuclear Information System (INIS)
Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun
2011-01-01
We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to the group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.
Optimal Retail Price Model for Partial Consignment to Multiple Retailers
Directory of Open Access Journals (Sweden)
Po-Yu Chen
2017-01-01
Full Text Available This paper investigates the product pricing decision-making problem under a consignment stock policy in a two-level supply chain composed of one supplier and multiple retailers. The effects of the supplier’s wholesale prices and its partial absorption of inventory costs on the retail prices of retailers with different market shares are investigated. In the partial product consignment model this paper proposes, the seller and the retailers each absorb part of the inventory costs. The model also yields general solutions for complete product consignment and for the traditional policy that adopts no product consignment; in other words, both the complete-consignment and non-consignment models are special cases of the proposed model. Research results indicated that the optimal retail price must be between 1/2 (50%) and 2/3 (66.67%) times the upper limit of the gross profit. This study also explored the influence of parameter variations on the optimal retail price in the model.
2016-06-01
This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...
Using hidden Markov models to align multiple sequences.
Mount, David W
2009-07-01
A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
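The scoring procedure described, multiplying state emission and transition probabilities along a path, can be sketched for a toy profile HMM. All numbers below are invented for illustration, only the match-state path is scored, and the work is done in log space as is usual in practice:

```python
import math

# Toy profile HMM with three match states (hypothetical probabilities).
# emissions[i][symbol] = P(symbol | match state i), one column per alignment position
emissions = [
    {"A": 0.8, "C": 0.1, "G": 0.05, "T": 0.05},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.05, "C": 0.05, "G": 0.8, "T": 0.1},
]
# transitions[i] = P(entering match state i from the previous state)
transitions = [0.9, 0.9, 0.9]

def path_log_prob(seq):
    """Log-probability of emitting seq along the match-state path M1 -> M2 -> M3."""
    logp = 0.0
    for i, sym in enumerate(seq):
        logp += math.log(transitions[i])     # transition probability
        logp += math.log(emissions[i][sym])  # state emission probability
    return logp

print(path_log_prob("ACG"))  # consensus-like sequence: relatively high score
print(path_log_prob("TTT"))  # poor match: much lower log-probability
```

A full HMM scorer would also sum over insert and delete states (the forward algorithm); this sketch shows only the single-path product the text describes.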
Analysis and application of opinion model with multiple topic interactions.
Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng
2017-08-01
To reveal heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact for a topic, their actions are introduced to opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods. Meanwhile, our model has much smaller computational complexity. This paper provides a demonstration for possible practical applications of theoretical opinion dynamics.
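A minimal sketch of the update rule described above, with all parameter values assumed rather than taken from the paper: opinions and topic features are random vectors, an agent's action toward a topic is the opinion-feature inner product, and a pairwise bounded-confidence compromise is applied only when the two actions are close:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, eps, mu, steps = 50, 3, 0.6, 0.3, 5000  # assumed values, not the paper's

opinions = rng.uniform(-1, 1, (N, D))  # D-dimensional opinion per agent
topics = rng.uniform(-1, 1, (20, D))   # feature vector per topic

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)
    t = topics[rng.integers(len(topics))]
    ai, aj = opinions[i] @ t, opinions[j] @ t  # actions toward the chosen topic
    if abs(ai - aj) < eps:                     # bounded confidence on actions
        oi = opinions[i].copy()
        opinions[i] += mu * (opinions[j] - oi)  # symmetric compromise update
        opinions[j] += mu * (oi - opinions[j])

spread = opinions.std(axis=0).mean()  # residual disagreement per dimension
print(spread)
```

Sweeping eps and D in a loop of this kind is one way to probe the consensus transition the abstract reports.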
Direction of Effects in Multiple Linear Regression Models.
Wiedermann, Wolfgang; von Eye, Alexander
2015-01-01
Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
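The core idea, that residual skewness differs between the correctly specified and the reversed regression when the true predictor is non-normal, can be demonstrated with simulated data (a sketch of the principle, not the authors' inference procedures):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 10_000
x = rng.exponential(1.0, n)            # skewed "cause"
y = 0.8 * x + rng.normal(0.0, 1.0, n)  # normal error: x -> y is the true direction

def resid_skew(pred, out):
    """Absolute skewness of OLS residuals from regressing `out` on `pred`."""
    slope, intercept = np.polyfit(pred, out, 1)
    return abs(stats.skew(out - (slope * pred + intercept)))

skew_true = resid_skew(x, y)  # correct model: residuals are the normal errors
skew_flip = resid_skew(y, x)  # reversed model: residuals inherit x's skewness
print(skew_true, skew_flip)
```

The model with the less-skewed residuals is favored as the causally plausible direction, which is the decision rule built from third central moments in the paper.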
Investigating multiple solutions in the constrained minimal supersymmetric standard model
Energy Technology Data Exchange (ETDEWEB)
Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)
2014-02-07
Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.
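The numerical pathology described, fixed-point iteration diverging near a repelling solution that a root-finding (shooting-style) formulation recovers, can be seen in a one-dimensional toy example (purely illustrative; nothing here solves actual renormalization-group equations):

```python
from scipy.optimize import brentq

g = lambda x: x * x  # toy map with fixed points at x = 0 and x = 1

# Near x* = 1 we have |g'(1)| = 2 > 1, so the fixed point is repelling and
# naive fixed-point iteration runs away from it instead of converging.
x = 1.001
for _ in range(15):
    x = g(x)
print(x)  # far from 1: the iteration has diverged

# Recasting the same problem as root finding on f(x) = g(x) - x is stable.
root = brentq(lambda x: g(x) - x, 0.5, 1.5)
print(root)
```

This mirrors why spectrum generators built on fixed-point iteration could miss solutions that the shooting method finds.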
Characterising and modelling regolith stratigraphy using multiple geophysical techniques
Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.
2013-12-01
Regolith is the weathered, typically mineral-rich layer from fresh bedrock to land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many 10's of metres deep. Comparatively little is known about regolith, and critical questions remain regarding composition and characteristics - especially deeper where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental scales. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo
Interaction of multiple biomimetic antimicrobial polymers with model bacterial membranes
Energy Technology Data Exchange (ETDEWEB)
Baul, Upayan, E-mail: upayanb@imsc.res.in; Vemparala, Satyavani, E-mail: vani@imsc.res.in [The Institute of Mathematical Sciences, C.I.T. Campus, Taramani, Chennai 600113 (India); Kuroda, Kenichi, E-mail: kkuroda@umich.edu [Department of Biologic and Materials Sciences, University of Michigan School of Dentistry, Ann Arbor, Michigan 48109 (United States)
2014-08-28
Using atomistic molecular dynamics simulations, the interaction of multiple synthetic random copolymers based on methacrylates with prototypical bacterial membranes is investigated. The simulations show that the cationic polymers form a micellar aggregate in the water phase and that the aggregate, when interacting with the bacterial membrane, induces the oppositely charged anionic lipid molecules to form clusters and enhances the ordering of lipid chains. The model bacterial membrane consequently develops lateral inhomogeneity in its thickness profile compared to the polymer-free system. The individual polymers in the aggregate are released into the bacterial membrane in a phased manner, and the simulations suggest that the most probable location of the partitioned polymers is near the 1-palmitoyl-2-oleoyl-phosphatidylglycerol (POPG) clusters. The partitioned polymers preferentially adopt facially amphiphilic conformations at the lipid-water interface, despite lacking intrinsic secondary structures such as the α-helix or β-sheet found in naturally occurring antimicrobial peptides.
The intergenerational multiple deficit model and the case of dyslexia
Directory of Open Access Journals (Sweden)
Elsje van Bergen
2014-06-01
Full Text Available Which children go on to develop dyslexia? Since dyslexia has a multifactorial aetiology, this question can be restated as: what are the factors that put children at high risk of developing dyslexia? It is argued that a useful theoretical framework to address this question is Pennington’s (2006) multiple deficit model (MDM). This model replaces models that attribute dyslexia to a single underlying cause. Subsequently, the generalist genes hypothesis for learning (dis)abilities (Plomin & Kovas, 2005) is described and integrated with the MDM. Finally, findings are presented from a longitudinal study of children at family risk for dyslexia. Such studies can contribute to testing and specifying the MDM. In this study, risk factors at both the child and family level were investigated. This led to the proposed intergenerational MDM, in which both parents confer liability via intertwined genetic and environmental pathways. Future scientific directions are discussed to investigate parent-offspring resemblance and transmission patterns, which will shed new light on disorder aetiology.
An Advanced N-body Model for Interacting Multiple Stellar Systems
Energy Technology Data Exchange (ETDEWEB)
Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)
2017-06-01
We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one’s disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).
Negative binomial models for abundance estimation of multiple closed populations
Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.
2001-01-01
Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem for each year from 1986 to 1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
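The central step, fitting a negative binomial to heterogeneous count data by maximum likelihood, can be sketched as follows (synthetic counts and SciPy's generic optimizer; not the authors' multi-year, multi-model procedure):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
# synthetic sighting frequencies, overdispersed relative to a Poisson
counts = stats.nbinom.rvs(5, 0.4, size=200, random_state=rng)

def nll(params):
    """Negative log-likelihood of the NBD with size r and success probability p."""
    r, p = params
    if r <= 0 or not 0 < p < 1:
        return np.inf
    return -np.sum(stats.nbinom.logpmf(counts, r, p))

res = optimize.minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
r_hat, p_hat = res.x
fitted_mean = r_hat * (1 - p_hat) / p_hat  # NBD mean in this parameterization
print(r_hat, p_hat, fitted_mean)
```

Competing NBD variants (for example, heterogeneity held constant across years versus year-specific) can then be compared by AIC, as in the grizzly bear analysis.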
A diagnostic tree model for polytomous responses with multiple strategies.
Ma, Wenchao
2018-04-23
Constructed-response items have been shown to be appropriate for cognitively diagnostic assessments because students' problem-solving procedures can be observed, providing direct evidence for making inferences about their proficiency. However, multiple strategies used by students make item scoring and psychometric analyses challenging. This study introduces the so-called two-digit scoring scheme into diagnostic assessments to record both students' partial credits and their strategies. This study also proposes a diagnostic tree model (DTM) by integrating the cognitive diagnosis models with the tree model to analyse the items scored using the two-digit rubrics. Both convergent and divergent tree structures are considered to accommodate various scoring rules. The MMLE/EM algorithm is used for item parameter estimation of the DTM, and has been shown to provide good parameter recovery under varied conditions in a simulation study. A set of data from TIMSS 2007 mathematics assessment is analysed to illustrate the use of the two-digit scoring scheme and the DTM. © 2018 The British Psychological Society.
A minimal model for multiple epidemics and immunity spreading.
Directory of Open Access Journals (Sweden)
Kim Sneppen
Full Text Available Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
A minimal model for multiple epidemics and immunity spreading.
Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan
2010-10-18
Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
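A bare-bones simulation in the spirit of the model, with mutual exclusion (one infection per host at a time), lasting immunity after recovery, and new diseases seeded at a small mutation rate, might look like this (all rates and sizes are arbitrary choices for illustration, not the paper's values):

```python
import random

random.seed(4)
N, steps, mu = 200, 20000, 0.0005   # hosts, updates, mutation rate (illustrative)
state = [None] * N                  # current infection; None means healthy
immune = [set() for _ in range(N)]  # diseases each host has recovered from
disease_count = 0                   # total number of diseases ever emerged

for _ in range(steps):
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N  # a neighbour on a ring lattice
    if state[i] is None:
        if random.random() < mu:          # a brand-new disease emerges
            state[i] = disease_count
            disease_count += 1
    else:
        d = state[i]
        # transmit only if the neighbour is healthy (mutual exclusion)
        # and carries no immunity to this particular disease
        if state[j] is None and d not in immune[j]:
            state[j] = d
        if random.random() < 0.1:         # recovery grants lasting immunity
            immune[i].add(d)
            state[i] = None

print(disease_count, sum(len(s) for s in immune))
```

Tracking `immune` over time exposes the trails of immunization that successive infection waves leave behind.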
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of the available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage over a period of 17 years, between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable-selection procedure for fitting the best model forms. The correlations of evaporation with temperature and with relative humidity are also transformed, using power and exponential functions respectively, in order to linearize the existing curvilinear patterns in the data. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984
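The linearization strategy described, a power function for temperature and an exponential for relative humidity, amounts to regressing log-evaporation on log-temperature and raw humidity. A sketch with synthetic data (all coefficients and ranges are invented, not the Kuwait fits):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
T = rng.uniform(15.0, 48.0, n)   # air temperature (deg C), hypothetical range
RH = rng.uniform(5.0, 60.0, n)   # relative humidity (%)
W = rng.uniform(1.0, 10.0, n)    # wind speed (m/s)
# synthetic pan evaporation: power law in T, exponential in RH and W
E = 0.05 * T**1.6 * np.exp(-0.01 * RH + 0.03 * W) * rng.lognormal(0.0, 0.05, n)

# linearize: ln E = b0 + b1*ln T + b2*RH + b3*W, then fit by least squares
X = np.column_stack([np.ones(n), np.log(T), RH, W])
beta, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
print(beta)  # b1 and b2 should recover the power and exponential coefficients
```

The same design matrix extends naturally to the paper's variable-selection step by dropping or adding columns and comparing fits.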
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
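One widely used remedy in this setting is mean-centering the component variables before forming the product term; whether or not this matches the paper's exact proposal, its effect on the collinearity between a predictor and its interaction term is easy to demonstrate with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(5.0, 1.0, n)  # predictors with nonzero means, independent
z = rng.normal(3.0, 1.0, n)

raw_corr = np.corrcoef(x, x * z)[0, 1]          # x vs the raw product term
xc, zc = x - x.mean(), z - z.mean()
centered_corr = np.corrcoef(xc, xc * zc)[0, 1]  # after mean-centering
print(raw_corr, centered_corr)
```

With nonzero means, the raw product term is strongly correlated with its components; centering removes that artificial correlation without changing the interaction effect being estimated.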
System health monitoring using multiple-model adaptive estimation techniques
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis, by tracking system parameters concurrently with state estimates, is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the number of parameter dimensions grows, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
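The LHS idea, keeping the model count fixed regardless of parameter dimension while still covering every one-dimensional stratum, can be sketched with SciPy's quasi-Monte Carlo module (the parameter bounds below are hypothetical, not GRAPE's):

```python
import numpy as np
from scipy.stats import qmc

dim, n_models = 4, 16
sampler = qmc.LatinHypercube(d=dim, seed=7)
unit = sampler.random(n=n_models)  # points in the unit hypercube [0, 1)^dim

# scale to hypothetical parameter bounds for a bank of parallel filter models
lower = np.array([0.1, 0.0, 1.0, -5.0])
upper = np.array([2.0, 1.0, 10.0, 5.0])
samples = qmc.scale(unit, lower, upper)
print(samples.shape)  # one parameter vector per filter model

# LHS guarantee: in every dimension, each of the n_models equal-width strata
# holds exactly one sample, so coverage does not degrade as dim grows
strata = np.sort((unit * n_models).astype(int), axis=0)
print((strata == np.arange(n_models)[:, None]).all())
```

By contrast, a full grid with 16 strata per dimension (SGBS-style) would require 16**4 = 65536 models in four dimensions, which is the growth LHS avoids.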
Shared mental models of integrated care: aligning multiple stakeholder perspectives.
Evans, Jenna M; Baker, G Ross
2012-01-01
Health service organizations and professionals are under increasing pressure to work together to deliver integrated patient care. A common understanding of integration strategies may facilitate the delivery of integrated care across inter-organizational and inter-professional boundaries. This paper aims to build a framework for exploring and potentially aligning multiple stakeholder perspectives of systems integration. The authors draw from the literature on shared mental models, strategic management and change, framing, stakeholder management, and systems theory to develop a new construct, Mental Models of Integrated Care (MMIC), which consists of three types of mental models, i.e. integration-task, system-role, and integration-belief. The MMIC construct encompasses many of the known barriers and enablers to integrating care while also providing a comprehensive, theory-based framework of psychological factors that may influence inter-organizational and inter-professional relations. While the existing literature on integration focuses on optimizing structures and processes, the MMIC construct emphasizes the convergence and divergence of stakeholders' knowledge and beliefs, and how these underlying cognitions influence interactions (or lack thereof) across the continuum of care. MMIC may help to: explain what differentiates effective from ineffective integration initiatives; determine system readiness to integrate; diagnose integration problems; and develop interventions for enhancing integrative processes and ultimately the delivery of integrated care. Global interest and ongoing challenges in integrating care underline the need for research on the mental models that characterize the behaviors of actors within health systems; the proposed framework offers a starting point for applying a cognitive perspective to health systems integration.
Eye Movement Abnormalities in Multiple Sclerosis: Pathogenesis, Modeling, and Treatment
Directory of Open Access Journals (Sweden)
Alessandro Serra
2018-02-01
Full Text Available Multiple sclerosis (MS) commonly causes eye movement abnormalities that may have a significant impact on patients' disability. Inflammatory demyelinating lesions, especially those occurring in the posterior fossa, result in a wide range of disorders, among the most common of which are acquired pendular nystagmus (APN) and internuclear ophthalmoplegia (INO). As the control of eye movements is well understood in terms of its anatomical substrate and underlying physiological network, studying ocular motor abnormalities in MS provides a unique opportunity to gain insight into mechanisms of disease. Quantitative measurement and modeling of eye movement disorders, such as INO, may lead to a better understanding of common symptoms encountered in MS, such as Uhthoff's phenomenon and fatigue. In turn, the pathophysiology of a range of eye movement abnormalities, such as APN, has been clarified by correlating experimental models with lesion localization by neuroimaging in MS. Eye movement disorders have the potential to be used as structural and functional biomarkers of early cognitive deficit, possibly helping to assess disease status and progression, and to serve as a platform and functional outcome for testing novel therapeutic agents for MS. Knowledge of neuropharmacology applied to eye movement dysfunction has guided the testing and use of a number of pharmacological agents to treat eye movement disorders found in MS, such as APN and other forms of central nystagmus.
Hoang, Triem T.; OConnell, Tamara; Ku, Jentung
2004-01-01
Loop Heat Pipes (LHPs) have proven themselves as reliable and robust heat transport devices for spacecraft thermal control systems. So far, the LHPs in Earth-orbiting satellites have performed very well, as expected. Conventional LHPs usually consist of a single capillary pump for heat acquisition and a single condenser for heat rejection. Multiple-pump/multiple-condenser LHPs have been shown to function very well in ground testing. Nevertheless, the test results of a dual pump/condenser LHP also revealed that it behaved in a complicated manner due to the interaction between the pumps and condensers. Needless to say, more research is needed before they are ready for 0-g deployment. One research area that perhaps compels immediate attention is the analytical modeling of LHPs, particularly of transient phenomena. Modeling a single-pump/single-condenser LHP is difficult enough; only a handful of computer codes are available for steady-state and transient simulations of conventional LHPs. No previous effort has been made to develop an analytical model (or even a complete theory) to predict the operational behavior of multiple-pump/multiple-condenser LHP systems. The current research project offers a basic theory of multiple-pump/multiple-condenser LHP operation. From it, a computer code was developed to predict the LHP saturation temperature in accordance with the system operating and environmental conditions.
Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.
2011-01-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models.
Stabilization of multiple rib fractures in a canine model.
Huang, Ke-Nan; Xu, Zhi-Fei; Sun, Ju-Xian; Ding, Xin-Yu; Wu, Bin; Li, Wei; Qin, Xiong; Tang, Hua
2014-12-01
Operative stabilization is frequently used in the clinical treatment of multiple rib fractures (MRF); however, no ideal material exists for this fixation. This study investigates a newly developed biodegradable plate system for the stabilization of MRF. Silk-fiber-reinforced polycaprolactone (SF/PCL) plates were developed for rib fracture stabilization and studied using a canine flail chest model. Adult mongrel dogs were divided into three groups: one group received the SF/PCL plates, one group received standard clinical steel plates, and the final group did not undergo operative fracture stabilization (n = 6 for each group). Radiographic, mechanical, and histologic examinations were performed to evaluate the effectiveness of the biodegradable material for the stabilization of the rib fractures. No nonunions and no infections were found when using SF/PCL plates. The fracture sites collapsed in the untreated control group, leading to obvious chest wall deformity not encountered in the two groups that underwent operative stabilization. Our experimental study shows that the SF/PCL plate has the biocompatibility and mechanical strength suitable for fixation of MRF and is potentially ideal for the treatment of these injuries. Copyright © 2014 Elsevier Inc. All rights reserved.
Multiplicative multifractal modeling and discrimination of human neuronal activity
International Nuclear Information System (INIS)
Zheng Yi; Gao Jianbo; Sanchez, Justin C.; Principe, Jose C.; Okun, Michael S.
2005-01-01
Understanding neuronal firing patterns is one of the most important problems in theoretical neuroscience. It is also very important for clinical neurosurgery. In this Letter, we introduce a computational procedure to examine whether neuronal firing recordings can be characterized by cascade multiplicative multifractals. By analyzing raw recording data as well as generated spike train data from 3 patients, collected in two brain areas, the globus pallidus externa (GPe) and the globus pallidus interna (GPi), we show that the neural firings are consistent with a multifractal process over a certain time scale range (t1, t2), where t1 is argued to be not smaller than the mean inter-spike interval of neuronal firings, while t2 may be related to the time that neuronal signals propagate in the major neural branching structures pertinent to GPi and GPe. The generalized dimension spectrum Dq effectively differentiates the two brain areas, both intra- and inter-patient. For distinguishing between GPe and GPi, it is further shown that the cascade model is more effective than the methods recently examined by Schiff et al. as well as Fano factor analysis. The methodology may therefore be useful in developing computer-aided tools to help clinicians perform precision neurosurgery in the operating room.
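The cascade idea can be sketched generically (this is an illustration, not the authors' code): build a binomial multiplicative cascade and estimate the generalized dimension spectrum Dq from the scaling of the partition sum across dyadic box sizes. For this cascade the exact answer Dq = log2(p^q + (1-p)^q)/(1-q) is known, which makes the estimator easy to check.

```python
import numpy as np

def binomial_cascade(p, levels):
    """Multiplicative binomial cascade: a unit mass is split p / (1-p)
    at each of `levels` dyadic refinements (left half scaled by p)."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * p, mu * (1 - p)])
    return mu

def generalized_dimensions(mu, qs):
    """Estimate D_q from the slope of log2(sum mu_i^q) against log2(box
    size), coarse-graining the box measures pairwise between scales."""
    n_levels = int(np.log2(mu.size))
    Dq = []
    for q in qs:
        logs_eps, logs_chi = [], []
        m = mu.copy()
        for level in range(n_levels, 0, -1):
            logs_eps.append(-level)                  # log2 of box size 2^-level
            logs_chi.append(np.log2(np.sum(m ** q)))
            m = m.reshape(-1, 2).sum(axis=1)         # merge sibling boxes
        tau = np.polyfit(logs_eps, logs_chi, 1)[0]   # chi_q ~ eps^tau(q)
        Dq.append(tau / (q - 1))
    return np.array(Dq)

# Exact value for this cascade: D_q = log2(p^q + (1-p)^q) / (1 - q).
Dq = generalized_dimensions(binomial_cascade(0.7, 12), qs=[2.0, 4.0])
```

In the neuronal application the box measures would come from spike counts over dyadic time windows rather than a synthetic cascade; the Dq estimation step is the same.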
A Fuzzy Logic Framework for Integrating Multiple Learned Models
Energy Technology Data Exchange (ETDEWEB)
Hartog, Bobi Kai Den [Univ. of Nebraska, Lincoln, NE (United States)
1999-03-01
The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (SI), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. SI produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. Through fuzzy logic, AP and AQ combine the scenario weights, automatically learned empirical biases for each of the Methods in each scenario, and the Methods' results to determine results for a sample.
Multiple models guide strategies for agricultural nutrient reductions
Scavia, Donald; Kalcic, Margaret; Muenich, Rebecca Logsdon; Read, Jennifer; Aloysius, Noel; Bertani, Isabella; Boles, Chelsie; Confesor, Remegio; DePinto, Joseph; Gildow, Marie; Martin, Jay; Redder, Todd; Robertson, Dale M.; Sowa, Scott P.; Wang, Yu-Chen; Yen, Haw
2017-01-01
In response to degraded water quality, federal policy makers in the US and Canada called for a 40% reduction in phosphorus (P) loads to Lake Erie, and state and provincial policy makers in the Great Lakes region set a load-reduction target for the year 2025. Here, we configured five separate SWAT (US Department of Agriculture's Soil and Water Assessment Tool) models to assess load reduction strategies for the agriculturally dominated Maumee River watershed, the largest P source contributing to toxic algal blooms in Lake Erie. Although several potential pathways may achieve the target loads, our results show that any successful pathway will require large-scale implementation of multiple practices. For example, one successful pathway involved targeting 50% of row cropland that has the highest P loss in the watershed with a combination of three practices: subsurface application of P fertilizers, planting cereal rye as a winter cover crop, and installing buffer strips. Achieving these levels of implementation will require local, state/provincial, and federal agencies to collaborate with the private sector to set shared implementation goals and to demand innovation and honest assessments of water quality-related programs, policies, and partnerships.
Calcium Intervention Ameliorates Experimental Model of Multiple Sclerosis
Directory of Open Access Journals (Sweden)
Dariush Haghmorad
2014-05-01
Full Text Available Objective: Multiple sclerosis (MS) is the most common inflammatory disease of the CNS. Experimental autoimmune encephalomyelitis (EAE) is a widely used model for MS. In the present research, our aim was to test the therapeutic efficacy of calcium (Ca) in an experimental model of MS. Methods: In this study the experiment was performed on C57BL/6 mice. EAE was induced using 200 μg of the MOG35-55 peptide emulsified in CFA and injected subcutaneously on day 0 over two flank areas. In addition, 250 ng of pertussis toxin was injected on days 0 and 2. In the treatment group, 30 mg/kg Ca was administered intraperitoneally four times at regular 48-hour intervals. The mice were sacrificed 21 days after EAE induction and blood samples were taken from their hearts. The brains of the mice were removed for histological analysis and their isolated splenocytes were cultured. Results: Our results showed that treatment with Ca caused a significant reduction in the severity of EAE. Histological analysis indicated that there were no plaques in brain sections of the Ca-treated group of mice, whereas 4 ± 1 plaques were detected in brain sections of controls. The density of mononuclear infiltration in the CNS of Ca-treated mice was lower than in controls. The serum level of nitric oxide in the treatment group was lower than in the control group, but the difference was not significant. Moreover, the levels of IFN-γ in the cell culture supernatant of splenocytes of treated mice were significantly lower than in the control group. Conclusion: The data indicate that Ca intervention can effectively attenuate EAE progression.
A flexible additive-multiplicative hazard model
DEFF Research Database (Denmark)
Martinussen, T.; Scheike, T. H.
2002-01-01
Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect
Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro
2014-01-01
In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…
International Nuclear Information System (INIS)
Levchenko, B.B.; Nikolaev, N.N.
1985-01-01
In the framework of the additive quark model of multiple production on nuclei, we calculate the multiplicity distributions of secondary particles and the correlations between secondary particles in πA and pA interactions with heavy nuclei. We show that intranuclear cascades are responsible for up to 50% of the nuclear increase of the multiplicity of fast particles. We analyze the sensitivity of the multiplicities and their correlations to the choice of the quark-hadronization function. We show that, to good accuracy, the yield of relativistic secondary particles from heavy and intermediate nuclei depends only on the number N_p of protons knocked out of the nucleus, and not on the mass number of the nucleus (N_p scaling)
Comparative study between a QCD inspired model and a multiple diffraction model
International Nuclear Information System (INIS)
Luna, E.G.S.; Martini, A.F.; Menon, M.J.
2003-01-01
A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a subject closely related to students' daily experience. Before studying it formally in class, students therefore already have a visualization of, and prior knowledge about, natural phenomena, and can extend these on their own. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with, and are built on, the right concepts. A previous study held in MAN 1 Muna found that teachers did not pay attention to students' mental models in the learning process; as a consequence, the learning process did not try to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. The study uses a pre-experimental, one-group pretest-posttest design, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection uses a problem-solving test on the concept of the kinetic theory of gases, and interviews, to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem-solving based learning model with a multiple representations approach can be an alternative approach for improving students' MMA.
Numerical modelling of multiple scattering between two elastical particles
DEFF Research Database (Denmark)
Bjørnø, Irina; Jensen, Leif Bjørnø
1998-01-01
Multiple acoustical signal interactions with sediment particles in the vicinity of the seabed may significantly change the course of sediment concentration profiles determined by inversion from acoustical backscattering measurements. The scattering properties of high concentrations of sediments in suspension have been studied extensively since Foldy's formulation of his theory for isotropic scattering by randomly distributed scatterers. However, a number of important problems related to multiple scattering are still far from finding their solutions. A particular, but still unsolved, problem is the question of proximity thresholds for the influence of multiple scattering in terms of particle properties like volume fraction, average distance between particles, or other related parameters. A few available experimental data indicate a significance of multiple scattering in suspensions where the concentration...
Using Multiple Regression Analysis in Modelling the Role of ...
African Journals Online (AJOL)
of Internal Revenue, Tourism Bureau, and hotel records. The multiple regression ... additional guest facilities such as a restaurant, a swimming pool, or child care and social function ... and provide good quality service to the public.
Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model
Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi
2018-02-01
Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
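The stochastic differential equation approach to particle transport can be illustrated generically (this is not the iPATH code; the 1D geometry, velocity, and diffusion coefficient are arbitrary assumptions): each pseudo-particle follows dX = v dt + sqrt(2κ) dW, and the ensemble recovers advection of the mean with diffusive spreading Var(X) ≈ 2κt.

```python
import numpy as np

def transport_sde(n_particles, t_end, dt, v=1.0, kappa=0.5, rng=None):
    """Euler-Maruyama integration of dX = v dt + sqrt(2*kappa) dW for an
    ensemble of pseudo-particles (1D sketch of SDE-based transport)."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n_particles)
    n_steps = int(round(t_end / dt))
    for _ in range(n_steps):
        # Deterministic advection plus a Gaussian diffusive kick per step
        x += v * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)
    return x

x = transport_sde(n_particles=20000, t_end=4.0, dt=0.01, v=1.0, kappa=0.5, rng=3)
# Advection carries the ensemble mean to ~ v*t, diffusion spreads it
# to variance ~ 2*kappa*t.
```

A full transport model would add pitch-angle scattering, focusing, and a source at the shock; this sketch only shows the Monte Carlo integration step itself.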
An Additive-Multiplicative Cox-Aalen Regression Model
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
2002-01-01
Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects
Tools and Models for Integrating Multiple Cellular Networks
Energy Technology Data Exchange (ETDEWEB)
Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.
2015-11-06
In this grant, we have systematically investigated the integrated networks which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source; they are available to download from GitHub and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast, and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we developed several tools and carried out analyses for integrating system-wide genomic information. To make use of the structural data, we developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we organized the E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies and correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
Lyssavirus infection: 'low dose, multiple exposure' in the mouse model.
Banyard, Ashley C; Healy, Derek M; Brookes, Sharon M; Voller, Katja; Hicks, Daniel J; Núñez, Alejandro; Fooks, Anthony R
2014-03-06
The European bat lyssaviruses (EBLV-1 and EBLV-2) are zoonotic pathogens present within bat populations across Europe. The maintenance and transmission of lyssaviruses within bat colonies are poorly understood. Cases of repeated isolation of lyssaviruses from bat roosts have raised questions regarding the maintenance and intraspecies transmissibility of these viruses within colonies. Furthermore, the significance of seropositive bats in colonies remains unclear. Due to the protected nature of European bat species, and hence restrictions on working with the natural host for lyssaviruses, this study analysed the outcome following repeat inoculation of low doses of lyssaviruses in a murine model. A standardized dose of virus, EBLV-1, EBLV-2 or a 'street strain' of rabies (RABV), was administered via a peripheral route to attempt to mimic what is hypothesized as natural infection. Each mouse (n=10/virus/group/dilution) received four inoculations, two doses in each footpad over a period of four months, alternating footpads with each inoculation. Mice were tail-bled between inoculations to evaluate antibody responses to infection. Mice succumbed to infection after each inoculation, with 26.6% of mice developing clinical disease following the initial exposure across all dilutions (RABV, 32.5% (n=13/40); EBLV-1, 35% (n=13/40); EBLV-2, 12.5% (n=5/40)). Interestingly, the lowest dose caused clinical disease in some mice upon first exposure (RABV, 20% (n=2/10) after first inoculation; RABV, 12.5% (n=1/8) after second inoculation; EBLV-2, 10% (n=1/10) after primary inoculation). Furthermore, five mice developed clinical disease following the second exposure to live virus (RABV, n=1; EBLV-1, n=1; EBLV-2, n=3), although histopathological examination indicated that the primary inoculation was the most probable cause of death, based on the levels of inflammation and virus antigen distribution observed. All the remaining mice (RABV, n=26; EBLV-1, n=26; EBLV-2, n=29) survived the tertiary and
Rapid installation of numerical models in multiple parent codes
Energy Technology Data Exchange (ETDEWEB)
Brannon, R.M.; Wong, M.K.
1996-10-01
A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
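The contract MIG describes can be sketched in a few lines (a hypothetical illustration, not the actual MIG specification; all class and field names are invented): the model package declares its inputs and field variables declaratively, and the parent code owns all storage, so the model subroutines need no per-host changes.

```python
# Hypothetical MIG-like contract: the model declares what it needs;
# the "parent code" (any host) owns parameter and field-variable storage.

class LinearElasticModel:
    """Toy stress-strain model packaged behind a declarative interface."""

    # Standardized declarations the parent code queries at install time.
    inputs = {"youngs_modulus": 200e9}      # input name -> default value
    state_vars = ("stress",)                # field variables the parent stores

    def update(self, params, state, strain_increment):
        """Advance one step; the parent supplies params and state."""
        state["stress"] += params["youngs_modulus"] * strain_increment
        return state

class ParentCode:
    """Stand-in for any host code (hydrocode, FE code, ...)."""

    def __init__(self, model, user_inputs=None):
        self.model = model
        # Parent-side database management: defaults merged with user inputs.
        self.params = {**model.inputs, **(user_inputs or {})}
        self.state = {name: 0.0 for name in model.state_vars}

    def step(self, strain_increment):
        self.state = self.model.update(self.params, self.state, strain_increment)
        return self.state

host = ParentCode(LinearElasticModel(), user_inputs={"youngs_modulus": 100e9})
host.step(1e-4)   # stress -> 100e9 * 1e-4 = 1.0e7 Pa
```

The same model object could be handed to a different `ParentCode` implementation unchanged, which is the point of the guidelines.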
International Nuclear Information System (INIS)
Valor, A.; Caleyo, F.; Alfonso, L.; Rivas, D.; Hallen, J.M.
2007-01-01
In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion
Energy Technology Data Exchange (ETDEWEB)
Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 Havana (Cuba); Caleyo, F. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)]. E-mail: fcaleyo@gmail.com; Alfonso, L. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Rivas, D. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Hallen, J.M. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)
2007-02-15
In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion.
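The two-stage structure described above (stochastic pit initiation followed by stochastic growth, summarized by extreme value statistics of the deepest pit) can be sketched generically. The Weibull induction times, the simple pure-birth growth law, and all rates and units below are illustrative assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_max_pit_depth(t_end, n_pits=50, shape=2.0, scale=5.0, lam0=1.0):
    """One surface realization: Weibull-distributed induction times, then
    pure-birth Markov growth whose rate lam0/depth slows as the pit
    deepens. Returns the deepest pit (arbitrary units) at time t_end."""
    max_depth = 0
    for _ in range(n_pits):
        t = scale * rng.weibull(shape)      # induction time for this pit
        depth = 0
        while t < t_end:
            depth += 1
            # Exponential holding time before the next unit of growth;
            # mean holding time depth/lam0 grows as the pit deepens.
            t += rng.exponential(depth / lam0)
        max_depth = max(max_depth, depth)
    return max_depth

# Extreme-value view: the empirical distribution of maxima over many
# independent surfaces is what gets compared against measured pit data.
maxima = [simulate_max_pit_depth(t_end=20.0) for _ in range(200)]
```

Replacing the homogeneous pieces with a nonhomogeneous Poisson initiation process and a nonhomogeneous Markov growth kernel, as in the abstract, changes only the two sampling lines.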
An Additive-Multiplicative Restricted Mean Residual Life Model
DEFF Research Database (Denmark)
Mansourvar, Zahra; Martinussen, Torben; Scheike, Thomas H.
2016-01-01
An additive-multiplicative restricted mean residual life model is proposed to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model by using an additive model as its covariate-dependent baseline. For the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large-sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, we investigate a goodness-of-fit...
Brewe, Eric; Traxler, Adrienne; de la Garza, Jorge; Kramer, Laird H.
2013-12-01
We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26-0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
Directory of Open Access Journals (Sweden)
Eric Brewe
2013-10-01
Full Text Available We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26–0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
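The effect size reported above is a standard calculation; the sketch below uses the pooled-standard-deviation convention for Cohen's d with synthetic data (the survey scores here are hypothetical, not the study's, and the study's exact pooling convention is not stated in the abstract).

```python
import numpy as np

def cohens_d(pre, post):
    """Cohen's d for a pre/post shift, using the pooled standard
    deviation of the two samples."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n1, n2 = pre.size, post.size
    pooled_var = ((n1 - 1) * pre.var(ddof=1) +
                  (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2)
    return (post.mean() - pre.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(7)
pre = rng.normal(60.0, 10.0, 120)    # hypothetical pre-course scores
post = rng.normal(65.0, 10.0, 120)   # hypothetical post-course scores
d = cohens_d(pre, post)              # expected near (65 - 60)/10 = 0.5
```

A d of 0.45, as in the study, corresponds to a mean shift of roughly half a pooled standard deviation.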
Interstitial integrals in the multiple-scattering model
International Nuclear Information System (INIS)
Swanson, J.R.; Dill, D.
1982-01-01
We present an efficient method for the evaluation of integrals involving multiple-scattering wave functions over the interstitial region. Transformation of the multicenter interstitial wave functions to a single center representation followed by a geometric projection reduces the integrals to products of analytic angular integrals and numerical radial integrals. The projection function, which has the value 1 in the interstitial region and 0 elsewhere, has a closed-form partial-wave expansion. The method is tested by comparing its results with exact normalization and dipole integrals; the differences are 2% at worst and typically less than 1%. By providing an efficient means of calculating Coulomb integrals, the method allows treatment of electron correlations using a multiple scattering basis set
A new spatial multiple discrete-continuous modeling approach to land use change analysis.
2013-09-01
This report formulates a multiple discrete-continuous probit (MDCP) land-use model within a spatially explicit economic structural framework for land-use change decisions. The spatial MDCP model is capable of predicting both the type and intensit...
Steam consumption minimization model in a multiple evaporation effect in a sugar plant
International Nuclear Information System (INIS)
Villada, Fernando; Valencia, Jaime A; Moreno, German; Murillo, J. Joaquin
1992-01-01
In this work, a mathematical model to minimize the steam consumption in a multiple-effect evaporation system is presented. The model is based on the dynamic programming technique, and the results are tested at a Colombian sugar mill.
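The abstract gives no equations, but a steam-minimization problem over a multiple-effect train has a natural dynamic-programming form: choose a discrete evaporation duty for each effect so that a total evaporation demand is met at minimal steam. The sketch below is illustrative only; the steam-economy factors and duty options are hypothetical, not taken from the Colombian mill study.

```python
# Illustrative dynamic-programming sketch: assign a discrete evaporation duty
# to each effect so total evaporation meets demand at minimal steam use.
def min_steam(effects, duties, demand):
    """effects: steam-economy factors (kg evaporated per kg steam) per effect,
    duties: discrete evaporation duties (kg/h) an effect may take,
    demand: total evaporation required (kg/h)."""
    best = {0: 0.0}  # cumulative evaporation achieved -> minimal steam so far
    for econ in effects:
        nxt = {}
        for done, steam in best.items():
            for d in duties:
                nd = min(demand, done + d)       # cap overshoot at the demand
                cost = steam + d / econ          # steam needed for this duty
                if nd not in nxt or cost < nxt[nd]:
                    nxt[nd] = cost
        best = nxt
    return best.get(demand, float("inf"))

economies = [0.9, 1.0, 1.1, 1.2]   # hypothetical steam economy per effect
duties = [0, 500, 1000, 1500]      # hypothetical duty choices, kg/h
print(round(min_steam(economies, duties, 3000), 1))  # → 2613.6
```

The state is the cumulative evaporation achieved so far, and the recursion keeps only the cheapest steam cost per state, which is the standard value-function idea in dynamic programming.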
New Hybrid Variational Recovery Model for Blurred Images with Multiplicative Noise
DEFF Research Database (Denmark)
Dong, Yiqiu; Zeng, Tieyong
2013-01-01
A new hybrid variational model for recovering blurred images in the presence of multiplicative noise is proposed. Inspired by previous work on multiplicative noise removal, an I-divergence technique is used to build a strictly convex model under a condition that ensures the uniqueness...
Pursuing the method of multiple working hypotheses for hydrological modeling
Clark, M.P.; Kavetski, D.; Fenicia, F.
2011-01-01
Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding
Multiple Models of Reality and How to Use Them
Jamroga, W.J.; Blockeel, H.; Denecker, M.
2002-01-01
A virtual agent may obviously benefit from having an up-to-date model of her environment of activity. The model may include actual users' profiles, a dynamic environment characteristic or some assumptions being accepted by default. However, the agent doesn't have to stick to one model only, she can
A Review of the Modelling of Thermally Interacting Multiple Boreholes
Directory of Open Access Journals (Sweden)
Seama Koohi-Fayegh
2013-06-01
Much attention is now focused on utilizing ground heat pumps for heating and cooling buildings, as well as water heating, refrigeration and other thermal tasks. Modeling such systems is important for understanding, designing and optimizing their performance and characteristics. Several heat transfer models exist for ground heat exchangers. In this review article, challenges of modelling heat transfer in vertical heat exchangers are described, some analytical and numerical models are reviewed and compared, recent related developments are described and the importance of modelling these systems is discussed from a variety of aspects, such as sustainability of geothermal systems or their potential impacts on the ecosystems nearby.
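Among the analytical models typically reviewed for vertical ground heat exchangers, the infinite line-source model is the classic baseline: the temperature rise around a borehole is ΔT(r,t) = (q/4πk)·E1(r²/4αt), with E1 the exponential integral. A small self-contained sketch follows; the ground properties in the example call are hypothetical.

```python
import math

def exp_integral_E1(x):
    """E1(x) via its convergent series for moderate x > 0:
    E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^(n+1) x^n / (n * n!)."""
    euler_gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for n in range(1, 60):
        term *= -x / n       # term = (-x)^n / n!
        s += -term / n       # adds (-1)^(n+1) x^n / (n * n!)
    return -euler_gamma - math.log(x) + s

def line_source_dT(q, k, alpha, r, t):
    """Temperature rise at radius r and time t for an infinite line source of
    strength q (W/m) in ground of conductivity k and thermal diffusivity alpha."""
    return q / (4.0 * math.pi * k) * exp_integral_E1(r * r / (4.0 * alpha * t))

# hypothetical values: 50 W/m, k = 2 W/(m K), alpha = 1e-6 m^2/s, 30 days
print(line_source_dT(50.0, 2.0, 1e-6, 0.05, 2.6e6))
```

The series form of E1 is accurate for the small arguments r²/4αt that arise at borehole-wall distances and month-scale times; the temperature rise decays with radial distance, as expected.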
Multiple phase transitions in the generalized Curie-Weiss model
International Nuclear Information System (INIS)
Eisele, T.; Ellis, R.S.
1988-01-01
The generalized Curie-Weiss model is an extension of the classical Curie-Weiss model in which the quadratic interaction function of the mean spin value is replaced by a more general interaction function. It is shown that the generalized Curie-Weiss model can have a sequence of phase transitions at different critical temperatures. Both first-order and second-order phase transitions can occur, and explicit criteria for the two types are given. Three examples of generalized Curie-Weiss models are worked out in detail, including one example with infinitely many phase transitions. A number of results are derived using large-deviation techniques
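The phase-transition picture in the record above can be illustrated numerically: by the large-deviation principle, the equilibrium mean spin minimizes I(m) − βF(m), where I is the rate function for the mean of ±1 spins and F the interaction function. The sketch below uses a quartic F purely as an example of a non-quadratic interaction (the abstract does not say it is one of the paper's three worked examples) and exhibits a jump of the minimizer from m = 0 to a large m as β grows, i.e. a first-order transition.

```python
import math

def entropy_rate(m):
    """Large-deviation rate function I(m) for the mean of i.i.d. +/-1 spins."""
    if abs(m) >= 1.0:
        return float("inf")
    return 0.5 * ((1 + m) * math.log(1 + m) + (1 - m) * math.log(1 - m))

def equilibrium_m(beta, F, grid=2000):
    """Minimise the free-energy functional I(m) - beta*F(m) over m in [0, 1)."""
    best_m, best_v = 0.0, entropy_rate(0.0) - beta * F(0.0)
    for i in range(1, grid):
        m = i / grid * 0.999
        v = entropy_rate(m) - beta * F(m)
        if v < best_v:
            best_m, best_v = m, v
    return best_m

quartic = lambda m: m ** 4 / 4.0   # example of a non-quadratic interaction
print(equilibrium_m(0.5, quartic), equilibrium_m(5.0, quartic))
```

With the classical quadratic F(m) = m²/2 the same code recovers the second-order Curie-Weiss transition at β = 1, where the minimizer grows continuously from zero.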
Towards Integration of CAx Systems and a Multiple-View Product Modeller in Mechanical Design
Directory of Open Access Journals (Sweden)
H. Song
2005-01-01
Full Text Available This paper deals with the development of an integration framework and its implementation for the connexion of CAx systems and multiple-view product modelling. The integration framework is presented regarding its conceptual level and the implementation level is described currently with the connexion of a functional modeller, a multiple-view product modeller, an optimisation module and a CAD system. The integration between the multiple-view product modeller and CATIA V5 based on the STEP standard is described in detail. Finally, the presented works are discussed and future research developments are suggested.
Multiple Model Adaptive Control Using Dual Youla-Kucera Factorisation
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, Klaus
2012-01-01
We propose a multi-model adaptive control scheme for uncertain linear plants based on the concept of model unfalsification. The approach relies on examining the ability of a pre-computed set of plant-controller candidates and choosing the one that is best able to reproduce observed in- and output...
Multiple operating models for data linkage: A privacy positive
Directory of Open Access Journals (Sweden)
Katrina Irvine
2017-04-01
Our data linkage centre will implement new operating models with cascading levels of data handling on behalf of custodians. Sharing or publication of empirical evidence on timeframes, efficiency and quality can provide useful inputs in the design of new operating models and assist with the development of stakeholder and public confidence.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, when there is substantial missingness in the outcome variable, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation to misspecification of the restricted general location model.
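As an illustration of the full-conditional-specification idea discussed in the record above, the toy loop below alternately regresses each incomplete variable on the other and redraws its missing entries. It is a deliberately stripped-down sketch (one completed data set, fixed residual noise, no posterior draws of the coefficients), not the authors' procedure.

```python
import random

def ols(x, y):
    """Simple-regression coefficients (intercept, slope) by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def fcs_impute(x, y, iters=10, seed=1):
    """Fill None entries of x and y by iterating the conditional regressions
    y|x and x|y (a stylised full-conditional-specification loop)."""
    rng = random.Random(seed)

    def fill_mean(v):
        # crude start: replace missing values with the observed mean
        obs = [u for u in v if u is not None]
        m = sum(obs) / len(obs)
        return [m if u is None else u for u in v], [u is None for u in v]

    x, miss_x = fill_mean(x)
    y, miss_y = fill_mean(y)
    for _ in range(iters):
        a, b = ols(x, y)          # regress y on x, redraw missing y
        y = [a + b * xi + rng.gauss(0, 0.1) if miss else yi
             for xi, yi, miss in zip(x, y, miss_y)]
        a, b = ols(y, x)          # regress x on y, redraw missing x
        x = [a + b * yi + rng.gauss(0, 0.1) if miss else xi
             for xi, yi, miss in zip(x, y, miss_x)]
    return x, y
```

Proper multiple imputation would repeat this with draws of the regression coefficients to produce several completed data sets, which is where the efficiency and robustness comparisons in the abstract apply.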
Knowledge representation to support reasoning based on multiple models
Gillam, April; Seidel, Jorge P.; Parker, Alice C.
1990-01-01
Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.
Multiplicative quiver varieties and generalised Ruijsenaars-Schneider models
Chalykh, Oleg; Fairon, Maxime
2017-11-01
We study some classical integrable systems naturally associated with multiplicative quiver varieties for the (extended) cyclic quiver with m vertices. The phase space of our integrable systems is obtained by quasi-Hamiltonian reduction from the space of representations of the quiver. Three families of Poisson-commuting functions are constructed and written explicitly in suitable Darboux coordinates. The case m = 1 corresponds to the tadpole quiver and the Ruijsenaars-Schneider system and its variants, while for m > 1 we obtain new integrable systems that generalise the Ruijsenaars-Schneider system. These systems and their quantum versions also appeared recently in the context of supersymmetric gauge theory and cyclotomic DAHAs (Braverman et al. [32,34,35] and Kodera and Nakajima [36]), as well as in the context of the Macdonald theory (Chalykh and Etingof, 2013).
The impacts of multiple stressors to model ecological structures
International Nuclear Information System (INIS)
Landis, W.G.; Kelly, S.A.; Markiewicz, A.J.; Matthews, R.A.; Matthews, G.B.
1995-01-01
The basis of the community conditioning hypothesis is that ecological structures are the result of their unique etiology. Systems that have been exposed to a variety of stressors should reflect this history. The authors have conducted a series of microcosm experiments that can compare the effects of multiple stressors upon community dynamics. The microcosm protocols are derived from the Standardized Aquatic Microcosm (SAM) and include Lemna and additional protozoan species. Two multiple-stressor experiments have been conducted. In an extended-length SAM (ELSAM), two of four treatments were dosed with the turbine fuel JP-8 one week into the experiment. Two treatments were later exposed to the heat stress, one that had received jet fuel and one that had not. A similar ELSAM was conducted in which a further addition of JP-8, rather than the heat shock, served as the second stressor. Biological, physical and chemical data were analyzed with multivariate techniques including nonmetric clustering and association analysis. Space-time worms and phase diagrams were also employed to ascertain the dynamic relationships of variables identified as important by the multivariate techniques. The experiments do not result in a simple additive linear response to the additional stressor. Examination of the relative population dynamics reveals alterations in trajectories that suggest treatment-related effects. As in previous single-stressor experiments, recovery does not occur even after extended experimental periods. The authors are now attempting to measure the resulting trajectories, changes in similarity vectors and overall dynamics. However, community conditioning does appear to be an important framework in understanding systems with a heterogeneous array of stressors.
A heart model with multiple chambers for myocardial scintigraphy
International Nuclear Information System (INIS)
Pretschner, D.P.; Hundeshagen, H.
1980-01-01
A heart model is presented that consists of individual segments that can be filled with activity. The mechanics allow the position to be varied in order to generate different views for evaluating the performance of scintigraphic systems. (orig.) [de
Multiple Model Particle Filtering For Multi-Target Tracking
National Research Council Canada - National Science Library
Hero, Alfred; Kreucher, Chris; Kastella, Keith
2004-01-01
.... The details of this method have been presented elsewhere [1]. One feature of real targets is that they are poorly described by a single kinematic model: target behavior may change dramatically, i.e...
Computational Modeling of Human Multiple-Task Performance
National Research Council Canada - National Science Library
Kieras, David E; Meyer, David
2005-01-01
This is the final report for a project that was a continuation of an earlier, long-term project on the development and validation of the EPIC cognitive architecture for modeling human cognition and performance...
A Bose-Einstein model of particle multiplicity distributions
Energy Technology Data Exchange (ETDEWEB)
Mekjian, A.Z. [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States) and California Institute of Technology, Kellogg Radiation Lab., Pasadena, CA 91106 (United States) and MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary)]. E-mail: amekjian@physics.rutgers.edu; Csoergoe, T. [MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary); Hegyi, S. [MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary)
2007-03-01
A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ.
A Bose-Einstein model of particle multiplicity distributions
International Nuclear Information System (INIS)
Mekjian, A.Z.; Csoergoe, T.; Hegyi, S.
2007-01-01
A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ
A Bose Einstein model of particle multiplicity distributions
Mekjian, A. Z.; Csörgö, T.; Hegyi, S.
2007-03-01
A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ.
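The three records above compare the Bose-Einstein model against "other commonly used models of particle production". The most common benchmark is the negative binomial multiplicity distribution; the sketch below is an illustrative stand-in (not the paper's model) that evaluates it in log-space so large multiplicities do not overflow, and reads off the void probability P(0), the quantity whose scaling behaviour the abstract ties to the exponent τ.

```python
import math

def nbd_pn(n, nbar, k):
    """Negative binomial multiplicity distribution P(n) with mean nbar and
    shape k, computed via lgamma so large n does not overflow gamma()."""
    r = nbar / (nbar + k)
    logp = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
            + n * math.log(r) + k * math.log(1.0 - r))
    return math.exp(logp)

probs = [nbd_pn(n, 10.0, 2.0) for n in range(400)]
total = sum(probs)                              # should be ~1 (normalisation)
mean = sum(n * p for n, p in enumerate(probs))  # should be ~nbar
void = nbd_pn(0, 10.0, 2.0)                     # P(0), the void probability
```

For the negative binomial, the void probability has the closed form P(0) = (k/(n̄+k))^k, so the numerical value can be checked against the formula directly.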
Discrete modeling of multiple discontinuities in rock mass using XFEM
Das, Kamal C.; Ausas, Roberto Federico; Carol, Ignacio; Rodrigues, Eduardo; Sandeep, Sandra; Vargas, P. E.; Gonzalez, Nubia Aurora; Segura, Josep María; Lakshmikantha, Ramasesha Mookanahallipatna; Mello, U.
2017-01-01
Modeling of discontinuities (fractures and fault surfaces) is of major importance to assess the geomechanical behavior of oil and gas reservoirs, especially for tight and unconventional reservoirs. Numerical analysis of discrete discontinuities traditionally has been studied using interface element concepts, however more recently there are attempts to use extended finite element method (XFEM). The development of an XFEM tool for geo-mechanical fractures/faults modeling has significant industr...
Model-based monitoring of rotors with multiple coexisting faults
International Nuclear Information System (INIS)
Rossner, Markus
2015-01-01
Monitoring systems are applied to many rotors, but only a few monitoring systems can separate coexisting errors and identify their quantity. This research project solves this problem using a combination of signal-based and model-based monitoring. The signal-based part performs a pre-selection of possible errors; these errors are further separated with model-based methods. This approach is demonstrated for the errors unbalance, bow, stator-fixed misalignment, rotor-fixed misalignment and roundness errors. For the model-based part, unambiguous error definitions and models are set up. The Ritz approach reduces the model order and therefore speeds up the diagnosis. Identification algorithms are developed for the different rotor faults. To this end, reliable damage indicators and proper substeps of the diagnosis have to be defined. For several monitoring problems, measuring both deflection and bearing force is very useful. The monitoring system is verified by experiments on an academic rotor test rig. The interpretation of the measurements requires much knowledge concerning the dynamics of the rotor. Due to the model-based approach, the system can separate errors with similar signal patterns and identify bow and roundness error online at operation speed. [de
Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium
Harrison, Allan G.; De Jong, Onno
2005-01-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…
Simple model for multiple-choice collective decision making.
Lee, Ching Hua; Lucas, Andrew
2014-11-01
We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.
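A minimal simulation of the model class described above (heterogeneous agents, n discrete choices, mean-field social coupling) exhibits the spontaneous symmetry breaking the authors analyze: with strong coupling, best-response dynamics collapse onto one choice even though all choices are a priori equivalent. The parameter values below are arbitrary illustrations, not taken from the paper.

```python
import random

def best_response_dynamics(n_choices=3, n_agents=300, coupling=10.0,
                           rounds=20, seed=0):
    """Agents repeatedly pick the choice c maximising
    coupling * (fraction currently choosing c) + private random field h[c]."""
    rng = random.Random(seed)
    fields = [[rng.random() for _ in range(n_choices)]
              for _ in range(n_agents)]                    # heterogeneity
    choices = [rng.randrange(n_choices) for _ in range(n_agents)]
    for _ in range(rounds):
        frac = [choices.count(c) / n_agents for c in range(n_choices)]
        choices = [max(range(n_choices),
                       key=lambda c: coupling * frac[c] + h[c])
                   for h in fields]                        # synchronous update
    # return the fraction of agents on the most popular choice
    return max(choices.count(c) for c in range(n_choices)) / n_agents
```

With coupling = 0 each agent simply follows its private field and the choices stay roughly balanced; with large coupling a small initial imbalance is amplified round by round into consensus, the discrete analogue of the symmetry-broken phase.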
Chlorpyrifos PBPK/PD model for multiple routes of exposure.
Poet, Torka S; Timchalk, Charles; Hotchkiss, Jon A; Bartels, Michael J
2014-10-01
1. Chlorpyrifos (CPF) is an important pesticide used to control crop insects. Human exposures to CPF will occur primarily through oral exposure to residues on foods. A physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model has been developed that describes the relationship between oral, dermal and inhalation doses of CPF and key events in the pathway for cholinergic effects. The model was built on a prior oral model that addressed age-related changes in metabolism and physiology. This multi-route model was developed in rats and humans to validate all scenarios in a parallelogram design. 2. Critical biological effects from CPF exposure require metabolic activation to CPF oxon, and small amounts of metabolism in tissues will potentially have a great effect on pharmacokinetics and pharmacodynamic outcomes. Metabolism (bioactivation and detoxification) was therefore added in diaphragm, brain, lung and skin compartments. Pharmacokinetic data are available for controlled human exposures via the oral and dermal routes and from oral and inhalation studies in rats. The validated model was then used to determine relative dermal versus inhalation uptake from human volunteers exposed to CPF in an indoor scenario.
A measurement model of multiple intelligence profiles of management graduates
Krishnan, Heamalatha; Awang, Siti Rahmah
2017-05-01
In this study, the main interest is to develop a well-fitting measurement model and to identify the best-fitting items representing Howard Gardner's nine intelligences, namely musical intelligence, bodily-kinaesthetic intelligence, mathematical/logical intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence, intrapersonal intelligence, naturalist intelligence and spiritual intelligence, in order to enhance the employability opportunities of management graduates. To develop a fit measurement model, Structural Equation Modeling (SEM) was applied. A psychometric test, the Ability Test in Employment (ATIEm), was used as the instrument to measure the existence of the nine types of intelligence in 137 Universiti Teknikal Malaysia Melaka (UTeM) management graduates for job placement purposes. The initial measurement model contains nine unobserved variables, each measured by ten observed variables. Finally, a modified measurement model with improved fit was developed: Normed chi-square (NC) = 1.331, Incremental Fit Index (IFI) = 0.940 and Root Mean Square Error of Approximation (RMSEA) = 0.049. The findings showed that the UTeM management graduates possessed all nine intelligences to varying degrees. Musical intelligence, mathematical/logical intelligence, naturalist intelligence and spiritual intelligence contributed the highest loadings on certain items. However, most of the other intelligences, such as bodily-kinaesthetic intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence and intrapersonal intelligence, were possessed only at borderline levels.
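For readers unfamiliar with the fit indices quoted in the record above (NC, IFI, RMSEA), they are simple functions of the model and baseline chi-square statistics. The sketch below uses hypothetical chi-square values for illustration, not the study's data.

```python
import math

def fit_indices(chi2, df, n, null_chi2):
    """Normed chi-square (NC), Incremental Fit Index (IFI) and RMSEA.
    null_chi2 is the chi-square of the independence (baseline) model."""
    nc = chi2 / df
    ifi = (null_chi2 - chi2) / (null_chi2 - df)
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return nc, ifi, rmsea

# hypothetical example for a sample of n = 137 (not the study's statistics)
nc, ifi, rmsea = fit_indices(chi2=120.0, df=90, n=137, null_chi2=600.0)
```

Commonly cited rules of thumb are NC below 3, IFI above 0.90 and RMSEA below 0.06, which the values reported in the abstract satisfy.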
Multiple bifurcations and periodic 'bubbling' in a delay population model
International Nuclear Information System (INIS)
Peng Mingshu
2005-01-01
In this paper, the flip bifurcation and period-doubling bifurcations of a discrete population model without delay are first studied, and the phenomenon of Feigenbaum's cascade of period doublings is observed. Secondly, we explore the Neimark-Sacker bifurcation in the delay population model (a two-dimensional discrete dynamical system) and the unique stable closed invariant curve that bifurcates from the nontrivial fixed point. Finally, a computer-assisted study of the delay population model is carried out. Our computer simulations show that the introduction of a delay effect in a nonlinear difference equation derived from the logistic map leads to much richer dynamic behavior, such as stable node → stable focus → a lower-dimensional closed invariant curve (quasi-periodic solution, limit cycle) or/and stable periodic solutions → chaotic attractor by cascading bubbles (the combination of potential period doubling and reverse period doubling), the sudden change between two different attractors, etc.
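The delayed logistic map x[t+1] = r·x[t]·(1 − x[t−1]) is the standard minimal example of the behaviour described: the positive fixed point x* = 1 − 1/r is stable below r = 2, and a Neimark-Sacker bifurcation at r = 2 produces a closed invariant curve. (The abstract does not state the paper's exact map; this is an illustrative stand-in.)

```python
def delayed_logistic(r, steps=2000, x0=0.5):
    """Iterate the delayed logistic map x[t+1] = r * x[t] * (1 - x[t-1])."""
    xs = [x0, x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-2]))
    return xs

def late_amplitude(r):
    """Peak-to-peak amplitude over the last 200 iterates: ~0 at a stable
    fixed point, visibly positive on an invariant curve."""
    tail = delayed_logistic(r)[-200:]
    return max(tail) - min(tail)
```

Below the bifurcation (e.g. r = 1.5) the late-time amplitude collapses to zero as the orbit spirals into x*; just above it (e.g. r = 2.1) a sustained quasi-periodic oscillation of visible amplitude appears.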
Semantics of Temporal Models with Multiple Temporal Dimensions
DEFF Research Database (Denmark)
Kraft, Peter; Sørensen, Jens Otto
ending up with lexical data models. In particular we look upon the representations by sets of normalised tables, by sets of 1NF tables and by sets of N1NF/nested tables. At each translation step we focus on how the temporal semantics are consistently maintained. In this way we recognise the requirements...... for representation of temporal properties in different models and the correspondence between the models. The results rely on the assumptions that the temporal dimensions are interdependent and ordered. Thus for example the valid periods of existences of a property in a mini world are dependent on the transaction...... periods in which the corresponding recordings are valid. This is not the normal way of looking at temporal dimensions and we give arguments supporting our assumption....
Modeling of plates with multiple anisotropic layers and residual stress
DEFF Research Database (Denmark)
Engholm, Mathias; Pedersen, Thomas; Thomsen, Erik Vilain
2016-01-01
Usually the analytical approach for modeling of plates uses the single layer plate equation to obtain the deflection and does not take anisotropy and residual stress into account. Based on the stress–strain relation of each layer and balancing stress resultants and bending moments, a general...... multilayered anisotropic plate equation is developed for plates with an arbitrary number of layers. The exact deflection profile is calculated for a circular clamped plate of anisotropic materials with residual bi-axial stress.From the deflection shape the critical stress for buckling is calculated......, and an excellent agreement between the two models is seen with a relative difference of less than 2% for all calculations. The model was also used to extract the cell capacitance, the parasitic capacitance and the residual stress of a pressure sensor composed of a multilayered plate of silicon and silicon oxide...
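The single-layer baseline that the multilayer model in the record above generalizes is the classical clamped circular plate: under uniform pressure p the deflection is w(r) = p(a² − r²)²/(64D), with flexural rigidity D = Eh³/(12(1 − ν²)). A sketch of that baseline follows; the material numbers in the example are hypothetical, silicon-like values, and the multilayer anisotropic extension of the abstract is not reproduced here.

```python
def flexural_rigidity(E, h, nu):
    """D = E h^3 / (12 (1 - nu^2)) for a single isotropic layer."""
    return E * h ** 3 / (12.0 * (1.0 - nu ** 2))

def clamped_plate_deflection(p, a, D, r):
    """Uniformly loaded, clamped circular plate of radius a:
    w(r) = p (a^2 - r^2)^2 / (64 D)."""
    return p * (a ** 2 - r ** 2) ** 2 / (64.0 * D)

D = flexural_rigidity(E=169e9, h=2e-6, nu=0.28)       # hypothetical Si-like
w0 = clamped_plate_deflection(p=1e5, a=500e-6, D=D, r=0.0)  # centre deflection
```

The multilayer anisotropic plate equation replaces the single D by stress resultants and bending moments summed over the layers, and adds the residual-stress term the abstract mentions.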
Dynamic information architecture system (DIAS) : multiple model simulation management
International Nuclear Information System (INIS)
Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.
2002-01-01
Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object with which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers can schedule other events; create or remove Entities from the
Multiple sclerosis care: an integrated disease-management model.
Burks, J
1998-04-01
A disease-management model must be integrated, comprehensive, focused on the individual patient, and outcome driven. In addition to high-quality care, the successful model must reduce variations in care and costs. MS specialists need to be intimately involved in the long-term care of MS patients, while not neglecting primary care issues. A nurse care manager is the "glue" between the managed care company, health care providers and the patient/family. Disease management focuses on education and prevention, and can be cost effective as well as patient specific. To implement a successful program, managed care companies and health care providers must work together.
Dynamic information architecture system (DIAS) : multiple model simulation management.
Energy Technology Data Exchange (ETDEWEB)
Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.
2002-05-13
Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object with which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers
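The architecture described in the two DIAS records above (Entity objects holding Parameter state, behaviours touching only Entities, and a Simulation Manager dispatching time-ordered events) can be sketched in a few dozen lines. All class, event and parameter names here are illustrative, not DIAS's actual API.

```python
import heapq

class Entity:
    """Domain entity: models interact only through its parameter state."""
    def __init__(self, name, **parameters):
        self.name = name
        self.parameters = dict(parameters)

class SimulationManager:
    """Discrete-event loop: handlers registered per event kind may update
    Entity parameters and schedule further events."""
    def __init__(self):
        self.queue, self.handlers, self.counter = [], {}, 0
    def register(self, kind, handler):
        self.handlers.setdefault(kind, []).append(handler)
    def schedule(self, time, kind, entity):
        # the counter breaks ties so entities are never compared directly
        heapq.heappush(self.queue, (time, self.counter, kind, entity))
        self.counter += 1
    def run(self):
        log = []
        while self.queue:
            time, _, kind, entity = heapq.heappop(self.queue)
            log.append((time, kind, entity.name))
            for handler in self.handlers.get(kind, []):
                handler(self, time, entity)
        return log

# illustrative scenario: rainfall events drive a watershed entity
watershed = Entity("watershed", storage=0.0)

def on_rain(sim, time, entity):
    entity.parameters["storage"] += 1.0
    sim.schedule(time + 1.0, "runoff", entity)  # a handler schedules an event

sim = SimulationManager()
sim.register("rain", on_rain)
sim.schedule(2.0, "rain", watershed)
sim.schedule(0.5, "rain", watershed)
event_log = sim.run()
```

The heap guarantees time-ordered processing regardless of scheduling order, mirroring the abstract's requirement that registered handlers process events during execution and may schedule further events.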
Innovative supply chain optimization models with multiple uncertainty factors
DEFF Research Database (Denmark)
Choi, Tsan Ming; Govindan, Kannan; Li, Xiang
2017-01-01
Uncertainty is an inherent factor that affects all dimensions of supply chain activities. In today’s business environment, initiatives to deal with one specific type of uncertainty might not be effective since other types of uncertainty factors and disruptions may be present. These factors relate...... to supply chain competition and coordination. Thus, to achieve a more efficient and effective supply chain requires the deployment of innovative optimization models and novel methods. This preface provides a concise review of critical research issues regarding innovative supply chain optimization models...
Network formation under heterogeneous costs: The multiple group model
Kamphorst, J.J.A.; van der Laan, G.
2007-01-01
It is widely recognized that the shape of networks influences both individual and aggregate behavior. This raises the question which types of networks are likely to arise. In this paper we investigate a model of network formation, where players are divided into groups and the costs of a link between
Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations
DEFF Research Database (Denmark)
Padfield, Nicolas; Andreasen, Troels
2012-01-01
on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...
Multiple Linear Regression Model for Estimating the Price of a ...
African Journals Online (AJOL)
Ghana Mining Journal ... In the modeling, the Ordinary Least Squares (OLS) normality assumption, which could introduce errors into the statistical analyses, was addressed by log transformation of the data, ensuring the data are approximately normally distributed ... The resultant MLRM is Ŷᵢ = xᵢ′(X′X)⁻¹X′Y, where X is the sample data matrix.
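As a sketch of the estimator in the abstract, the OLS solution β̂ = (X′X)⁻¹X′Y can be computed directly after the log transformation. The data below are synthetic and noise-free for illustration, not the Ghana mining data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.uniform(1, 10, n)
x2 = rng.uniform(1, 10, n)
y = np.exp(0.5 + 0.3 * np.log(x1) + 0.2 * np.log(x2))  # noise-free, for illustration

# Log-transform, then build the design matrix with an intercept column.
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
Y = np.log(y)
beta = np.linalg.solve(X.T @ X, X.T @ Y)  # (X'X)^{-1} X'Y
y_hat = X @ beta                          # fitted values x_i' beta
print(np.round(beta, 3))                  # [0.5 0.3 0.2]
```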
A multiple-compartment model for biokinetics studies in plants
International Nuclear Information System (INIS)
Garcia, Fermin; Pietrobron, Flavio; Fonseca, Agnes M.F.; Mol, Anderson W.; Rodriguez, Oscar; Guzman, Fernando
2001-01-01
In the present work, the system of linear equations based on Assimakopoulos's general GMCM model is used to develop a new method for determining flow parameters and transfer coefficients in plants. The need for mathematical models to quantify the penetration of a trace substance in animals and plants has often been stressed in the literature. Usually, in radiological environment studies, the mean value of contaminant concentrations over the whole plant body or its edible part is used, without taking into account regularities of vegetable physiology. In this work, the concepts and mathematical formulation of a Vegetable Multi-compartment Model (VMCM), which takes the plant's physiological regularities into account, are presented. The model, based on the general ideas of the GMCM and the statistical least-squares method STATFLUX, is proposed for use in the inverse sense: the experimental time dependence of the concentration in each compartment is the input, and the parameters are determined from these data by a statistical approach. The case of uranium metabolism is discussed. (author)
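The inverse use of a compartment model can be illustrated with a single transfer coefficient: simulate known kinetics, then recover the coefficient from the "observed" concentration time series by least squares. The two-compartment setup and the rate value are illustrative, not uranium data.

```python
import numpy as np

# Two compartments (root -> shoot) with known transfer coefficient k.
k = 0.15                       # true root->shoot transfer rate (1/day)
t = np.linspace(0, 20, 201)
dt = t[1] - t[0]
root = np.exp(-k * t)          # analytic solution, root compartment
shoot = 1 - root               # all tracer ends up in the shoot

# Inverse sense: d(root)/dt = -k * root, so regress the derivative on root.
d_root = np.gradient(root, dt)
k_hat = -np.sum(d_root * root) / np.sum(root * root)
print(round(k_hat, 3))
```

With noisy real data the same least-squares step would be applied to every compartment pair, which is the statistical inverse approach the abstract describes.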
Modeling of Optimal Power Generation using Multiple Kites
Williams, P.; Lansdorp, B.; Ockels, W.J.
2008-01-01
Kite systems have the potential to revolutionize energy generation. Large scale systems are envisioned that can fly autonomously in “power generation” cycles which drive a ground-based generator. In order for such systems to produce power efficiently, good models of the system are required. This
On the thermoluminescent interactive multiple-trap system (IMTS) model: is it a simple model?
International Nuclear Information System (INIS)
Gil T, M. I.; Perez C, L.; Cruz Z, E.; Furetta, C.; Roman L, J.
2016-10-01
In the thermally stimulated luminescence phenomenon, known as thermoluminescence (TL), the electrons and holes generated by the radiation-matter interaction can be trapped at metastable levels in the band gap of the solid. Subsequently, the electrons can be thermally released into the conduction band and recombine radiatively with holes at recombination centres, emitting the glow curve. The mechanisms of trapping and thermal release occurring in the band gap of the solid are, however, complex. Models such as first-, second- and general-order kinetics are well established to explain the behaviour of glow curves and the underlying defect recombination mechanisms. In this work, expressions for an Interactive Multiple-Trap System (IMTS) model were obtained assuming a set of discrete electron traps (active traps, AT), a set of thermally disconnected traps (TDT), and a recombination centre (RC). A numerical analysis based on the Levenberg-Marquardt method in conjunction with an implicit Rosenbrock method was used to simulate the glow curve. The numerical method was tested on synthetic TL glow curves for a wide range of trap parameters. The activation energy and kinetics order were determined using values from the General Order Kinetics (GOK) model as input data to the IMTS model. The model was tested using experimental glow curves obtained from Ce- or Eu-doped MgF2 (LiF) polycrystal samples. The results show that the IMTS model predicts the behaviour of the TL glow curves more accurately than the GOK model modified by Rasheedy and the Mixed Order Kinetics model. (Author)
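For orientation, the simplest limiting case that the IMTS model generalises, a first-order (Randall-Wilkins) glow peak under linear heating, can be simulated in a few lines. The trap parameters are typical textbook values, not those fitted in the paper.

```python
import numpy as np

kB = 8.617e-5            # Boltzmann constant, eV/K
E, s = 1.0, 1e12         # activation energy (eV), frequency factor (1/s)
beta = 1.0               # linear heating rate, K/s
T = np.linspace(300, 500, 2000)

# I(T) ~ s * exp(-E/kT) * exp(-(s/beta) * integral of exp(-E/kT') dT')
p = s * np.exp(-E / (kB * T))
integral = np.cumsum(p) * (T[1] - T[0]) / beta
I = p * np.exp(-integral)
T_max = T[np.argmax(I)]
print(round(T_max, 1))   # peak temperature in K
```

The IMTS model replaces this single rate equation with coupled ODEs for active traps, thermally disconnected traps and the recombination centre, which is why a stiff solver (the implicit Rosenbrock method) is needed there.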
MODELING OF TARGETED DRUG DELIVERY PART II. MULTIPLE DRUG ADMINISTRATION
Directory of Open Access Journals (Sweden)
A. V. Zaborovskiy
2017-01-01
Full Text Available In oncology practice, despite significant advances in early cancer detection, surgery, radiotherapy, laser therapy, targeted therapy, etc., chemotherapy is unlikely to lose its relevance in the near future. In this context, the development of new antitumor agents is one of the most important problems of cancer research. In spite of the importance of searching for new compounds with antitumor activity, the possibilities of the “old” agents have not been fully exhausted. Targeted delivery of antitumor agents can give them a “second life”. When developing new targeted drugs and their further introduction into clinical practice, the change in their pharmacodynamics and pharmacokinetics plays a special role. The paper describes a pharmacokinetic model of the targeted drug delivery. The conditions under which it is meaningful to search for a delivery vehicle for the active substance were described. Primary screening of antitumor agents was undertaken to modify them for the targeted delivery based on underlying assumptions of the model.
Modeling of CMUTs with Multiple Anisotropic Layers and Residual Stress
DEFF Research Database (Denmark)
Engholm, Mathias; Thomsen, Erik Vilain
2014-01-01
Usually the analytical approach for modeling CMUTs uses the single layer plate equation to obtain the deflection and does not take anisotropy and residual stress into account. A highly accurate model is developed for analytical characterization of CMUTs taking an arbitrary number of layers...... and residual stress into account. Based on the stress-strain relation of each layer and balancing stress resultants and bending moments, a general multilayered anisotropic plate equation is developed for plates with an arbitrary number of layers. The exact deflection profile is calculated for a circular...... clamped plate of anisotropic materials with residual bi-axial stress. From the deflection shape the critical stress for buckling is calculated and by using the Rayleigh-Ritz method the natural frequency is estimated....
Standard model fermion hierarchies with multiple Higgs doublets
International Nuclear Information System (INIS)
Solaguren-Beascoa Negre, Ana
2016-01-01
The hierarchies between the Standard Model (SM) fermion masses and mixing angles and the origin of neutrino masses are two of the biggest mysteries in particle physics. We extend the SM with new Higgs doublets to solve these issues. The lightest fermion masses and the mixing angles are generated through radiative effects, correctly reproducing the hierarchy pattern. Neutrino masses are generated in the see-saw mechanism.
Optical model with multiple band couplings using soft rotator structure
Martyanov, Dmitry; Soukhovitskii, Efrem; Capote, Roberto; Quesada, Jose Manuel; Chiba, Satoshi
2017-09-01
A new dispersive coupled-channel optical model (DCCOM) is derived that describes nucleon scattering on 238U and 232Th targets using a soft-rotator-model (SRM) description of the collective levels of the target nucleus. SRM Hamiltonian parameters are adjusted to the observed collective levels of the target nucleus. SRM nuclear wave functions (mixed in the K quantum number) have been used to calculate coupling matrix elements of the generalized optical model. Five rotational bands are coupled: the ground-state band, the β-, γ- and non-axial bands, and a negative-parity band. Such a coupling scheme includes almost all levels below 1.2 MeV of excitation energy of the targets. The "effective" deformations that define inter-band couplings are derived from SRM Hamiltonian parameters. Conservation of nuclear volume is enforced by introducing a monopolar deformed potential, leading to additional couplings between rotational bands. The present DCCOM describes the total cross-section differences between 238U and 232Th targets within experimental uncertainty from 50 keV up to 200 MeV of neutron incident energy. SRM couplings and volume conservation allow a precise calculation of the compound-nucleus (CN) formation cross sections, which is significantly different from the one calculated with rigid-rotor potentials with any number of coupled levels.
Modeling water demand when households have multiple sources of water
Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.
2014-07-01
A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.
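The elasticity argument can be made concrete with a log-log demand curve, where the slope coefficient is the own-price elasticity. The data below are synthetic and noise-free, not the study's survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.5, 3.0, 200)
q = 10 * p ** (-1.4)      # true own-price elasticity -1.4 (elastic demand)

# In ln q = a + e * ln p, the coefficient e is the own-price elasticity.
X = np.column_stack([np.ones_like(p), np.log(p)])
a, e = np.linalg.lstsq(X, np.log(q), rcond=None)[0]
print(round(e, 2))  # -1.4
```

Omitting close substitutes from the model biases such an estimate toward zero (less elastic), which is the downward bias the abstract warns about.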
An architecture model for multiple disease management information systems.
Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei
2013-04-01
Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and optimize an inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances the reusability and reduces the time needed for information system development. The proposed architecture model was successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumed analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the time was consumed, and the workflow had been improved. The overall user response is positive. The supportiveness during daily workflow is high. The system empowers the case managers with better information and leads to better decision making.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involved the relationship between 20 variates of the top soil that were analyzed prior to planting of paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower value of the mean square error.
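A minimal sketch of the hybrid scheme (synthetic two-regime data, not MARDI's trials): fuzzy c-means splits the responses into two clusters, a separate OLS line is fitted per cluster, and the mean squared error is compared with a single global fit.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = np.where(x < 5, 2 + 0.5 * x, 8 + 1.5 * (x - 5))   # two regimes

def fcm_1d(v, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means for 1-D data; returns cluster memberships."""
    centers = np.percentile(v, [25, 75]).astype(float)
    for _ in range(iters):
        d = np.abs(v[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * v[:, None]).sum(0) / (u ** m).sum(0)
    return u

def ols_mse(xs, ys):
    X = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return np.mean((X @ beta - ys) ** 2)

u = fcm_1d(y)                              # cluster on the yield, as in the paper
labels = u.argmax(axis=1)
hybrid_mse = np.mean([ols_mse(x[labels == k], y[labels == k]) for k in (0, 1)])
global_mse = ols_mse(x, y)
print(hybrid_mse < global_mse)             # the hybrid fit has lower error
```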
Iliceto, Paolo; Pompili, Maurizio; Spencer-Thomas, Sally; Ferracuti, Stefano; Erbuto, Denise; Lester, David; Candilera, Gabriella; Girardi, Paolo
2013-03-01
Occupational stress is a multivariate process involving sources of pressure, psycho-physiological distress, locus of control, work dissatisfaction, depression, anxiety, mental health disorders, hopelessness, and suicide ideation. Healthcare professionals are known for higher rates of occupational distress (burnout and compassion fatigue) and higher rates of suicide. The purpose of this study was to explain the relationships between occupational stress and some psychopathological dimensions in a sample of health professionals. We investigated 156 nurses and physicians, 62 males and 94 females, who were administered self-report questionnaires to assess occupational stress [occupational stress inventory (OSI)], temperament (temperament evaluation of Memphis, Pisa, Paris, and San Diego autoquestionnaire), and hopelessness (Beck hopelessness scale). The best Multiple Indicators Multiple Causes model with five OSI predictors yielded the following results: χ2(9) = 14.47 (p = 0.11); χ2/df = 1.60; comparative fit index = 0.99; root mean square error of approximation = 0.05. This model provided a good fit to the empirical data, showing a strong direct influence of causal variables such as work dissatisfaction, absence of type A behavior, and especially external locus of control, as well as psychological and physiological distress, on the latent variable psychopathology. Occupational stress is in a complex relationship with temperament and hopelessness and is also common among healthcare professionals.
Deterministic integer multiple firing depending on initial state in Wang model
Energy Technology Data Exchange (ETDEWEB)
Xie Yong [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)]. E-mail: yxie@mail.xjtu.edu.cn; Xu Jianxue [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang Jun [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)
2006-12-15
We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of this firing pattern is that the histogram of interspike intervals has a multipeaked structure, with peaks located at approximately integer multiples of a basic interspike interval. Since the Wang model is noise-free, integer multiple firing is a deterministic firing pattern. The existence of bistability leads to deterministic integer multiple firing that depends on the initial state of the model neuron, i.e., the initial values of the state variables.
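The multipeaked interspike-interval (ISI) histogram signature can be checked as below. The spike intervals are synthesised directly for illustration rather than integrated from the Wang model equations.

```python
import numpy as np

rng = np.random.default_rng(3)
base = 25.0                                         # basic ISI, ms
multiples = rng.choice([1, 2, 3], size=500, p=[0.5, 0.3, 0.2])
isis = multiples * base + rng.normal(0, 0.5, 500)   # narrow peaks

# Each ISI should sit close to an integer multiple of the base interval.
nearest = np.round(isis / base)
assert np.all(np.abs(isis - nearest * base) < base / 2)
print(sorted({int(v) for v in nearest}))  # [1, 2, 3]
```

In the deterministic Wang model the "spread" around each multiple comes from the dynamics rather than noise, but the histogram test is the same.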
As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...
A geospatial modelling approach to predict seagrass habitat recovery under multiple stressor regimes
Restoration of estuarine seagrass habitats requires a clear understanding of the modes of action of multiple interacting stressors including nutrients, climate change, coastal land-use change, and habitat modification. We have developed and demonstrated a geospatial modeling a...
A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis
Kannan, Venkateshan; Kiani, Narsis A.; Piehl, Fredrik; Tegner, Jesper
2017-01-01
Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS) causing demyelination and neurodegeneration leading to accumulation of neurological disability. Here we present a minimal, computational model involving
Shared Authentic Leadership in Research Teams: Testing a Multiple Mediation Model
Günter, Hannes; Gardner, William L.; Davis McCauley, Kelly; Randolph-Seng, Brandon; P. Prahbu, Veena
2017-01-01
Research teams face complex leadership and coordination challenges. We propose shared authentic leadership (SAL) as a timely approach to addressing these challenges. Drawing from authentic and functional leadership theories, we posit a multiple mediation model that suggests three mechanisms whereby
An object-oriented approach to evaluating multiple spectral models
International Nuclear Information System (INIS)
Majoras, R.E.; Richardson, W.M.; Seymour, R.S.
1995-01-01
A versatile, spectroscopy analysis engine has been developed by using object-oriented design and analysis techniques coupled with an object-oriented language, C++. This engine provides the spectroscopist with the choice of several different peak shape models that are tailored to the type of spectroscopy being performed. It also allows ease of development in adapting the engine to other analytical methods requiring more complex peak fitting in the future. This results in a program that can currently be used across a wide range of spectroscopy applications and anticipates inclusion of future advances in the field. (author) 6 refs.; 1 fig
A combined statistical model for multiple motifs search
International Nuclear Information System (INIS)
Gao Lifeng; Liu Xin; Guan Shan
2008-01-01
Transcription factor binding sites (TFBS) play key roles in gene expression and regulation. They are short sequence segments with definite structure that can be correctly recognized by the corresponding transcription factors. From the viewpoint of statistics, candidate TFBS should be quite different from segments formed by randomly combined nucleotides. This paper proposes a combined statistical model for finding over-represented short sequence segments in different kinds of data sets. While the over-represented short sequence segment is described by a position weight matrix, the nucleotide distribution at most sites of the segment should be far from the background nucleotide distribution. The central idea of this approach is to search for such signals. The algorithm is tested on 3 data sets: the binding-site data set of the cyclic AMP receptor protein in E. coli; PlantProm DB, a non-redundant collection of proximal promoter sequences from different species; and a collection of the intergenic sequences of the whole E. coli genome. Even though the complexity of these three data sets is quite different, the results show that this model is rather general and sensible. (general)
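The core scoring idea, a position weight matrix whose columns differ from the background distribution, can be sketched with stdlib Python. The aligned sites below are invented for illustration, not the CRP data.

```python
import math

sites = ["TGTGA", "TGAGA", "TGTGA", "TTTGA"]   # aligned candidate sites
background = {b: 0.25 for b in "ACGT"}          # uniform background

length = len(sites[0])
pwm = []
for i in range(length):
    col = [s[i] for s in sites]
    # log-odds with a +1 pseudocount against the background distribution
    pwm.append({b: math.log((col.count(b) + 1) / (len(sites) + 4)
                            / background[b]) for b in "ACGT"})

def score(seq):
    """Sum of per-position log-odds: high when far from background."""
    return sum(pwm[i][b] for i, b in enumerate(seq))

print(score("TGTGA") > score("ACCCC"))  # True: consensus beats a random segment
```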
Verification of road databases using multiple road models
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that the proposed method supports the design of a highly reliable semi-automatic approach for road database verification.
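The fusion step can be illustrated with Dempster's rule on the three-class state space {correct, incorrect, unknown}, treating 'unknown' as the full frame of discernment. The module outputs below are made-up mass values, not the paper's results.

```python
def combine(m1, m2):
    """Dempster's rule for masses over {correct, incorrect, unknown=Theta}."""
    states = ("correct", "incorrect", "unknown")
    out = {s: 0.0 for s in states}
    conflict = 0.0
    for a in states:
        for b in states:
            joint = m1[a] * m2[b]
            if "unknown" in (a, b):        # intersection with Theta
                out[b if a == "unknown" else a] += joint
            elif a == b:
                out[a] += joint
            else:                          # correct vs incorrect: conflict
                conflict += joint
    k = 1.0 - conflict                     # normalise away the conflict mass
    return {s: v / k for s, v in out.items()}

module_a = {"correct": 0.6, "incorrect": 0.1, "unknown": 0.3}
module_b = {"correct": 0.5, "incorrect": 0.2, "unknown": 0.3}
fused = combine(module_a, module_b)
print(max(fused, key=fused.get))  # 'correct'
```

Note how two modules that are only moderately confident individually yield a sharper fused verdict, while residual mass on 'unknown' keeps the decision revisable.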
A model for AGN variability on multiple time-scales
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
The empirical content of models with multiple equilibria in economies with social interactions
Alberto Bisin; Andrea Moro; Giorgio Topa
2011-01-01
We study a general class of models with social interactions that might display multiple equilibria. We propose an estimation procedure for these models and evaluate its efficiency and computational feasibility relative to different approaches taken to the curse of dimensionality implied by the multiplicity. Using data on smoking among teenagers, we implement the proposed estimation procedure to understand how group interactions affect health-related choices. We find that interaction effects a...
A PARTIAL CREDIT SCORING MODEL FOR MULTIPLE TRUE-FALSE ITEMS IN PHYSICS
Wasis Wasis
2013-01-01
This study aims to produce a polytomous scoring model for responses to multiple true-false items, so that ability in physics can be estimated more accurately. The scoring model was developed using the Four-D model, and its accuracy was tested through empirical research and simulation. The empirical study used 15 multiple true-false items taken from the 1996-2006 UMPTN examinations, administered to 410 new students of FMIPA, Universitas Negeri Surabaya, 2007 cohort. The respons...
Multiple attribute decision making model and application to food safety risk evaluation.
Ma, Lihua; Chen, Hong; Yan, Huizhe; Yang, Lifeng; Wu, Lifeng
2017-01-01
Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors established an intuitionistic interval-valued fuzzy set evaluation model based on characteristics of the network relationships among decision makers, and validated it with a multiple attribute decision making case study. The proposed model thus provides a reliable, accurate method for multiple attribute decision making.
Testing effect of a drug using multiple nested models for the dose–response
DEFF Research Database (Denmark)
Baayen, C.; Hougaard, P.; Pipper, C. B.
2015-01-01
of the assumed dose–response model. Bretz et al. (2005, Biometrics 61, 738–748) suggested a combined approach, which selects one or more suitable models from a set of candidate models using a multiple comparison procedure. The method initially requires a priori estimates of any non-linear parameters...
A Model of Distraction using new Architectural Mechanisms to Manage Multiple Goals
Taatgen, Niels; Katidioti, Ioanna; Borst, Jelmer; van Vugt, Marieke; Taatgen, Niels; van Vugt, Marieke; Borst, Jelmer; Mehlhorn, Katja
2015-01-01
Cognitive models assume a one-to-one correspondence between tasks and goals. We argue that modeling a task by combining multiple goals has several advantages: a task can be constructed from components that are reused from other tasks, and it enables modeling thought processes that compete with or
Music genre classification via likelihood fusion from multiple feature models
Shiu, Yu; Kuo, C.-C. J.
2005-01-01
Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated into a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
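The two-stage idea can be sketched with per-feature Gaussian log-likelihoods summed before the final hard decision. The feature models below are invented single Gaussians, not trained GMM/HMMs, and the genres and values are placeholders.

```python
import math

genres = ("rock", "jazz")
models = {                       # (mean, std) per genre for each feature
    "timbre": {"rock": (0.8, 0.1), "jazz": (0.4, 0.1)},
    "rhythm": {"rock": (120.0, 10.0), "jazz": (90.0, 15.0)},
}

def log_lik(x, mu, sigma):
    """Log-density of a univariate Gaussian: one soft score per feature."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

song = {"timbre": 0.75, "rhythm": 115.0}

# Stage 2: fuse the soft per-feature scores by summing log-likelihoods,
# then take the arg-max as the hard genre decision.
fused = {g: sum(log_lik(song[f], *models[f][g]) for f in models) for g in genres}
print(max(fused, key=fused.get))  # 'rock'
```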
Electronic Commerce Success Model: A Search for Multiple Criteria
Directory of Open Access Journals (Sweden)
Didi Achjari
2004-01-01
Full Text Available The current study attempts to develop and examine a framework for e-commerce success. In order to obtain comprehensive and robust measures, the framework accommodates key factors identified in the literature concerning the success of electronic commerce. The structural model comprises four exogenous variables (Internal Driver, Internal Impediment, External Driver and External Impediment) and one endogenous variable (Electronic Commerce Success) with 24 observed variables. The study, administered within large Australian companies using a questionnaire survey, concluded that benefits for both the internal organization and external parties from the use of e-commerce were the main factor predicting perceived and/or expected success of electronic commerce.
Uncertainty and Preference Modelling for Multiple Criteria Vehicle Evaluation
Directory of Open Access Journals (Sweden)
Qiuping Yang
2010-12-01
Full Text Available A general framework for vehicle assessment is proposed based on both mass survey information and the evidential reasoning (ER) approach. Several methods for uncertainty and preference modeling are developed within the framework, including the measurement of uncertainty caused by missing information, the estimation of missing information in original surveys, the use of nonlinear functions for data mapping, and the use of nonlinear functions as utility functions to combine distributed assessments into a single index. The results of the investigation show that various measures can be used to represent the different preferences of decision makers towards the same feedback from respondents. Based on the ER approach, credible and informative analysis can be conducted through a complete understanding of the assessment problem in question and the full exploration of available information.
Application of multiple objective models to water resources planning and management
International Nuclear Information System (INIS)
North, R.M.
1993-01-01
Over the past 30 years, we have seen the birth and growth of multiple objective analysis from an idea without tools to one with useful applications. Models have been developed and applications have been researched to address the multiple purposes and objectives inherent in the development and management of water resources. A practical approach to multiple objective modelling incorporates macroeconomic-based policies and expectations in order to optimize the results from both engineering (structural) and management (non-structural) alternatives, while taking into account the economic and environmental trade-offs. (author). 27 refs, 4 figs, 3 tabs
Materials and nanosystems : interdisciplinary computational modeling at multiple scales
International Nuclear Information System (INIS)
Huber, S.E.
2014-01-01
Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of
A multiple shock model for common cause failures using discrete Markov chain
International Nuclear Information System (INIS)
Chung, Dae Wook; Kang, Chang Soon
1992-01-01
The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and rests on some irrational assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event in order to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters such as the common cause shock rate and the component failure probability given a shock, p, through data analysis
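The sequential-shock idea can be sketched in a few lines. This is an assumed structure, not the authors' exact formulation: the Markov state is the number of failed components in a group of n, and each common cause shock fails each surviving component independently with probability p, which gives a binomial transition matrix. Applying shocks in sequence evolves the failure probability distribution, as the abstract describes.

```python
from math import comb

def shock_matrix(n, p):
    """T[i][j] = P(j components failed after the shock | i failed before)."""
    T = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(i, n + 1):
            k = j - i  # newly failed components during this shock
            T[i][j] = comb(n - i, k) * p**k * (1 - p)**(n - i - k)
    return T

def apply_shocks(dist, shocks, n):
    """Evolve the failure-count distribution through sequential shocks,
    each with its own per-component failure probability."""
    for p in shocks:
        T = shock_matrix(n, p)
        dist = [sum(dist[i] * T[i][j] for i in range(n + 1))
                for j in range(n + 1)]
    return dist

n = 3
dist = [1.0, 0.0, 0.0, 0.0]               # all three components initially working
dist = apply_shocks(dist, [0.1, 0.3], n)  # two sequential common cause shocks
print(dist)  # final failure-count distribution; entries sum to 1
```

Because each shock is a separate Markov transition, individual common causes can be given different severities p, which is exactly what a single-shock model cannot do.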
Leptophobic Z' in models with multiple Higgs doublet fields
Chiang, Cheng-Wei; Nomura, Takaaki; Yagyu, Kei
2015-05-01
We study the collider phenomenology of the leptophobic Z' boson from an extra U(1)' gauge symmetry in models with N Higgs doublet fields. We assume that the Z' boson at tree level has (i) no Z-Z' mixing, (ii) no interaction with the charged leptons, and (iii) no flavour-changing neutral current. Under such a setup, it is shown that in the N = 1 case, all the U(1)' charges of left-handed quark doublets and right-handed up- and down-type quarks are required to be the same, while in the N ≥ 3 case one can take different charges for the three types of quarks. The N = 2 case is not well-defined under the above three requirements. We study the processes (V = γ, Z and W±) with the leptonic decays of Z and W± at the LHC. The most promising discovery channel or the most stringent constraint on the U(1)' gauge coupling constant comes from the Z'γ process below the threshold and from the process above the threshold. Assuming a collision energy of 8 TeV and an integrated luminosity of 19.6 fb-1, we find that the constraint from the Z'γ search in the lower mass regime can be stronger than that from the UA2 experiment. In the N ≥ 3 case, we consider four benchmark points for the Z' couplings with quarks. If such a Z' is discovered, a careful comparison between the Z'γ and Z'W signals is crucial to reveal the nature of the Z' couplings with quarks. We also present the discovery reach of the Z' boson at the 14-TeV LHC in both the N = 1 and N ≥ 3 cases.
Directory of Open Access Journals (Sweden)
Hayduk Leslie A
2012-10-01
Full Text Available Abstract Background Structural equation modeling developed as a statistical melding of path analysis and factor analysis that obscured a fundamental tension between a factor preference for multiple indicators and path modeling’s openness to fewer indicators. Discussion Multiple indicators hamper theory by unnecessarily restricting the number of modeled latents. Using the few best indicators – possibly even the single best indicator of each latent – encourages development of theoretically sophisticated models. Additional latent variables permit stronger statistical control of potential confounders, and encourage detailed investigation of mediating causal mechanisms. Summary We recommend the use of the few best indicators. One or two indicators are often sufficient, but three indicators may occasionally be helpful. More than three indicators are rarely warranted because additional redundant indicators provide less research benefit than single indicators of additional latent variables. Scales created from multiple indicators can introduce additional problems, and are prone to being less desirable than either single or multiple indicators.
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from the recently conducted GenIMS study.
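A deliberately simplified illustration of the multiple-imputation idea, not the authors' seminonparametric AFT procedure: a covariate is left-censored at a detection limit, censored values are multiply imputed from a normal model truncated above at the limit, a one-covariate AFT model is fitted as OLS on log survival time, and the slope estimates are pooled by averaging (the point-estimate part of Rubin's rules). All distributions and parameter values here are assumptions for the sketch; imputing from the marginal covariate model, ignoring the outcome, is known to attenuate the slope.

```python
import random
import statistics

random.seed(0)

def rtruncnorm_below(mu, sigma, upper):
    """Draw from N(mu, sigma^2) conditioned on being <= upper (rejection)."""
    while True:
        x = random.gauss(mu, sigma)
        if x <= upper:
            return x

def ols_slope(x, y):
    """Slope of the least-squares line of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

def mi_slope(x_obs, censored, logt, limit, mu, sigma, m=20):
    """Multiply impute left-censored covariate values, refit, pool by mean."""
    slopes = []
    for _ in range(m):
        x_imp = [rtruncnorm_below(mu, sigma, limit) if c else xi
                 for xi, c in zip(x_obs, censored)]
        slopes.append(ols_slope(x_imp, logt))
    return statistics.fmean(slopes)

# toy data: true AFT slope 0.5, detection limit at -0.5
x = [random.gauss(0, 1) for _ in range(200)]
logt = [0.5 * xi + random.gauss(0, 0.2) for xi in x]
limit = -0.5
censored = [xi < limit for xi in x]
x_obs = [limit if c else xi for xi, c in zip(x, censored)]
s = mi_slope(x_obs, censored, logt, limit, 0.0, 1.0)
print(round(s, 2))
```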
Learned helplessness, discouraged workers, and multiple unemployment equilibria in a search model
Bjørnstad, Roger
2001-01-01
Abstract: Unemployment varies strongly between countries with comparable economic structure. Some economists have tried to explain these differences by institutional differences in the labour market. Instead, this paper focuses on a model with multiple equilibria, so that the same socioeconomic structure can give rise to different levels of unemployment. Unemployed workers' search efficiency is modelled within an equilibrium search model and lies behind these results. In the model learned...
Optimization of the Darrieus wind turbines with double-multiple-streamtube model
International Nuclear Information System (INIS)
Paraschivoiu, I.
1985-01-01
This paper discusses a new improvement of the double-multiple-streamtube model by considering the streamtube expansion effects on the Darrieus wind turbine. These effects, allowing a more realistic modeling of the upwind/downwind flow field asymmetries inherent in the Darrieus rotor, were calculated by using the CARDAAX computer code. When dynamic stall is introduced in the double-multiple-streamtube model, the aerodynamic loads and performance show significant changes in the range of low tip-speed ratios
Dependence of Xmax and multiplicity of electron and muon on different high energy interaction models
Directory of Open Access Journals (Sweden)
G Rastegarzadeh
2010-06-01
Full Text Available Different high energy interaction models are applied in the CORSIKA code to simulate Extensive Air Showers (EAS) generated by Cosmic Rays (CR). In this work, the effects of the QGSJET01, QGSJETII, DPMJET and SIBYLL models on Xmax and on the multiplicity of secondary electrons and muons at the observation level are studied.
Modelling the Dynamics of Intracellular Processes as an Organisation of Multiple Agents
Bosse, T.; Jonker, C.M.; Treur, J.; Armano, G.; Merelli, E.; Denzinger, J.; Martin, A.; Miles, S.; Tianfield, H.; Unland, R.
2005-01-01
This paper explores how the dynamics of complex biological processes can be modeled as an organisation of multiple agents. This modelling perspective identifies organisational structure occurring in complex decentralised processes and handles complexity of the analysis of the dynamics by structuring
A Convex Variational Model for Restoring Blurred Images with Multiplicative Noise
DEFF Research Database (Denmark)
Dong, Yiqiu; Tieyong Zeng
2013-01-01
In this paper, a new variational model for restoring blurred images with multiplicative noise is proposed. Based on the statistical property of the noise, a quadratic penalty function technique is utilized in order to obtain a strictly convex model under a mild condition, which guarantees...
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
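The simple-slopes computation mentioned above is standard for an interaction model y = b0 + b1*x + b2*z + b3*x*z: the slope of y on x at moderator value z0 is w = b1 + b3*z0, with Var(w) = Var(b1) + z0^2 Var(b3) + 2 z0 Cov(b1, b3). The sketch below uses made-up coefficient values and (co)variances standing in for fitted-model output; the t-ratio's degrees of freedom come from the fitted model.

```python
import math

def simple_slope(b1, b3, var_b1, var_b3, cov_b13, z0):
    """Simple slope of y on x at moderator value z0, and its t-ratio."""
    w = b1 + b3 * z0
    se = math.sqrt(var_b1 + z0**2 * var_b3 + 2 * z0 * cov_b13)
    return w, w / se

# illustrative regression output, not from any real fit
w, t = simple_slope(b1=0.40, b3=0.25, var_b1=0.010, var_b3=0.004,
                    cov_b13=-0.002, z0=1.0)
print(round(w, 2), round(t, 2))  # 0.65 6.5
```

Sweeping z0 over a grid and recording where |t| crosses the critical value gives the regions of significance; adding and subtracting a critical multiple of the standard error traces the confidence bands.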
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
DEFF Research Database (Denmark)
Dalgaard, Jens; Pena, Jose; Kocka, Tomas
2004-01-01
We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...
Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.
Muraki, Eiji
1999-01-01
Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…
Validation of a multiple compartment model for the transport of cesium through animals
International Nuclear Information System (INIS)
Assimakopoulos, P.A.; Ioannides, K.G.; Pakou, A.A.
1991-01-01
A general multiple compartment model, which describes the transport of trace elements through animals, is presented. This model considers a system of K interconnected compartments of volume V_i, i = 1, 2, ..., K, each containing, at a given time t, N_i molecules of a trace substance. (5 figs.)
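The record gives only the compartment bookkeeping (K compartments holding N_i molecules), so the sketch below assumes generic first-order transfer kinetics between compartments, integrated with a small explicit Euler step; the transfer coefficients and compartment layout are illustrative, not the paper's.

```python
def step(N, k, dt):
    """One Euler step of a K-compartment model: k[i][j] is the first-order
    transfer rate from compartment i to compartment j."""
    K = len(N)
    dN = [0.0] * K
    for i in range(K):
        for j in range(K):
            if i != j:
                flow = k[i][j] * N[i] * dt  # molecules moving i -> j
                dN[i] -= flow
                dN[j] += flow
    return [n + d for n, d in zip(N, dN)]

# assumed 3-compartment layout: gut -> blood -> muscle, slow return to blood
k = [[0, 0.5, 0],
     [0, 0, 0.2],
     [0, 0.05, 0]]
N = [1000.0, 0.0, 0.0]  # all molecules start in the gut
for _ in range(1000):   # integrate to t = 10
    N = step(N, k, 0.01)
print([round(n, 1) for n in N])  # closed system: total stays at 1000
```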
Modelling and simulation of multiple single - phase induction motor in parallel connection
Directory of Open Access Journals (Sweden)
Sujitjorn, S.
2006-11-01
Full Text Available A mathematical model for parallel connected n-multiple single-phase induction motors in generalized state-space form is proposed in this paper. The motor group draws electric power from one inverter. The model is developed by the dq-frame theory and was tested against four loading scenarios in which satisfactory results were obtained.
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket analysis step for dimension reduction. This new step enables us to analyze complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
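The baseline the paper extends, a plain two-part model, can be sketched as a simulation: part one is a purchase decision (Bernoulli with probability p), and part two draws a positive expenditure (here lognormal, an assumed choice) only for purchasers. Parameter values are illustrative, not estimates from the MEPS data.

```python
import random
import statistics

random.seed(3)

def two_part_draw(p_buy, mu, sigma):
    """Part 1: buy or not.  Part 2: positive spend for buyers only."""
    if random.random() >= p_buy:
        return 0.0  # no purchase in this sector
    return random.lognormvariate(mu, sigma)

spend = [two_part_draw(p_buy=0.4, mu=5.0, sigma=1.0) for _ in range(50000)]
share_zero = sum(s == 0.0 for s in spend) / len(spend)
mean_spend = statistics.fmean(spend)
print(round(share_zero, 2), round(mean_spend))
```

The paper's extension replaces the independent per-sector purchase decisions with a basket-analysis step, so that joint purchase patterns across the 13 sectors are modeled rather than assumed away.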
Alternative approaches to reliability modeling of a multiple engineered barrier system
International Nuclear Information System (INIS)
Ananda, M.M.A.; Singh, A.K.
1994-01-01
The lifetime of the engineered barrier system used for containment of high-level radioactive waste will significantly impact the total performance of a geological repository facility. Currently, two types of designs are under consideration for an engineered barrier system: a single engineered barrier system and a multiple engineered barrier system. A multiple engineered barrier system consists of several metal barriers and the waste form (cladding). Some recent work shows that a significant improvement in performance can be achieved by utilizing multiple engineered barrier systems. Considering sequential failures for each barrier, we model the reliability of the multiple engineered barrier system. Weibull and exponential lifetime distributions are used throughout the analysis. Furthermore, the number of failed engineered barrier systems in a repository at a given time is modeled using a Poisson approximation
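Under the sequential-failure reading above, an inner barrier is exposed only after the one outside it fails, so the system lifetime is the sum of the individual barrier lifetimes. The Monte Carlo sketch below assumes this simplification, with illustrative Weibull parameters (shape = 1 reduces to the exponential case).

```python
import random

random.seed(1)

def system_lifetime(barriers):
    """barriers: list of (scale, shape) Weibull parameters, outermost first.
    Sequential failures: system lifetime is the sum of barrier lifetimes."""
    return sum(random.weibullvariate(scale, shape) for scale, shape in barriers)

def mean_lifetime(barriers, n=20000):
    return sum(system_lifetime(barriers) for _ in range(n)) / n

single = [(1000.0, 1.0)]  # one exponential barrier
multiple = [(1000.0, 1.0), (500.0, 2.0), (300.0, 1.5)]  # metals plus cladding
m_single = mean_lifetime(single)
m_multi = mean_lifetime(multiple)
print(round(m_single), round(m_multi))
```

The comparison makes the abstract's point concrete: adding barriers with even modest individual lifetimes extends the expected containment time of the system.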
A prediction method based on wavelet transform and multiple models fusion for chaotic time series
International Nuclear Information System (INIS)
Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha
2017-01-01
In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximation components and detail components are obtained. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components. At the same time, an improved free search algorithm is utilized for predictive model parameter optimization. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused results is smaller than that of any single model, and the prediction accuracy is improved. The simulation results are compared using two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
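The fusion step can be sketched as Gauss–Markov (inverse-variance, BLUE) weighting of the component predictions; this is an assumed reading of the paper's fusion rule. Stand-in numbers replace the LSSVM and ARIMA outputs: any two unbiased predictors with known error variances can be combined this way, and the fused variance is never larger than the smallest input variance, matching the abstract's claim.

```python
def gauss_markov_fuse(preds, variances):
    """BLUE combination: weights proportional to 1/variance."""
    w = [1.0 / v for v in variances]
    s = sum(w)
    fused = sum(wi * p for wi, p in zip(w, preds)) / s
    fused_var = 1.0 / s  # <= min(variances)
    return fused, fused_var

# stand-in predictions from two component models with known error variances
pred, var = gauss_markov_fuse([1.20, 1.40], [0.04, 0.01])
print(round(pred, 2), round(var, 3))  # 1.36 0.008
```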
International Nuclear Information System (INIS)
Kang, Li; Tang, Sanyi
2016-01-01
Highlights: • The discrete single species and multiple species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behavior have been investigated. • The reverse effects of random perturbation on discrete systems have been discussed and revealed. • The main results can be applied for pest control and resources management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, which have been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions? And are there different effects for single species and multiple species models under random perturbation? To address those interesting questions, we have proposed a discrete single species model with two stable equilibria and a host-parasitoid model with Holling type functional response functions to address how the random perturbation affects the dynamics. The main results indicate that the random perturbation does not change the number of blurred orbits of the single species model with two stable steady states compared with results for the classical Ricker model under the same random perturbation, but it can strengthen the stability. However, extensive numerical investigations show that the random perturbation does not influence the complexities of the host-parasitoid models compared with the results for the models without perturbation, while it does double the period of periodic orbits. All those confirm that the random perturbation has a reverse effect on the dynamics of the discrete single and multiple population models, which could be applied in reality, including pest control and resources management.
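A minimal sketch of a randomly perturbed Ricker map; the paper's exact two-equilibria variant is not specified in the abstract, so the classical Ricker form with a lognormal multiplicative shock is used as a stand-in: N_{t+1} = N_t exp(r (1 - N_t/K) + sigma*eps_t), eps_t ~ N(0,1). Parameter values are illustrative.

```python
import math
import random

random.seed(42)

def ricker_path(n0, r, K, sigma, steps):
    """Simulate the randomly perturbed Ricker map for `steps` generations."""
    path = [n0]
    for _ in range(steps):
        n = path[-1]
        shock = random.gauss(0.0, sigma)  # multiplicative lognormal noise
        path.append(n * math.exp(r * (1 - n / K) + shock))
    return path

path = ricker_path(n0=10.0, r=0.5, K=100.0, sigma=0.05, steps=500)
# with weak noise the population fluctuates around the carrying capacity K
m = sum(path[100:]) / len(path[100:])
print(round(m))
```

Raising r past the period-doubling thresholds, or increasing sigma, is where the interplay between noise and the map's bifurcation structure that the paper studies becomes visible.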
A P-value model for theoretical power analysis and its applications in multiple testing procedures
Directory of Open Access Journals (Sweden)
Fengqing Zhang
2016-10-01
Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
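A simplified two-step version of the idea, not the authors' exact construction and matching only the mean for brevity: under the alternative, the p-value density is approximated by a two-level step function with heights h1 on [0, tau] and h2 on (tau, 1], chosen to integrate to 1 and to match the mean of the true p-value distribution (estimated here by Monte Carlo for a one-sided z-test). Power at a level alpha <= tau is then simply h1 * alpha, with no further simulation.

```python
import math
import random

random.seed(7)

def pvalue_mean(delta, n=100000):
    """Monte Carlo mean of the one-sided z-test p-value when Z ~ N(delta, 1)."""
    def Phi(x):  # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return sum(1 - Phi(random.gauss(delta, 1)) for _ in range(n)) / n

def two_step_heights(mu, tau):
    """Solve h1*tau + h2*(1-tau) = 1 (total mass) and
    h1*tau^2/2 + h2*(1-tau^2)/2 = mu (mean) for (h1, h2)."""
    a, b = tau, 1 - tau
    c, d = tau**2 / 2, (1 - tau**2) / 2
    det = a * d - b * c
    h1 = (d * 1 - b * mu) / det
    h2 = (a * mu - c * 1) / det
    return h1, h2

mu = pvalue_mean(delta=2.0)
h1, h2 = two_step_heights(mu, tau=0.1)
alpha = 0.05
power = h1 * alpha  # valid for alpha <= tau
print(round(power, 2))
```

The paper's version also matches the variance, which requires more than two steps; the two-step sketch underestimates how concentrated p-values are near zero and is therefore conservative here.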
A versatile method for confirmatory evaluation of the effects of a covariate in multiple models
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans
2012-01-01
Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state of the art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.
Model for nucleus-nucleus, hadron-nucleus and hadron-proton multiplicity distributions
International Nuclear Information System (INIS)
Singh, C.P.; Shyam, M.; Tuli, S.K.
1986-07-01
A model relating hadron-proton, hadron-nucleus and nucleus-nucleus multiplicity distributions is proposed and some interesting consequences are derived. The values of the parameters are the same for all the processes and are given by the QCD hypothesis of "universal" hadronic multiplicities, which are found to be asymptotically independent of target and beam in hadronic and current-induced reactions in particle physics. (author)
Bereczkei, Tamas; Mesko, Norbert
2007-01-01
The Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated highly, regardless of whether their facial attractiveness was valued high or low. Women rated as most attractive were those whose f...
Kim, J.; Sonnenthal, E. L.; Rutqvist, J.
2011-12-01
Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of the existing robust flow and geomechanics simulators. We implemented the proposed model and numerical algorithm in the reaction transport simulator
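The upscaling rule stated in the abstract, that the gridblock's drained bulk modulus is the harmonic average of the continua's drained bulk moduli, takes a few lines to express (here weighted by volume fraction, an assumed generalization; the volume fractions and moduli are illustrative).

```python
def harmonic_bulk_modulus(fractions, moduli):
    """Volume-fraction-weighted harmonic average of drained bulk moduli."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(f / K for f, K in zip(fractions, moduli))

# three interacting continua: matrix, large fractures, microfractures (Pa)
K_up = harmonic_bulk_modulus([0.7, 0.2, 0.1], [10e9, 2e9, 0.5e9])
print(round(K_up / 1e9, 2))  # GPa; dominated by the softest continuum
```

The harmonic average reflects a series (iso-stress) arrangement: the compliant continuum controls the upscaled response, which is why the result sits far below the stiff matrix modulus.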
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
Jansen van Rensburg, Gerhardus J.; Kok, Schalk; Wilke, Daniel N.
2018-03-01
This paper presents the development and numerical implementation of a state variable based thermomechanical material model, intended for use within a fully implicit finite element formulation. Plastic hardening, thermal recovery and multiple cycles of recrystallisation can be tracked for single peak as well as multiple peak recrystallisation response. The numerical implementation of the state variable model extends on a J2 isotropic hypo-elastoplastic modelling framework. The complete numerical implementation is presented as an Abaqus UMAT and linked subroutines. Implementation is discussed with detailed explanation of the derivation and use of various sensitivities, internal state variable management and multiple recrystallisation cycle contributions. A flow chart explaining the proposed numerical implementation is provided as well as verification on the convergence of the material subroutine. The material model is characterised using two high temperature data sets for cobalt and copper. The results of finite element analyses using the material parameter values characterised on the copper data set are also presented.
Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus
2014-01-01
Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data...... events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads...... to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...
Optimized production planning model for a multi-plant cultivation system under uncertainty
Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng
2015-02-01
An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.
Bathellier, Brice; Tee, Sui Poh; Hrovat, Christina; Rumpel, Simon
2013-01-01
Learning speed can strongly differ across individuals. This is seen in humans and animals. Here, we measured learning speed in mice performing a discrimination task and developed a theoretical model based on the reinforcement learning framework to account for differences between individual mice. We found that, when using a multiplicative learning rule, the starting connectivity values of the model strongly determine the shape of learning curves. This is in contrast to current learning models ...
A PARTIAL CREDIT SCORING MODEL FOR MULTIPLE TRUE-FALSE ITEMS IN PHYSICS
Directory of Open Access Journals (Sweden)
Wasis Wasis
2013-01-01
Full Text Available This study aims to produce a polytomous scoring model for responses to multiple true-false items, so that abilities in physics can be estimated more accurately. The scoring model was developed using the Four-D model and its accuracy was tested through empirical and simulation studies. The empirical study used 15 multiple true-false items taken from the UMPTN tests of 1996-2006, administered to 410 new students of the Faculty of Mathematics and Natural Sciences, Universitas Negeri Surabaya, 2007 cohort. The test takers' responses were scored with three partial credit models (PCM I, II, and III) and dichotomously. The scoring results were analyzed with the Quest program to obtain estimates of item difficulty (δ) and examinee ability (θ), used to determine the test information function and the standard error of estimation. The simulation study used data generated from the empirical parameters (δ and θ) with the SAS statistical program, and the accuracy of the estimates was analyzed with the root mean squared error (RMSE) method. The results show: (i) PCM scoring with weighting estimates ability more accurately than scoring without weighting or dichotomous scoring; (ii) the more categories in partial credit scoring, the more accurate the estimation. Keywords: partial credit scoring model, multiple true-false items ____________________________________________________________ THE PARTIAL CREDIT SCORING MODEL FOR MULTIPLE TRUE-FALSE ITEMS IN PHYSICS Abstract This study is an attempt to overcome the weaknesses. This study aims to produce a polytomous scoring model for responses to multiple true-false items in order to get a more accurate estimation of abilities in physics. It adopts the Four-D model and its accuracy is assessed through empirical and simulation studies. The empirical study employed 15 multiple true-false items taken from the State University Entrance Test (UMPTN) of 1996–2006. It was administered to 410 new students enrolled
Reduction of bias in neutron multiplicity assay using a weighted point model
Energy Technology Data Exchange (ETDEWEB)
Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)
2004-01-01
Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions, such as uniform multiplication, in the point model used to derive the multiplicity equations cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the ({alpha},n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of {approx}2% (1 {sigma}) for cylindrical plutonium metal samples < 4 kg with {alpha} < 1 without knowing the exact shape of the samples, provided that the ({alpha},n) source is uniformly distributed throughout the
Multiple attribute decision making model and application to food safety risk evaluation.
Directory of Open Access Journals (Sweden)
Lihua Ma
Full Text Available Supermarket food purchase decisions are characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The author establishes an intuitionistic interval-valued fuzzy set evaluation model based on characteristics of the network relationship among decision makers, and validates it with a multiple attribute decision making case study. The proposed model thus provides a reliable, accurate method for multiple attribute decision making.
Multiplicity distributions in a thermodynamical model of hadron production in e+e- collisions
International Nuclear Information System (INIS)
Becattini, F.; Giovannini, A.; Lupia, S.
1996-01-01
Predictions of a thermodynamical model of hadron production for multiplicity distributions in e+e- annihilations at LEP and PEP-PETRA centre-of-mass energies are shown. The production process is described as a two-step process in which primary hadrons emitted from the thermal source decay into final observable particles. The final charged-track multiplicity distributions turn out to be of negative binomial type and are in quite good agreement with experimental observations. The average number of clans calculated from the fitted negative binomial coincides with the average number of primary hadrons predicted by the thermodynamical model, suggesting that clans should be identified with primary hadrons. (orig.)
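As an aside, the negative binomial form referred to above is easy to evaluate numerically. The sketch below is illustrative only (the parameter values n̄ = 21, k = 7 are not from the paper); it computes the NBD multiplicity distribution and the average clan number N̄ = k ln(1 + n̄/k) used in the clan picture:

```python
from math import lgamma, exp, log

def nbd_pmf(n, nbar, k):
    """Negative binomial multiplicity distribution with mean nbar and shape k:
    P(n) = Gamma(n+k) / (Gamma(n+1) Gamma(k)) * (nbar/k)^n / (1 + nbar/k)^(n+k)."""
    x = nbar / k
    logp = (lgamma(n + k) - lgamma(n + 1) - lgamma(k)
            + n * log(x) - (n + k) * log(1 + x))
    return exp(logp)

def average_clans(nbar, k):
    # Average number of clans in the clan decomposition of the NBD
    return k * log(1 + nbar / k)

# Evaluate the distribution for an illustrative mean multiplicity and shape
probs = [nbd_pmf(n, nbar=21.0, k=7.0) for n in range(400)]
clans = average_clans(21.0, 7.0)
```

The probabilities sum to one (the tail beyond n = 400 is negligible for these parameters), and `clans` gives the average number of primary-hadron clusters implied by the fitted NBD.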
A feedback control model for network flow with multiple pure time delays
Press, J.
1972-01-01
A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation leads to differential-difference equations, which cause the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of this criterion. An optimal decision-making process with multiple delays is presented.
Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints
Shahrooei, Abolfazl; Kazemi, Mohammad Hosein
2018-04-01
In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.
Kim, Paul Youngbin; Park, Irene J K
2009-07-01
Adapting the theory of reasoned action, the present study examined help-seeking beliefs, attitudes, and intent among Asian American college students (N = 110). A multiple mediation model was tested to see if the relation between Asian values and willingness to see a counselor was mediated by attitudes toward seeking professional psychological help and subjective norm. A bootstrapping procedure was used to test the multiple mediation model. Results indicated that subjective norm was the sole significant mediator of the effect of Asian values on willingness to see a counselor. The findings highlight the importance of social influences on help-seeking intent among Asian American college students.
International Nuclear Information System (INIS)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-01-01
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis
Kannan, Venkateshan
2017-03-29
Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS), causing demyelination and neurodegeneration that lead to accumulation of neurological disability. Here we present a minimal computational model involving the immune system and CNS that generates the principal subtypes of the disease observed in patients. The model captures several key features of MS, especially those that distinguish the chronic progressive phase from the relapsing-remitting one. In addition, a rare subtype of the disease, progressive relapsing MS, naturally emerges from the model. The model posits the existence of two key thresholds, one in the immune system and the other in the CNS, that separate dynamically distinct behaviours of the model. Exploring the two-dimensional space of these thresholds, we obtain multiple phases of disease evolution, and these show greater variation than the clinical classification of MS, thus capturing the heterogeneity that is manifested in patients.
Enhancing CIDOC-CRM and compatible models with the concept of multiple interpretation
Van Ruymbeke, M.; Hallot, P.; Billen, R.
2017-08-01
Modelling cultural heritage and archaeological objects is used as much for management as for research purposes. To ensure the sustainable benefit of digital data, models benefit from taking the data specificities of historical and archaeological domains into account. Starting from a conceptual model tailored to storing these specificities, we present, in this paper, an extended mapping to CIDOC-CRM and its compatible models. Offering an ideal framework to structure and highlight the best modelling practices, these ontologies are essentially dedicated to storing semantic data which provides information about cultural heritage objects. Based on this standard, our proposal focuses on multiple interpretation and sequential reality.
Jonrinaldi; Rahman, T.; Henmaidi; Wirdianto, E.; Zhang, D. Z.
2018-03-01
This paper proposes a mathematical model for multiple-item Economic Production and Order Quantity (EPQ/EOQ) considering continuous and discrete demand simultaneously in a system consisting of a vendor and multiple buyers. The model is used to investigate the optimal production lot size of the vendor and the number-of-shipments policy of orders to multiple buyers. It considers the multiple buyers' holding cost as well as transportation cost, and minimizes the total production and inventory costs of the system. The continuous demand from any other customers can be fulfilled anytime by the vendor, while the discrete demand from multiple buyers can be fulfilled using a multiple-delivery policy with a number of shipments of items in the production cycle time. A mathematical model is developed to illustrate the system based on the EPQ and EOQ models. Solution procedures are proposed to solve the model using Mixed Integer Non-Linear Programming (MINLP) and algorithmic methods. A numerical example is then provided to illustrate the system, and results are discussed.
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Tae-Hyoung Kim
2017-01-01
Full Text Available This paper studies the metaheuristic-optimizer-based direct identification of a multiple-mode system consisting of a finite set of linear regression representations of subsystems. To this end, the concept of a multiple-mode linear regression model is first introduced, and its identification issues are established. A method for reducing the identification problem for multiple-mode models to an optimization problem is also described in detail. Then, to overcome the difficulties that arise because the formulated optimization problem is inherently ill-conditioned and nonconvex, the cyclic-network-topology-based constrained particle swarm optimizer (CNT-CPSO) is introduced, and a concrete procedure for the CNT-CPSO-based identification methodology is developed. This scheme requires no prior knowledge of the mode transitions between subsystems and, unlike some conventional methods, can handle a large amount of data without difficulty during the identification process. This is one of the distinguishing features of the proposed method. The paper also considers an extension of the CNT-CPSO-based identification scheme that makes it possible to simultaneously obtain both the optimal parameters of the multiple submodels and a certain decision parameter involved in the mode transition criteria. Finally, an experimental setup using a DC motor system is established to demonstrate the practical usability of the proposed metaheuristic-optimizer-based identification scheme for developing a multiple-mode linear regression model.
Peeters, E.T.H.M.
2001-01-01
Organisms are always exposed to several simultaneously operating stressors in nature. It appears that the combined effects of multiple stressors cannot be understood as a simple product of their individual effects. To understand how multiple stressors affect the composition and functioning
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
A toy MCT model for multiple glass transitions: Double swallow tail singularity
Energy Technology Data Exchange (ETDEWEB)
Ryzhov, V.N. [Institute for High Pressure Physics, Russian Academy of Sciences, Troitsk 142190, Moscow region (Russian Federation); Moscow Institute of Physics and Technology, 141700 Moscow (Russian Federation); Tareyeva, E.E. [Institute for High Pressure Physics, Russian Academy of Sciences, Troitsk 142190, Moscow region (Russian Federation)
2014-11-07
We propose a toy model to describe multiple glass transitions in the framework of Mode Coupling Theory (MCT). The model is based on a postulated simple form for the static structure factor as a sum of two delta-functions. This form makes it possible to solve the MCT equations in an almost analytical way. The phase diagram is governed by two swallow tails resulting from two A{sub 4} singularities and includes a liquid–glass transition and multiple glasses. The diagram has much in common with those of binary and quasibinary systems. - Highlights: • A simple toy model is proposed for the description of glass–glass transitions. • The static structure factor of the model has the form of a sum of delta-functions. • The phase diagram contains A{sub 4} bifurcation singularities and A{sub 3} end points. • The results can be applied to the qualitative description of quasibinary systems.
Energy Technology Data Exchange (ETDEWEB)
Tikare, Veena [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez-Rivera, Efrain [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Madison, Jonathan D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Holm, Elizabeth Ann [Carnegie Mellon Univ., Pittsburgh, PA (United States); Patterson, Burton R. [Univ. of Florida, Gainesville, FL (United States). Dept. of Materials Science and Engineering; Homer, Eric R. [Brigham Young Univ., Provo, UT (United States). Dept. of Mechanical Engineering
2013-09-01
Most materials microstructural evolution processes progress with multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular, nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, radiation-induced defect formation and swelling, often with temperature gradients present. All these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements from Potts Monte Carlo, phase-field, and other models have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for simulation at the mesoscale of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
Slow walking model for children with multiple disabilities via an application of humanoid robot
Wang, ZeFeng; Peyrodie, Laurent; Cao, Hua; Agnani, Olivier; Watelain, Eric; Wang, HaoPing
2016-02-01
Walk training research with children having multiple disabilities is presented. Orthosis-aided walking for children with multiple disabilities such as cerebral palsy continues to be a clinical and technological challenge. In order to reduce pain and improve treatment strategies, an intermediate structure - the humanoid robot NAO - is proposed as an assay platform to study walk training models, to be transferred to future special exoskeletons for children. A suitable and stable walking model is proposed for walk training, and is simulated and tested on NAO. A comparative study of zero moment point (ZMP), support polygons, and energy consumption validates the model as more stable than the conventional NAO gait. According to the direction variation of the center of mass and the slopes of linear-regression knee/ankle angles, the Slow Walk model faithfully emulates the gait pattern of children.
DEFF Research Database (Denmark)
Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A
2018-01-01
Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...
Micro-macro multilevel latent class models with multiple discrete individual-level variables
Bennink, M.; Croon, M.A.; Kroon, B.; Vermunt, J.K.
2016-01-01
An existing micro-macro method for a single individual-level variable is extended to the multivariate situation by presenting two multilevel latent class models in which multiple discrete individual-level variables are used to explain a group-level outcome. As in the univariate case, the
Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation
Richter, Tobias; Maier, Johanna
2017-01-01
In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…
Compacton solutions and multiple compacton solutions for a continuum Toda lattice model
International Nuclear Information System (INIS)
Fan Xinghua; Tian Lixin
2006-01-01
Some special solutions of the Toda lattice model with a transversal degree of freedom are obtained. With the aid of Mathematica and the Wu elimination method, more explicit solitary wave solutions, including compacton solutions, multiple compacton solutions, peakon solutions, as well as periodic solutions, are found in this paper
Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David
2013-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…
ANALYSIS OF THE FINANCIAL PERFORMANCES OF THE FIRM, BY USING THE MULTIPLE REGRESSION MODEL
Directory of Open Access Journals (Sweden)
Constantin Anghelache
2011-11-01
Full Text Available The information obtained through simple linear regression is not always enough to characterize the evolution of an economic phenomenon and, furthermore, to identify its possible future evolution. To remedy these drawbacks, the special literature includes multiple regression models, in which the evolution of the dependent variable is defined as a function of two or more factorial variables.
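As a generic illustration of the idea (not taken from the article; the data below are synthetic), a multiple regression with two factorial variables can be fitted by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a dependent variable driven by two factorial variables
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 3.0 + 1.5 * x1 - 2.0 * x2 + rng.normal(scale=0.1, size=200)

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers [intercept, coefficient of x1, coefficient of x2] up to noise
```

With two explanatory variables the fitted coefficients separate the influence of each factor on the dependent variable, which a simple linear regression on either factor alone cannot do.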
Global and 3D Spatial Assessment of Neuroinflammation in Rodent Models of Multiple Sclerosis
DEFF Research Database (Denmark)
Gupta, Shashank; Utoft, Regine Egeholm; Hasseldam, Henrik
2013-01-01
Multiple Sclerosis (MS) is a progressive autoimmune inflammatory and demyelinating disease of the central nervous system (CNS). T cells play a key role in the progression of neuroinflammation in MS and also in the experimental autoimmune encephalomyelitis (EAE) animal models for the disease. A te...
A general framework for the evaluation of genetic association studies using multiple marginal models
DEFF Research Database (Denmark)
Kitsche, Andreas; Ritz, Christian; Hothorn, Ludwig A.
2016-01-01
OBJECTIVE: In this study, we present a simultaneous inference procedure as a unified analysis framework for genetic association studies. METHODS: The method is based on the formulation of multiple marginal models that reflect different modes of inheritance. The basic advantage of this methodology...
Estimating Ambiguity Preferences and Perceptions in Multiple Prior Models: Evidence from the Field
S.G. Dimmock (Stephen); R.R.P. Kouwenberg (Roy); O.S. Mitchell (Olivia); K. Peijnenburg (Kim)
2015-01-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of
Inference regarding multiple structural changes in linear models with endogenous regressors
Boldea, O.; Hall, A.R.; Han, S.
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
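For orientation, a plain single-rate Euler-Maruyama simulation of the SABR dynamics is the naive baseline that such time-stepping schemes improve upon. The sketch below is not the authors' method, and all parameter values are illustrative:

```python
import numpy as np

def sabr_euler_paths(f0, sigma0, alpha, beta, rho, T, n_steps, n_paths, seed=1):
    """Euler-Maruyama paths of the SABR model:
    dF = sigma * F^beta dW1,  dsigma = alpha * sigma dW2,  corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = np.full(n_paths, f0)
    s = np.full(n_paths, sigma0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        # Forward step, absorbed at zero; exact lognormal step for the volatility
        f = np.maximum(f + s * np.maximum(f, 0.0)**beta * np.sqrt(dt) * z1, 0.0)
        s = s * np.exp(-0.5 * alpha**2 * dt + alpha * np.sqrt(dt) * z2)
    return f

# Price an at-the-money call as the Monte Carlo average payoff (zero rates assumed)
paths = sabr_euler_paths(f0=1.0, sigma0=0.3, alpha=0.4, beta=0.7, rho=-0.3,
                         T=1.0, n_steps=50, n_paths=20000)
call = np.mean(np.maximum(paths - 1.0, 0.0))
```

The accuracy of such a scheme degrades as the step size grows, which is the motivation for the low-bias, large-time-step simulation techniques the paper discusses.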
Ling, Ru; Liu, Jiawang
2011-12-01
To construct prediction models for the health workforce and hospital beds in county hospitals of Hunan Province by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling using uniform questionnaires, and performed multiple linear regression analysis with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10 000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.
Clinical trials: odds ratios and multiple regression models--why and how to assess them
Sobh, Mohamad; Cleophas, Ton J.; Hadj-Chaib, Amel; Zwinderman, Aeilko H.
2008-01-01
Odds ratios (ORs), unlike chi-square tests, provide direct insight into the strength of the relationship between treatment modalities and treatment effects. Multiple regression models can reduce the data spread due to certain patient characteristics and thus improve the precision of the treatment
Eikonal multiple scattering model within the framework of Feynman's positron theory
International Nuclear Information System (INIS)
Tekou, A.
1986-07-01
The Bethe-Salpeter equation for nucleon-nucleon, nucleon-nucleus and nucleus-nucleus scattering is eikonalized. A multiple scattering series is obtained. Contributions of three-body interactions are included. The model presented below may be used to investigate atomic collisions. (author)
Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform
Xu, S.; Xue, W.; Lin, H.X.
2011-01-01
In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-data ratio and its performance is mainly bound by the memory bandwidth. We propose optimization of SpMV based on ELLPACK from
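To make the ELLPACK idea concrete, here is a minimal CPU-side sketch in NumPy (illustrative only, not the article's CUDA code; a real GPU kernel would assign one thread per row over the same padded value/column arrays):

```python
import numpy as np

def dense_to_ellpack(a):
    """Convert a dense matrix to ELLPACK storage: each row's nonzeros are
    packed left-justified into fixed-width value and column-index arrays,
    padded with zeros up to the maximum row length."""
    rows, _ = a.shape
    max_nnz = max(int((a[i] != 0).sum()) for i in range(rows))
    vals = np.zeros((rows, max_nnz))
    cols = np.zeros((rows, max_nnz), dtype=int)
    for i in range(rows):
        nz = np.nonzero(a[i])[0]
        vals[i, :len(nz)] = a[i, nz]
        cols[i, :len(nz)] = nz
    return vals, cols

def ellpack_spmv(vals, cols, x):
    # y_i = sum_j vals[i, j] * x[cols[i, j]]; padding zeros contribute nothing
    return (vals * x[cols]).sum(axis=1)

a = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [3.0, 4.0, 0.0]])
vals, cols = dense_to_ellpack(a)
y = ellpack_spmv(vals, cols, np.array([1.0, 2.0, 3.0]))
```

The fixed row width gives regular, coalesced memory access on GPUs, at the cost of wasted storage when row lengths vary widely, which is why ELLPACK-based SpMV is a common starting point for bandwidth-bound optimization.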
Li, Spencer D.
2011-01-01
Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
An explicit statistical model of learning lexical segmentation using multiple cues
Çöltekin, Çağrı; Nerbonne, John; Lenci, Alessandro; Padró, Muntsa; Poibeau, Thierry; Villavicencio, Aline
2014-01-01
This paper presents an unsupervised and incremental model of learning segmentation that combines multiple cues whose use by children and adults was attested by experimental studies. The cues we exploit in this study are predictability statistics, phonotactics, lexical stress and partial lexical
Modeling transport pricing with multiple stakeholders. Working paper : Methodology and a case study
Smits, E.
2012-01-01
Pricing measures (e.g., a kilometre charge or cordon toll) are used to improve the external effects of transportation (e.g., congestion or emissions). This working paper presents a planning model for pricing while taking the preferences and interactions of multiple stakeholders (e.g., governments or
Statistical models for quantifying diagnostic accuracy with multiple lesions per patient
Zwinderman, Aeilko H.; Glas, Afina S.; Bossuyt, Patrick M.; Florie, Jasper; Bipat, Shandra; Stoker, Jaap
2008-01-01
We propose random-effects models to summarize and quantify the accuracy of the diagnosis of multiple lesions on a single image without assuming independence between lesions. The number of false-positive lesions was assumed to be distributed as a Poisson mixture, and the proportion of true-positive
Novakovic, A.M.; Krekels, E.H.; Munafo, A.; Ueckert, S.; Karlsson, M.O.
2016-01-01
In this study, we report the development of the first item response theory (IRT) model within a pharmacometrics framework to characterize the disease progression in multiple sclerosis (MS), as measured by the Expanded Disability Status Scale (EDSS). Data were collected quarterly from a 96-week phase III
Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin
2012-01-01
In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…
Cost optimization in the (S-1,S) lost sales inventory model with multiple demand classes
Kranenburg, A.A.; Houtum, van G.J.J.A.N.
2007-01-01
For the (S-1,S) lost sales inventory model with multiple demand classes that have different lost sales penalty cost parameters, three accurate and efficient heuristic algorithms are presented that, at a given base stock level, aim to find optimal values for the critical levels, i.e., values that
Yude Pan; Richard Birdsey; John Hom; Kevin McCullough
2007-01-01
We used our GIS variant of the PnET-CN model to investigate changes of forest carbon stocks and fluxes in Mid-Atlantic temperate forests over the last century (1900-2000). Forests in this region are affected by multiple environmental changes including climate, atmospheric CO2 concentration, N deposition and tropospheric ozone, and extensive land disturbances. Our...
Analysis on trust influencing factors and trust model from multiple perspectives of online Auction
Yu, Wang
2017-10-01
Current reputation models largely neglect online auction trading, so they cannot fully reflect the reputation status of users and may raise operability problems. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established, aimed at overcoming the inefficiency of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust-degree evaluation factors of three types of participants according to the different participation modes of online auctioneers, so as to improve the accuracy, effectiveness and robustness of the computed trust degree. Experiments test the efficiency and performance of the model under different proportions of malicious users, in environments resembling eBay and the Sporas model. The experimental results show that the proposed model remedies deficiencies of existing models and offers better feasibility.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
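The rejection step described above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' implementation: it assumes the posterior null probabilities have already been computed, and rejects the hypotheses with the smallest ones while the estimated Bayesian FDR stays at or below a chosen level.

```python
def bayesian_fdr_reject(post_null, alpha=0.10):
    """Greedily reject the hypotheses with the smallest posterior null
    probabilities while the estimated Bayesian FDR (the mean posterior
    null probability over the rejected set) stays at or below alpha."""
    order = sorted(range(len(post_null)), key=lambda i: post_null[i])
    rejected, cum = [], 0.0
    for k, i in enumerate(order, start=1):
        cum += post_null[i]
        if cum / k <= alpha:   # Bayesian FDR estimate of the current set
            rejected.append(i)
        else:
            break
    return rejected

# Posterior null probabilities for four hypotheses (illustrative values)
print(bayesian_fdr_reject([0.01, 0.90, 0.02, 0.50], alpha=0.10))  # -> [0, 2]
```

Adding the third-smallest hypothesis (posterior null probability 0.50) would push the average above 0.10, so only the first two are rejected.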
Directory of Open Access Journals (Sweden)
Eduard Dyachuk
2015-02-01
The complex unsteady aerodynamics of vertical axis wind turbines (VAWTs) poses significant challenges to simulation tools. Dynamic stall is one of the phenomena associated with the unsteady conditions of VAWTs, and it is the focus of this study. Two dynamic stall models are compared: the widely used Gormont model and a Leishman–Beddoes-type model. The models are included in a double multiple streamtube model. The effects of flow curvature and flow expansion are also considered. The model results are assessed against measured data on a Darrieus turbine with curved blades. To study dynamic stall effects, force coefficients from the simulations and experiments are compared at low tip speed ratios. Simulations show that the Leishman–Beddoes model outperforms the Gormont model for all tested conditions.
Double-multiple streamtube model for studying vertical-axis wind turbines
Paraschivoiu, Ion
1988-08-01
This work describes the present state of the art of the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used to generate a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.
International Nuclear Information System (INIS)
Gridneva, S.A.; Rus'kin, V.I.
1980-01-01
Basic features of the statistical model of multiple hadron production, based on the microcanonical distribution and taking into account conservation of total angular momentum, isotopic spin, P-, G- and C-parity, and Bose-Einstein statistics requirements, are given. The model predictions are compared with experimental data on antinucleon-nucleon annihilation at rest and e+e− annihilation into hadrons at total annihilation energies from 2 to 3 GeV.
INCORPORATING MULTIPLE OBJECTIVES IN PLANNING MODELS OF LOW-RESOURCE FARMERS
Flinn, John C.; Jayasuriya, Sisira; Knight, C. Gregory
1980-01-01
Linear goal programming provides a means of formally incorporating the multiple goals of a household into the analysis of farming systems. Using this approach, the set of plans which come as close as possible to achieving a set of desired goals under conditions of land and cash scarcity are derived for a Filipino tenant farmer. A challenge in making LGP models empirically operational is the accurate definition of the goals of the farm household being modelled.
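Linear goal programming minimizes weighted deviations from the stated goals subject to resource constraints. The sketch below illustrates that idea with invented crop data, goals and weights; for clarity it enumerates a tiny integer plan space instead of calling an LP solver, which a real LGP model would use.

```python
# Hypothetical two-crop farm: (cash cost, income, food tonnes) per hectare
CROPS = {"rice": (30, 50, 1.0), "veg": (20, 80, 0.0)}
LAND, CASH = 3, 100                      # resource limits (ha, cash units)
GOALS = {"income": 160.0, "food": 2.0}   # household targets
WEIGHTS = {"income": 1.0, "food": 50.0}  # food security weighted heavily

def weighted_deviation(plan):
    """Sum of weighted underachievement of each goal for a plan
    {crop: hectares}; infeasible plans return infinity."""
    cash = sum(CROPS[c][0] * h for c, h in plan.items())
    if sum(plan.values()) > LAND or cash > CASH:
        return float("inf")
    income = sum(CROPS[c][1] * h for c, h in plan.items())
    food = sum(CROPS[c][2] * h for c, h in plan.items())
    return (WEIGHTS["income"] * max(0.0, GOALS["income"] - income)
            + WEIGHTS["food"] * max(0.0, GOALS["food"] - food))

# Enumerate integer-hectare plans and keep the one closest to the goals.
plans = [{"rice": r, "veg": v} for r in range(LAND + 1)
         for v in range(LAND + 1)]
best = min(plans, key=weighted_deviation)
print(best, weighted_deviation(best))  # -> {'rice': 2, 'veg': 1} 0.0
```

Here two hectares of rice plus one of vegetables meets both goals exactly; pure income maximization would instead plant only vegetables and miss the food goal.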
Multiple-event probability in general-relativistic quantum mechanics. II. A discrete model
International Nuclear Information System (INIS)
Mondragon, Mauricio; Perez, Alejandro; Rovelli, Carlo
2007-01-01
We introduce a simple quantum mechanical model in which time and space are discrete and periodic. These features avoid the complications related to continuous-spectrum operators and infinite-norm states. The model provides a tool for discussing the probabilistic interpretation of generally covariant quantum systems, without the confusion generated by spurious infinities. We use the model to illustrate the formalism of general-relativistic quantum mechanics, and to test the definition of multiple-event probability introduced in a companion paper [Phys. Rev. D 75, 084033 (2007)]. We consider a version of the model with unitary time evolution and a version without unitary time evolution
International Nuclear Information System (INIS)
Sibatov, R. T.; Morozova, E. V.
2015-01-01
A model of dispersive transport in disordered nanostructured semiconductors has been proposed taking into account the percolation structure of a sample and joint action of several mechanisms. Topological and energy disorders have been simultaneously taken into account within the multiple trapping model on a comb structure modeling the percolation character of trajectories. The joint action of several mechanisms has been described within random walks with a mixture of waiting time distributions. Integral transport equations with fractional derivatives have been obtained for an arbitrary density of localized states. The kinetics of the transient current has been calculated within the proposed new model in order to analyze time-of-flight experiments for nanostructured semiconductors
Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao
2018-05-01
Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship among the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in the eastern area of Hainan, China. Based on the results of multiple flood vulnerability, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment evaluates vulnerability more completely and gives decision makers more comprehensive information for decision making. In summary, this study provides a new way to assess flood vulnerability and support disaster prevention decisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the case of roads served by multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm then adjusts bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from a road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction, and that it has the best prediction accuracy among the five models considered in the study. PMID: 26294903
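The two-stage idea (offline baseline, online Kalman correction) can be sketched with a one-dimensional filter. The baseline list below stands in for the trained SVM output, and the noise variances q and r are illustrative assumptions, not values from the paper.

```python
def kalman_adjust(baseline, observed, q=4.0, r=9.0):
    """One-dimensional Kalman filter that pulls a precomputed baseline
    prediction (stand-in for the trained SVM output) toward the latest
    observed link travel times. q and r are illustrative noise variances."""
    x, p = baseline[0], r                    # initial state and variance
    out = []
    for b, z in zip(baseline, observed):
        x, p = 0.5 * x + 0.5 * b, p + q      # predict: drift toward baseline
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # update with observation z
        out.append(x)
    return out

# Baseline says 10 min, live buses keep reporting 14 min on the link
est = kalman_adjust([10.0] * 5, [14.0] * 5)
```

The estimates settle between the stale baseline and the live observations, weighted by the two variances, which is the qualitative behavior the dynamic algorithm relies on.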
International Nuclear Information System (INIS)
Freiesleben, Trine; Sohbati, Reza; Murray, Andrew; Jain, Mayank; Al Khasawneh, Sahar; Hvidt, Søren; Jakobsen, Bo
2015-01-01
Interest in the optically stimulated luminescence (OSL) dating of rock surfaces has increased significantly over the last few years, as the potential of the method has been explored. It has been realized that luminescence-depth profiles show qualitative evidence for multiple daylight exposure and burial events. To quantify both burial and exposure events a new mathematical model is developed by expanding the existing models of evolution of luminescence–depth profiles, to include repeated sequential events of burial and exposure to daylight. This new model is applied to an infrared stimulated luminescence-depth profile from a feldspar-rich granite cobble from an archaeological site near Aarhus, Denmark. This profile shows qualitative evidence for multiple daylight exposure and burial events; these are quantified using the model developed here. By determining the burial ages from the surface layer of the cobble and by fitting the new model to the luminescence profile, it is concluded that the cobble was well bleached before burial. This indicates that the OSL burial age is likely to be reliable. In addition, a recent known exposure event provides an approximate calibration for older daylight exposure events. This study confirms the suggestion that rock surfaces contain a record of exposure and burial history, and that these events can be quantified. The burial age of rock surfaces can thus be dated with confidence, based on a knowledge of their pre-burial light exposure; it may also be possible to determine the length of a fossil exposure, using a known natural light exposure as calibration. - Highlights: • Evidence for multiple exposure and burial events in the history of a single cobble. • OSL rock surface dating model improved to include multiple burial/exposure cycles. • Application of the new model quantifies burial and exposure events.
DEFF Research Database (Denmark)
Brodin, Nils Patrik; Vogelius, Ivan R.; Bjørk-Eriksson, Thomas
2013-01-01
As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published...
Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.
2009-05-01
Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
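Two ingredients of the BJP approach can be sketched directly: the Box-Cox transform and a conditional-normal forecast in the transformed space. The function names and every parameter value below are illustrative assumptions; the actual model is multivariate and fitted by Markov chain Monte Carlo.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform; reduces to the log transform when lam == 0."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def boxcox_inv(z, lam):
    """Inverse Box-Cox transform back to the original (flow) scale."""
    return math.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

def conditional_forecast(x, mx, my, sx, sy, rho, lam):
    """Conditional mean of the transformed streamflow given a predictor
    value x (original scale), under a bivariate normal model in the
    transformed space, mapped back to the original scale."""
    zmean = my + rho * sy / sx * (boxcox(x, lam) - mx)
    return boxcox_inv(zmean, lam)

# Round trip: the inverse transform recovers the original flow value
assert abs(boxcox_inv(boxcox(25.0, 0.3), 0.3) - 25.0) < 1e-9
```

With rho = 0 the predictor carries no information and the forecast collapses to the back-transformed marginal mean, as expected.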
Scaling of multiplicity distribution in hadron collisions and diffractive-excitation like models
International Nuclear Information System (INIS)
Buras, A.J.; Dethlefsen, J.M.; Koba, Z.
1974-01-01
Multiplicity distribution of secondary particles in inelastic hadron collision at high energy is studied in the semiclassical impact parameter representation. The scaling function is shown to consist of two factors: one geometrical and the other dynamical. We propose a specific choice of these factors, which describe satisfactorily the elastic scattering, the ratio of elastic to total cross-section and the simple scaling behaviour of multiplicity distribution in p-p collisions. Two versions of diffractive-excitation like models (global and local excitation) are presented as interpretation of our choice of dynamical factor. (author)
Multiple-collision model for pion production in relativistic nucleus-nucleus collisions
International Nuclear Information System (INIS)
Vary, J.P.
1978-01-01
A simple model for pion production in relativistic heavy-ion collisions is developed based on nucleon-nucleon data, nuclear density distribution, and the assumption of straight-line trajectories. Multiplicity distributions for total pion production and for negative-pion production are predicted for 40Ar incident on a Pb3O4 target at 1.8 GeV/nucleon. Production through intermediate baryon resonances reduces the high-multiplicity region but insufficiently to yield agreement with data. This implies the need for a coherent production mechanism
Modeling and optimization of a utility system containing multiple extractions steam turbines
International Nuclear Information System (INIS)
Luo, Xianglong; Zhang, Bingjian; Chen, Ying; Mo, Songping
2011-01-01
Complex turbines with multiple controlled and/or uncontrolled extractions are widely used in the process industry and in cogeneration plants to provide steam at different levels, electric power, and driving power. To characterize their thermodynamic behavior under varying conditions, nonlinear mathematical models are developed based on energy balances, thermodynamic principles, and semi-empirical equations. First, the complex turbine is decomposed at the controlled extraction stages into several simple turbines modeled in series. The turbine hardware model (THM) concept is applied to predict the isentropic efficiency of the decomposed simple turbines, and Stodola's formulation is used to simulate the uncontrolled extraction steam parameters. The thermodynamic properties of steam and water are regressed through linearization or piecewise linearization. Second, the simulated results of the proposed model are compared with the data in the working condition diagram provided by the manufacturer over a wide range of operations. The simulation results deviate little from the diagram data: the maximum modeling error is 0.87% over the seven compared operating conditions. Last, an optimization model of a utility system containing multiple extraction turbines is established and a detailed case is analyzed. Compared with the conventional operation strategy, a maximum of 5.47% of the total operation cost is saved using the proposed optimization model. -- Highlights: → We develop a complete simulation model for steam turbines with multiple extractions. → We test the simulation model using the performance data of commercial turbines. → The simulation error of electric power generation is no more than 0.87%. → We establish a utility system operational optimization model. → The optimal operation scheme yields a 5.47% cost saving.
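Stodola's formulation, which the abstract reports is used for the uncontrolled extractions, can be illustrated with the classic ellipse law. In this sketch the coefficient is calibrated at one hypothetical design point; all numbers are invented.

```python
import math

def stodola_constant(m_design, p_in_design, p_out_design):
    """Calibrate the Stodola coefficient k from one design point of a
    turbine section, where m_dot = k * sqrt(p_in**2 - p_out**2)."""
    return m_design / math.sqrt(p_in_design ** 2 - p_out_design ** 2)

def mass_flow(k, p_in, p_out):
    """Off-design mass flow predicted by the ellipse law."""
    return k * math.sqrt(p_in ** 2 - p_out ** 2)

# Illustrative section: 100 kg/s at 90 bar inlet, 30 bar extraction pressure
k = stodola_constant(100.0, 90.0, 30.0)
print(round(mass_flow(k, 80.0, 30.0), 2))  # -> 87.4
```

The law reproduces the design point exactly and gives the characteristic drop in swallowing capacity as inlet pressure falls, which is what makes it useful for off-design simulation.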
An improved null model for assessing the net effects of multiple stressors on communities.
Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D
2018-01-01
Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our
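The distinction the abstract draws can be reproduced in a toy example with invented species data: when stressor effects are additive at the species level but one species is driven to zero biomass, a compositional null model recovers the correct community biomass while a community-property additive null model does not.

```python
def community_total(per_species):
    """Aggregate species biomasses, flooring each at zero (no negative
    biomass, i.e., local extinction)."""
    return sum(max(0.0, b) for b in per_species)

# Invented control biomasses and additive per-species stressor effects
control = [1.0, 3.0, 2.0]
eff_a = [-0.8, 0.0, -0.5]    # effect of stressor A on each species
eff_b = [-0.8, -0.5, 0.0]    # effect of stressor B on each species

# Compositional null model: add effects species by species, then aggregate
compositional = community_total(
    c + a + b for c, a, b in zip(control, eff_a, eff_b))

# Community-property additive null model: combine only community totals
tot_c = community_total(control)
tot_a = community_total(c + a for c, a in zip(control, eff_a))
tot_b = community_total(c + b for c, b in zip(control, eff_b))
additive = tot_c + (tot_a - tot_c) + (tot_b - tot_c)

print(compositional, round(additive, 2))  # -> 4.0 3.4
```

Species 1 survives each stressor alone but goes extinct under both together; the compositional prediction (4.0) matches the true purely additive outcome, while the community-property null (3.4) would wrongly flag an antagonistic "surprise".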
Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine
Berg, D. E.
Double-multiple streamtube codes model the curved-blade (Darrieus-type) vertical axis wind turbine (VAWT) as a tandem arrangement of two actuator disks (one for each half of the rotor) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double multiple streamtube code and a vortex filament code. The effects of neglecting dynamic stall and the horizontal wind velocity distribution are also illustrated.
Abidi, Yassine; Bellassoued, Mourad; Mahjoub, Moncef; Zemzemi, Nejib
2018-03-01
In this paper, we consider the inverse problem of space dependent multiple ionic parameters identification in cardiac electrophysiology modelling from a set of observations. We use the monodomain system known as a state-of-the-art model in cardiac electrophysiology and we consider a general Hodgkin-Huxley formalism to describe the ionic exchanges at the microscopic level. This formalism covers many physiological transmembrane potential models including those in cardiac electrophysiology. Our main result is the proof of the uniqueness and a Lipschitz stability estimate of ion channels conductance parameters based on some observations on an arbitrary subdomain. The key idea is a Carleman estimate for a parabolic operator with multiple coefficients and an ordinary differential equation system.
Multiple sequential failure model: A probabilistic approach to quantifying human error dependency
International Nuclear Information System (INIS)
Samanta
1985-01-01
This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview is given of the Multiple Sequential Failure (MSF) model and of its use in probabilistic risk assessments (PRAs) depending on the available data. A small-scale psychological experiment was conducted on the nature of human dependency, and interpretation of the experimental data with the MSF model shows remarkable accommodation of the dependent-failure data. The model, which provides a unique method for quantifying dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for the human reliability aspect of PRAs
Directory of Open Access Journals (Sweden)
Fábio Engel de Camargo
2012-11-01
In this work, the Verhulst model and the Perron-Frobenius theorem are applied to the power control problem, which is a concern in multiple access communication networks due to multiple access interference. This paper addresses the performance versus complexity tradeoff of both power control algorithms (PCAs) and highlights the computational cost aspects of implementing distributed PCA (DPCA) versions of both algorithms. As a proof of concept, the DPCA implementation is carried out on a commercial floating-point DSP platform. Numerical results in terms of DSP cycles and computational time indicate (a) the feasibility of implementing the Verhulst-based PCA in 2G and 3G cellular systems and (b) a high computational cost for the Perron-Frobenius-based PCA.
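A Verhulst-type distributed power control iteration can be sketched on a two-link toy channel. The gains, noise power, SINR targets and convergence factor below are invented; the key property illustrated is that each link updates using only its own measured SINR, which is what makes the algorithm distributable.

```python
# Two-link toy channel: GAIN[i][j] is the gain from transmitter j to
# receiver i; all numbers are illustrative.
GAIN = [[1.0, 0.1], [0.1, 1.0]]
NOISE = 0.01
TARGET = [2.0, 2.0]   # per-link SINR targets
ALPHA = 0.5           # convergence factor

def sinr(p, i):
    """Signal-to-interference-plus-noise ratio of link i at powers p."""
    interference = sum(GAIN[i][j] * p[j] for j in range(len(p)) if j != i)
    return GAIN[i][i] * p[i] / (NOISE + interference)

p = [0.01, 0.01]
for _ in range(300):
    # Verhulst-type update: raise power while below target, lower above it
    p = [p[i] * (1.0 + ALPHA * (1.0 - sinr(p, i) / TARGET[i]))
         for i in range(2)]

print([round(sinr(p, i), 3) for i in range(2)])  # -> [2.0, 2.0]
```

Since the targets are feasible for this channel (the interference coupling is weak), both links converge to their SINR targets at the minimal power fixed point.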
Linear systems with unstructured multiplicative uncertainty: Modeling and robust stability analysis.
Directory of Open Access Journals (Sweden)
Radek Matušů
This article deals with continuous-time Linear Time-Invariant (LTI) Single-Input Single-Output (SISO) systems affected by unstructured multiplicative uncertainty. More specifically, its aim is to present an approach to the construction of uncertain models based on the appropriate selection of a nominal system and a weight function, and to apply the fundamentals of robust stability analysis to this class of systems. The initial theoretical parts are followed by three extensive illustrative examples in which first-order time-delay, second-order and third-order plants with parametric uncertainty are modeled as systems with unstructured multiplicative uncertainty; subsequently, the robust stability of selected feedback loops containing the constructed models and chosen controllers is analyzed and the results are discussed.
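For unstructured multiplicative uncertainty, the standard robust stability test is the small-gain condition sup_w |W(jw)T(jw)| < 1, where T is the complementary sensitivity of the nominal loop and W the uncertainty weight. A minimal numeric check, with an invented first-order plant, proportional controller and weight (not the article's examples), looks like this:

```python
# Nominal loop: plant G(s) = 1/(s+1) with proportional controller C = 2,
# multiplicative weight W(s) = s/(s+2); all choices are illustrative.
def T(s):
    """Complementary sensitivity T = GC / (1 + GC) = 2 / (s + 3)."""
    return 2.0 / (s + 3.0)

def W(s):
    """Uncertainty weight: small at low frequency, approaches 1 at high."""
    return s / (s + 2.0)

# Robust stability requires sup over frequency of |W(jw) T(jw)| < 1;
# approximate the supremum on a dense frequency grid.
peak = max(abs(W(1j * w) * T(1j * w))
           for w in (i * 0.001 for i in range(1, 20000)))
print(round(peak, 3))  # -> 0.4
```

The peak of 0.4 (attained near w = sqrt(6) rad/s) is well below 1, so this particular loop tolerates the modeled multiplicative uncertainty with margin.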
Directory of Open Access Journals (Sweden)
Yoonsu Shin
2016-01-01
In the 5G era, the operational cost of mobile wireless networks will increase significantly. Massive network capacity and near-zero latency will be needed because everything will be connected to mobile networks. Self-organizing networks (SONs), which automate the operation of mobile wireless networks, are therefore needed, but they face challenges in satisfying the 5G requirements. Researchers have consequently proposed a framework to empower SON using big data. A recent framework for a big data-empowered SON analyzes the relationship between key performance indicators (KPIs) and related network parameters (NPs) using machine-learning tools, and develops regression models using a Gaussian process with those parameters. The problem, however, is that the methods for finding the NPs related to each KPI differ individually. Moreover, a Gaussian process regression model cannot determine the relationship between a KPI and its various related NPs. In this paper, to solve these problems, we propose multivariate multiple regression models to determine the relationships between various KPIs and NPs. If we regard one KPI and multiple NPs as one set, the proposed models let us process multiple sets at one time. We can also find out whether some KPIs conflict with each other. We implement the proposed models using MapReduce.
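A multivariate multiple regression of this kind (each KPI regressed on a shared set of NPs) reduces to ordinary least squares applied column-wise. The sketch below solves the normal equations with a small Gaussian-elimination routine; the NP/KPI data are synthetic and exactly linear, so the recovered coefficients are the generating ones. None of this reflects the paper's MapReduce implementation.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def fit(X, Y):
    """Least-squares fit of every KPI column in Y on the NP columns in X
    (an intercept is added); returns one coefficient vector per KPI."""
    Z = [[1.0] + row for row in X]                      # design matrix
    d = len(Z[0])
    G = [[sum(a[i] * a[j] for a in Z) for j in range(d)]
         for i in range(d)]                             # Z'Z
    return [solve(G, [sum(z[i] * y[k] for z, y in zip(Z, Y))
                      for i in range(d)])               # Z'y per KPI
            for k in range(len(Y[0]))]

# Hypothetical NPs and two KPIs generated as exact linear functions:
# KPI1 = 1 + 2*np1 - np2, KPI2 = -0.5 + np2
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]]
Y = [[1 + 2 * a - b, -0.5 + b] for a, b in X]
B = fit(X, Y)
print([[round(c, 6) for c in coef] for coef in B])
```

Both KPI coefficient vectors come from one shared normal-equation matrix, which is the "process multiple sets at one time" economy the abstract points to.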
A multiple objective mixed integer linear programming model for power generation expansion planning
Energy Technology Data Exchange (ETDEWEB)
Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)
2004-03-01
Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature of, and the trade-offs among, the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values, which usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted for in the model, including lost revenues, it is possible to evaluate the rate impact in order to further inform the decision process (Author)
International Nuclear Information System (INIS)
Chang, Qiang; Zuo, Li
2013-01-01
Spatial gradients of surrounding chemoattractants are the key factors determining the directionality of eukaryotic cell movement. Thus, it is important for cells to accurately measure the spatial gradients of surrounding chemoattractants. Here, we study the precision of sensing the spatial gradients of multiple chemoattractants using cooperative receptor clusters. Cooperative receptors on cells are modeled as an Ising chain of Monod–Wyman–Changeux clusters subject to multiple chemical-gradient fields, in order to study the physical limits of sensing the spatial gradients of multiple chemoattractants. We found that eukaryotic cells cannot sense each chemoattractant gradient individually. Instead, cells can only sense a weighted sum of the surrounding chemical gradients. Moreover, the precision of sensing one chemical gradient is significantly affected by coexisting chemoattractant concentrations. These findings provide further insight into the role of chemoattractants in the immune response and may help develop novel treatments for inflammatory diseases. (paper)
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
An economic production model for time dependent demand with rework and multiple production setups
Directory of Open Access Journals (Sweden)
S.R. Singh
2014-04-01
In this paper, we present a model for time-dependent demand with multiple production setups and a rework setup. Production is demand dependent and greater than the demand rate. The production facility produces items in m production setups and one rework setup (an (m, 1) policy). Rework is a major element of reverse logistics and green supply chains, since it reduces production cost and ecological harm. Most researchers have developed rework models without deteriorating items. A numerical example and sensitivity analysis are presented to illustrate the model.
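The trade-off behind choosing the number of production setups m can be illustrated with a generic per-cycle cost function, in which setup cost grows linearly in m while the holding-cost term shrinks as inventory is produced in m smaller lots. This is an invented illustration of the trade-off, not the paper's exact cost model.

```python
def cycle_cost(m, setup_cost=10.0, holding_term=90.0):
    """Illustrative total cost per cycle with m production setups:
    m setups cost setup_cost each, while average inventory (and hence
    holding cost) falls roughly in proportion to 1/m."""
    return setup_cost * m + holding_term / m

# Pick the integer number of setups that minimizes the per-cycle cost
best_m = min(range(1, 11), key=cycle_cost)
print(best_m, cycle_cost(best_m))  # -> 3 60.0
```

With these made-up parameters, three production setups balance the two cost components; a sensitivity analysis of the kind the abstract mentions would vary the cost parameters and re-solve.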
A Fractional Supervision Game Model of Multiple Stakeholders and Numerical Simulation
Directory of Open Access Journals (Sweden)
Rongwu Lu
2017-01-01
Full Text Available Motivated by the widespread occurrence of a certain kind of supervision management problem in many fields, we first build an ordinary supervision game model of multiple stakeholders. Second, a fractional supervision game model is set up and solved based on the theory of fractional calculus and a predictor-corrector numerical approach. Third, phase diagrams and time-series graphs are used to simulate and analyse the dynamic behaviour of the fractional-order game model. Numerical results are given to illustrate our conclusions and relate them to practice.
Adapting and applying a multiple domain model of condom use to Chinese college students.
Xiao, Zhiwen; Palmgreen, Philip; Zimmerman, Rick; Noar, Seth
2010-03-01
This study adapts a multiple domain model (MDM) to explain condom use among a sample of sexually active Chinese college students. A cross-sectional survey was conducted and structural equation modeling was used to test the proposed model. Preparatory behaviors, theory of reasoned action (TRA)/theory of planned behavior variables, impulsivity, length of relationship, and alcohol use were significant direct predictors of condom use. The results suggest that MDM can provide a better understanding of heterosexual condom use among Chinese youth, and help in the design of HIV-preventive and safer sex interventions in China.
Fuzzy linear model for production optimization of mining systems with multiple entities
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by using fuzzy linear programming (LP) was studied. LP is the most commonly used operations research method in mining engineering. After an introductory review of the properties and limitations of LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, both LP models are applied to the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also noting that finding the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models arrange stores, fluxes and transformation functions into a particular representation of spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
Kim, S. H.; Lim, C. H.; Kim, J.; Lee, W. K.; Kafatos, M.
2016-12-01
The Korean Peninsula has a unique agricultural environment owing to the differences in political and socio-economic systems between the Republic of Korea (SK, hereafter) and the Democratic People's Republic of Korea (NK, hereafter). NK has been suffering from food shortages caused by natural disasters, land degradation and political failure. The neighboring developed country SK has a better agricultural system but a very low food self-sufficiency rate. Maize is an important crop in both countries: it is a staple food in NK, and SK is the world's second-largest maize-importing country after Japan. Therefore, evaluating maize yield potential (Yp) in the two distinct regions is essential to assess food security under climate change and variability. In this study, we utilized multiple process-based crop models capable of regional-scale assessment to evaluate maize Yp and assess model uncertainties: EPIC, GEPIC, DSSAT, and the APSIM model with regional-scale capability (apsimRegions). First, we evaluated each crop model for three years (2012-2014) using reanalysis data (RDAPS; Regional Data Assimilation and Prediction System, produced by the Korea Meteorological Administration) and observed yield data. Model performances were compared over regions of the Korean Peninsula with different local climate characteristics. To quantify the influence of each climate variable, we also conducted a sensitivity test using 20 years of historical climatology (1981-2000). Lastly, a multi-crop-model ensemble analysis was performed for the future period 2031-2050. The weather variables projected for mid-century were taken from the COordinated Regional climate Downscaling EXperiment (CORDEX) East Asia. The high-resolution climate data were obtained from multiple regional climate models (RCMs) driven by multiple climate scenarios projected from multiple global climate models (GCMs) in conjunction with multiple greenhouse gas
Directory of Open Access Journals (Sweden)
BUDIMAN
2012-01-01
Full Text Available Budiman, Arisoesilaningsih E. 2012. Predictive model of Amorphophallus muelleri growth in some agroforestry in East Java by multiple regression analysis. Biodiversitas 13: 18-22. The aim of this research was to determine multiple regression models of vegetative and corm growth of Amorphophallus muelleri Blume across age variations and habitat conditions of agroforestry in East Java. A descriptive exploratory research method was conducted by systematic random sampling at five agroforestries on four plantations in East Java: Saradan, Bojonegoro, Nganjuk and Blitar. In each agroforestry, we observed A. muelleri vegetative and corm growth at four growing ages (1, 2, 3 and 4 years old) as well as environmental variables such as altitude, vegetation, climate and soil conditions. Data were analyzed using descriptive statistics to compare A. muelleri habitats in the five agroforestries. Meanwhile, the influence and contribution of each environmental variable to A. muelleri vegetative and corm growth were determined using multiple regression analysis in SPSS 17.0. The multiple regression models of A. muelleri vegetative and corm growth, generated from agroforestry characteristics and age, showed high validity (R2 = 88-99%). The regression models showed that age, monthly temperature, percentage of radiation and soil calcium (Ca) content, either simultaneously or partially, determined the growth of A. muelleri vegetative parts and corms. Based on these models, A. muelleri corms reach optimal growth after four years of cultivation and are then ready to be harvested. Additionally, the soil Ca content should reach 25.3 me.hg-1, as at Sugihwaras agroforestry, with maximal radiation of 60%.
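The fitting step can be sketched with ordinary least squares in NumPy; the predictors and response below are synthetic stand-ins for the study's variables (age, temperature, radiation, soil Ca), not the field data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
# Hypothetical predictors standing in for the study's variables:
# plant age (yr), monthly temperature (C), radiation (%), soil Ca (me/hg)
X = np.column_stack([
    rng.integers(1, 5, n).astype(float),
    rng.uniform(22, 30, n),
    rng.uniform(40, 70, n),
    rng.uniform(10, 30, n),
])
# Synthetic corm growth driven mostly by age and soil Ca, plus noise
y = 0.9 * X[:, 0] + 0.05 * X[:, 3] + rng.normal(0, 0.1, n)

A = np.column_stack([np.ones(n), X])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))                         # coefficient of determination
```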
Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.
2012-12-01
Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with large numbers of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. In an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis
Jamshidi, Kambiz; Salehi, Jawad A.
2005-05-01
This paper describes a study of the performance of various configurations for placing multiple optical amplifiers in a typical coherent ultrashort light pulse code-division multiple access (CULP-CDMA) communication system using the additive noise model. For this study, a comprehensive performance analysis was developed that takes into account multiple-access noise, noise due to optical amplifiers, and thermal noise using the saddle-point approximation technique. Prior to obtaining the overall system performance, the input/output statistical models for different elements of the system such as encoders/decoders, star coupler, and optical amplifiers were obtained. Performance comparisons between an ideal and lossless quantum-limited case and a typical CULP-CDMA with various losses exhibit more than 30 dB more power requirement to obtain the same bit-error rate (BER). Considering the saturation effect of optical amplifiers, this paper discusses an algorithm for amplifiers' gain setting in various stages of the network in order to overcome the nonlinear effects on signal modulation in optical amplifiers. Finally, using this algorithm, various configurations of multiple optical amplifiers in CULP-CDMA are discussed and the rules for the required optimum number of amplifiers are shown with their corresponding optimum locations to be implemented along the CULP-CDMA system.
Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.
2015-01-01
Background: Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings: We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion: Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health
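The multi-state transition mechanics can be sketched as a discrete-time Markov projection with hypothetical annual transition probabilities (the actual model is a richer dynamical system stratified by sex and age, with mortality tracking):

```python
import numpy as np

# Hypothetical annual transition matrix over four use states.
states = ["non-user", "cigarette", "new product", "dual"]
P = np.array([
    [0.97, 0.02, 0.01, 0.00],   # initiation into either product
    [0.04, 0.90, 0.03, 0.03],   # cessation, switching, dual uptake
    [0.05, 0.02, 0.90, 0.03],
    [0.02, 0.05, 0.08, 0.85],
])
x = np.array([0.70, 0.25, 0.05, 0.00])   # starting prevalence by state

for _ in range(20):                       # project 20 years forward
    x = x @ P

print({s: round(v, 3) for s, v in zip(states, x)})
```

Varying the switching and initiation entries of `P` between scenarios reproduces, in miniature, the kind of sensitivity analysis the abstract describes.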
Hybrid approaches for multiple-species stochastic reaction–diffusion models
International Nuclear Information System (INIS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-01-01
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries
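A toy version of such a coupling for pure diffusion, assuming a 1-D domain with a stochastic half (binomial hopping of integer particles) and a deterministic half (explicit finite-difference diffusion); interface transfers subtract and add equal amounts, so the total particle number is conserved, as in the scheme described:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.1                                # hop probability per step (= D*dt/dx^2)
counts = np.array([40] * 5)              # stochastic half: integer particle counts
dens = np.full(5, 40.0)                  # deterministic half: mean-field density

for _ in range(200):
    # Stochastic region: binomial hopping, reflecting wall on the far left.
    delta = np.zeros(5, dtype=int)
    cross = 0
    for i in range(5):
        hops = rng.binomial(counts[i], 2 * lam)
        left = rng.binomial(hops, 0.5)
        right = hops - left
        delta[i] -= hops
        delta[max(i - 1, 0)] += left     # a left hop from box 0 reflects back
        if i < 4:
            delta[i + 1] += right
        else:
            cross = right                # particles crossing the interface
    counts = counts + delta
    # Deterministic region: conservative explicit diffusion, reflecting right wall.
    lap = np.empty(5)
    lap[0] = dens[1] - dens[0]
    lap[1:-1] = dens[2:] - 2 * dens[1:-1] + dens[:-2]
    lap[-1] = dens[-2] - dens[-1]
    dens = dens + lam * lap
    # Interface exchange: equal amounts are removed and added on each side,
    # so the total particle number is conserved.
    dens[0] += cross
    k = min(rng.poisson(lam * dens[0]), int(dens[0]))
    dens[0] -= k
    counts[4] += k

print(counts.sum(), round(float(dens.sum()), 2))
```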
On the representability of complete genomes by multiple competing finite-context (Markov) models.
Directory of Open Access Journals (Sweden)
Armando J Pinho
Full Text Available A finite-context (Markov) model of order k yields the probability distribution of the next symbol in a sequence of symbols, given the recent past up to depth k. Markov modeling has long been applied to DNA sequences, for example to find gene-coding regions. With the first studies came the discovery that DNA sequences are non-stationary: distinct regions require distinct model orders. Since then, Markov and hidden Markov models have been extensively used to describe the gene structure of prokaryotes and eukaryotes. However, to our knowledge, a comprehensive study about the potential of Markov models to describe complete genomes is still lacking. We address this gap in this paper. Our approach relies on (i) multiple competing Markov models of different orders, (ii) careful programming techniques that allow orders as large as sixteen, (iii) adequate inverted repeat handling, and (iv) probability estimates suited to the wide range of context depths used. To measure how well a model fits the data at a particular position in the sequence we use the negative logarithm of the probability estimate at that position. The measure yields information profiles of the sequence, which are of independent interest. The average over the entire sequence, which amounts to the average number of bits per base needed to describe the sequence, is used as a global performance measure. Our main conclusion is that, from the probabilistic or information-theoretic point of view and according to this performance measure, multiple competing Markov models explain entire genomes almost as well as or even better than state-of-the-art DNA compression methods, such as XM, which rely on very different statistical models. This is surprising because Markov models are local (short-range), in contrast with the statistical models underlying other methods, which exploit the extensive data repetitions in DNA sequences and therefore have a non-local character.
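The per-position measure can be sketched as follows for a single adaptive finite-context model with Laplace-smoothed counts (a simplification: the paper combines multiple competing models and handles inverted repeats):

```python
from collections import defaultdict
import math

def information_profile(seq, k=2, alphabet="ACGT"):
    """Per-symbol code length (bits) under an order-k finite-context model
    with Laplace-smoothed counts, updated adaptively along the sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    bits = []
    for i, sym in enumerate(seq):
        ctx = seq[max(0, i - k):i]                    # recent past up to depth k
        total = sum(counts[ctx].values())
        p = (counts[ctx][sym] + 1) / (total + len(alphabet))
        bits.append(-math.log2(p))                    # negative log probability
        counts[ctx][sym] += 1
    return bits

profile = information_profile("ACGT" * 50, k=2)
avg = sum(profile) / len(profile)                     # average bits per base
print(round(avg, 3))
```

On this highly repetitive toy sequence the model quickly learns the pattern, so the average code length falls well below the 2 bits/base of a uniform model.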
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
Energy Technology Data Exchange (ETDEWEB)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp [Kansai University, Department of Mathematics, Faculty of Engineering Science (Japan)
2015-08-15
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.
International Nuclear Information System (INIS)
Gomes, Alvaro; Antunes, Carlos Henggeler; Martins, Antonio Gomes
2005-01-01
This paper aims at presenting a multiple objective model to evaluate the attractiveness of the use of demand resources (through load management control actions) by different stakeholders and in diverse structure scenarios in electricity systems. For the sake of model flexibility, the multiple (and conflicting) objective functions of technical, economical and quality of service nature are able to capture distinct market scenarios and operating entities that may be interested in promoting load management activities. The computation of compromise solutions is made by resorting to evolutionary algorithms, which are well suited to tackle multiobjective problems of combinatorial nature herein involving the identification and selection of control actions to be applied to groups of loads. (Author)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained on input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1) is difficult, with errors exceeding 1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and reflectances predicted by multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
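The inversion idea can be sketched with a small MLP trained on pairs generated by a toy two-band reflectance model; the saturating band equations below are illustrative assumptions, not the paper's multiple-scattering model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lai = rng.uniform(0.0, 6.0, 300)
# Toy two-band canopy "model": NIR saturates and RED decays with LAI,
# which is why direct inversion becomes ill-conditioned at high LAI.
nir = 0.10 + 0.45 * (1 - np.exp(-0.6 * lai)) + rng.normal(0, 0.01, 300)
red = 0.05 + 0.30 * np.exp(-0.5 * lai) + rng.normal(0, 0.01, 300)
X = np.column_stack([red, nir])

# Learn the inverse mapping reflectance -> LAI by backpropagation.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X, lai)
mae = np.mean(np.abs(net.predict(X) - lai))
print(round(float(mae), 3))
```

Unlike iterative merit-function minimization, the trained network needs no initial guess at inversion time: a forward pass maps reflectance directly to LAI.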
Delay-Dependent Asymptotic Stability of Cohen-Grossberg Models with Multiple Time-Varying Delays
Directory of Open Access Journals (Sweden)
Xiaofeng Liao
2007-01-01
Full Text Available The dynamical behavior of a class of Cohen-Grossberg models with multiple time-varying delays is studied in detail. Sufficient delay-dependent criteria to ensure local and global asymptotic stability of the equilibrium of this network are derived by constructing suitable Lyapunov functionals. The obtained conditions are shown to be less conservative and restrictive than those reported in the known literature. Some numerical examples are included to demonstrate our results.
Attributional Style and Depression in Multiple Sclerosis: The Learned Helplessness Model
Vargas, Gray A.; Arnett, Peter A.
2013-01-01
Several etiologic theories have been proposed to explain depression in the general population. Studying these models and modifying them for use in the multiple sclerosis (MS) population may allow us to better understand depression in MS. According to the reformulated learned helplessness (LH) theory, individuals who attribute negative events to internal, stable, and global causes are more vulnerable to depression. This study differentiated attributional style that was or was not related to MS...
The Dividend Discount Model with Multiple Growth Rates of Any Order for Stock Evaluation
Hatemi-J, Abdulnasser; El-Khatib, Youssef
2018-01-01
In this paper we provide a general solution for the dividend discount model in order to compute the intrinsic value of a common stock that allows for multiple stage growth rates of any predetermined number of periods. A mathematical proof is provided for the suggested general solution. A numerical application is also presented. The solution introduced in this paper is expected to improve on the precision of stock valuation, which might be of fundamental importance for investors as well as fin...
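A sketch of the multi-stage valuation, assuming discrete annual dividends, a finite list of growth stages and a Gordon terminal stage; all numbers are hypothetical:

```python
def ddm_multistage(d0, rates, years, g_terminal, r):
    """Intrinsic value under a dividend discount model with several
    consecutive growth stages followed by Gordon terminal growth.
    rates[i] applies for years[i] periods; requires r > g_terminal."""
    value, d, t = 0.0, d0, 0
    for g, n in zip(rates, years):
        for _ in range(n):
            d *= 1 + g                   # grow the dividend
            t += 1
            value += d / (1 + r) ** t    # discount it back to today
    terminal = d * (1 + g_terminal) / (r - g_terminal)   # Gordon growth value
    return value + terminal / (1 + r) ** t

# Three-stage example: 8% growth for 3 yr, 5% for 3 yr, then 2% forever, r = 10%
v = ddm_multistage(d0=2.0, rates=[0.08, 0.05], years=[3, 3],
                   g_terminal=0.02, r=0.10)
print(round(v, 2))
```

Raising the discount rate r lowers the value, as every term in the sum shrinks.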
Multiple attractors and dynamics in an OLG model with productive environment
Caravaggio, Andrea; Sodini, Mauro
2018-05-01
This work analyses an overlapping generations model in which economic activity depends on the exploitation of a free-access natural resource. In addition, public expenditures for environmental maintenance are assumed. By characterising some properties of the map and performing numerical simulations, we investigate consequences of the interplay between environmental public expenditure and private sector. In particular, we identify different scenarios in which multiple equilibria as well as complex dynamics may arise.
Rakotonarivo, Sandrine; Walker, S.C.; Kuperman, W. A.; Roux, Philippe
2011-01-01
A method to actively localize a small perturbation in a multiple scattering medium using a collection of remote acoustic sensors is presented. The approach requires only minimal modeling and no knowledge of the scatterer distribution and properties of the scattering medium and the perturbation. The medium is ensonified before and after a perturbation is introduced. The coherent difference between the measured signals then reveals all field components that have interact...
Specification and testing of Multiplicative Time-Varying GARCH models with applications
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
2017-01-01
In this article, we develop a specification technique for building multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly… …is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another one to daily coffee futures returns.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Directory of Open Access Journals (Sweden)
Ryan P Franckowiak
Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems grew worse with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sums-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
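For reference, the three criteria can be computed from an ordinary least-squares fit as below (standard Gaussian-likelihood forms; this sketch does not reproduce the MRM-specific sum-of-squares issue the paper identifies):

```python
import numpy as np

def ic_scores(y, X):
    """AIC, AICc and BIC for an OLS fit under a Gaussian likelihood."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    p = k + 1                                        # coefficients + error variance
    ll = -n / 2 * (np.log(2 * np.pi * rss / n) + 1)  # maximized log-likelihood
    aic = 2 * p - 2 * ll
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)       # small-sample correction
    bic = p * np.log(n) - 2 * ll
    return aic, aicc, bic

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
spurious = rng.normal(size=n)                # variable unrelated to y
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

aic_t, aicc_t, bic_t = ic_scores(y, np.column_stack([np.ones(n), x1]))
aic_o, aicc_o, bic_o = ic_scores(y, np.column_stack([np.ones(n), x1, spurious]))
print(round(aic_t, 2), round(aic_o, 2))      # compare true vs. overfit model
```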
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
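The stepwise idea, calibrating one sub-model's parameter against its own observations and then holding it fixed for the next step, can be sketched with toy one-parameter sub-models (hypothetical model forms; the actual procedure uses the Shuffled Complex Evolution algorithm and real SR/PET/runoff objectives):

```python
from scipy.optimize import minimize_scalar

sr_obs, runoff_obs = 18.0, 3.5      # hypothetical observed SR and runoff

def sr_model(a):                    # toy solar-radiation sub-model
    return 25.0 * a

def runoff_model(a, b):             # runoff depends on the calibrated SR too
    return 0.1 * sr_model(a) + b

# Step 1: calibrate a against the SR objective only.
a_star = minimize_scalar(lambda a: (sr_model(a) - sr_obs) ** 2,
                         bounds=(0, 1), method="bounded").x
# Step 2: with a fixed, calibrate b against the runoff objective.
b_star = minimize_scalar(lambda b: (runoff_model(a_star, b) - runoff_obs) ** 2,
                         bounds=(0, 5), method="bounded").x
print(round(a_star, 3), round(b_star, 3))
```

Fixing each parameter in sequence keeps the intermediate states (here, simulated SR) consistent with their own measurements rather than letting the runoff objective distort them.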
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
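The core non-dominated filter used by such algorithms can be sketched as follows, treating each candidate parameter set as a vector of fit errors (one per dataset) to be minimized:

```python
def non_dominated(points):
    """Return the Pareto-optimal subset when every objective is minimized.
    A point is dominated if some other distinct point is no worse on
    every objective."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Each tuple: (fit error on dataset A, fit error on dataset B) -- made-up values.
candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6), (0.9, 0.9)]
print(non_dominated(candidates))
```

The surviving set is the trade-off front: no member can be improved on one dataset without worsening on another, which is exactly the notion of consistency across experiments the article exploits.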
Bahng, B.; Whitmore, P.; Macpherson, K. A.; Knight, W. R.
2016-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes or other mechanisms in the Pacific Ocean, Atlantic Ocean or Gulf of Mexico. At the U.S. National Tsunami Warning Center (NTWC), the model has mainly been used for tsunami pre-computation of earthquake scenarios: results for hundreds of hypothetical events are computed before alerts, then accessed and calibrated with observations during tsunamis to immediately produce forecasts. The model has also been used for hindcasting tsunamis due to submarine landslides and atmospheric pressure jumps, but in a case-specific and somewhat limited manner. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves approach coastal waters. The shallow-water wave physics is readily applicable to all of the above tsunamis as well as to tides. Recently, the model has been expanded to include multiple forcing mechanisms in a systematic fashion and to enhance the model physics for non-earthquake events. ATFM is now able to handle multiple source mechanisms, either individually or jointly, including earthquake, submarine landslide, meteo-tsunami and tidal forcing. For earthquakes, the source can be a single unit source or multiple interacting source blocks, and a horizontal slip contribution can be added to the sea-floor displacement. The model now includes submarine landslide physics, modeling the source either as a rigid slump or as a viscous fluid, with additional shallow-water physics implemented for the viscous case; with rigid slumping, any trajectory can be followed. For meteo-tsunami, the forcing mechanism can likewise follow any trajectory shape, and wind stress physics has been implemented where required. As an example of multiple
Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
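The RSI indicator that DE uses for signal generation can be sketched as follows (simple-average variant; Wilder's smoothed version, and the paper's exact timeframes and thresholds, may differ):

```python
def rsi(prices, period=14):
    # Relative Strength Index over the last `period` price changes
    # (simple-average variant; Wilder's smoothing is another common choice).
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    window = deltas[-period:]
    gain = sum(d for d in window if d > 0) / period
    loss = -sum(d for d in window if d < 0) / period
    if loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gain / loss)

def signal(prices, period=14, lower=30.0, upper=70.0):
    # Toy threshold rule: oversold -> buy, overbought -> sell.
    # The paper instead evolves the rule's structure and thresholds with DE.
    r = rsi(prices, period)
    return "buy" if r < lower else "sell" if r > upper else "hold"
```

In the paper the thresholds and the way RSI is combined with the MKL prediction are themselves optimized by differential evolution rather than fixed as here.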
Directory of Open Access Journals (Sweden)
Intan Kusumawati
2016-03-01
Full Text Available This study aims to determine the effectiveness of an attractive, multiple-intelligences-based learning model in remediating students' misconceptions about the reflection of light by mirrors. A pre-experimental design with a one-group pretest-posttest arrangement was used. Data were collected with a multiple-choice test with reasoning; the validity score was 4.08 and the reliability 0.537. Students were divided into five intelligence groups: linguistic, mathematical-logical, visual-spatial, bodily-kinesthetic, and musical intelligence. Each group discussed the physics concepts in the mode suited to its intelligence type, producing pantun (rhymed verse) and poetry, crossword puzzles, creative drawings, drama, and song lyrics. The effectiveness of the multiple-intelligences learning model was determined with an effect-size calculation; the effect-size scores of the five groups were all in the high category: 5.76, 3.76, 4.60, 1.70, and 1.34. The attractive, multiple-intelligences-based learning model was thus effective in remediating students' misconceptions, and the approach is expected to be applicable to other physics topics and schools.
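The effect-size calculation for a one-group pretest-posttest design can be sketched as follows (assuming a Cohen's-d-style formula scaled by the pretest standard deviation; the study's exact equation may differ):

```python
from statistics import mean, stdev

def effect_size(pre, post):
    # Cohen's-d-style effect size for a one-group pretest-posttest design,
    # scaled by the pretest standard deviation (one common convention).
    return (mean(post) - mean(pre)) / stdev(pre)

# Hypothetical scores, not the study's data.
d = effect_size([1, 2, 3], [3, 4, 5])
```

With this convention an effect size above 0.8 is conventionally "high", so the reported scores of 1.34 to 5.76 all fall in the high category.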
International Nuclear Information System (INIS)
Wu, Xuedong; Zhu, Zhiyu; Su, Xunliang; Fan, Shaosheng; Du, Zhaoping; Chang, Yanchao; Zeng, Qingjun
2015-01-01
Wind speed prediction is an important method for guaranteeing that wind energy is integrated smoothly into the whole power system. However, wind power is non-schedulable owing to the strongly stochastic and dynamically uncertain nature of wind speed, so wind speed prediction is an indispensable requirement for power system operators. Two new approaches for hourly wind speed prediction are developed in this study by integrating the single multiplicative neuron model with iterated nonlinear filters to update the wind speed sequence accurately. In the presented methods, a nonlinear state-space model is first formed based on the single multiplicative neuron model, and then iterated nonlinear filters are employed to perform dynamic state estimation on the wind speed sequence with stochastic uncertainty. The suggested approaches are demonstrated on three cases of wind speed data and compared with autoregressive moving average, artificial neural network, kernel ridge regression based residual active learning, and single multiplicative neuron model methods. Three types of prediction error, the mean absolute error improvement ratio, and running time are employed to compare the models' performance. Comparison results in Tables 1–3 indicate that the presented strategies perform much better for hourly wind speed prediction than the other techniques. - Highlights: • Developed two novel hybrid modeling methods for hourly wind speed prediction. • Uncertainty and fluctuations of wind speed can be better explained by the novel methods. • The proposed strategies have online adaptive learning ability. • The proposed approaches show better performance than existing approaches. • Comparison and analysis of the two proposed models for three cases are provided
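The single multiplicative neuron replaces the usual weighted sum with a product of weighted, biased inputs. A minimal sketch (the state-space formulation and iterated nonlinear filtering are not shown; the logistic squashing function is an assumption):

```python
import math

def smn(x, weights, biases):
    # Single multiplicative neuron: the net input is the PRODUCT of the
    # weighted, biased inputs, then squashed (logistic squashing assumed).
    u = 1.0
    for xi, wi, bi in zip(x, weights, biases):
        u *= wi * xi + bi
    return 1.0 / (1.0 + math.exp(-u))
```

In the paper this neuron supplies the nonlinear observation/transition structure of a state-space model, and the filters estimate its state from the noisy wind speed sequence.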
2010-10-01
47 CFR § 2.924 (Telecommunication, rev. 2010-10-01): Marketing of electrically identical equipment having multiple trade names and models or type numbers under the same FCC Identifier.
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
Reception analysis seen from the multiple mediation model: some issues for the debate
Directory of Open Access Journals (Sweden)
Guilermo Orozco
2008-12-01
Full Text Available This paper is an application of the "Multiple Mediation" model as it has been developed by the author over the last 10 years. The model is used to substantiate a reconceptualization of different aspects of the television reception process. As understood by Orozco, the model is continually under construction and emerges from analyses made by different thinkers. Its formulation has been, and will continue to be, the outcome of much reflexivity between existing theoretical and epistemological assumptions (within the Cultural Studies and Critical Audience Research traditions) and empirical, mostly qualitative, data.
A speed guidance strategy for multiple signalized intersections based on car-following model
Tang, Tie-Qiao; Yi, Zhi-Yan; Zhang, Jian; Wang, Tao; Leng, Jun-Qiang
2018-04-01
Signalized intersection has great roles in urban traffic system. The signal infrastructure and the driving behavior near the intersection are paramount factors that have significant impacts on traffic flow and energy consumption. In this paper, a speed guidance strategy is introduced into a car-following model to study the driving behavior and the fuel consumption in a single-lane road with multiple signalized intersections. The numerical results indicate that the proposed model can reduce the fuel consumption and the average stop times. The findings provide insightful guidance for the eco-driving strategies near the signalized intersections.
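A speed-guidance rule of the kind described can be sketched as follows (the optimal-velocity function is a standard car-following choice; V_MAX and the guidance rule are illustrative assumptions, not the paper's calibrated model):

```python
import math

V_MAX = 15.0  # m/s, an assumed free-flow speed

def optimal_velocity(headway):
    # A standard optimal-velocity function from car-following theory
    # (illustrative parameters; the paper's model is more elaborate).
    return 0.5 * V_MAX * (math.tanh(headway - 2.0) + math.tanh(2.0))

def guided_speed(dist_to_stop_line, time_to_green):
    # Advise a speed that brings the car to the stop line just as the
    # signal turns green, so it can pass without a full stop.
    if time_to_green <= 0.0:
        return V_MAX
    return min(V_MAX, dist_to_stop_line / time_to_green)
```

Avoiding the stop-and-go cycle at the red light is what reduces stop counts and, through smoother acceleration profiles, fuel consumption.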
Multiple imputation to account for measurement error in marginal structural models
Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.
2015-01-01
Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338
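Once each imputed dataset has been analyzed, the m results are pooled with Rubin's rules. A minimal sketch of that pooling step (the imputation model for misclassified smoking status built from the validation subgroup is not shown; the numbers are hypothetical):

```python
from statistics import mean, variance

def rubin_combine(estimates, variances):
    # Rubin's rules: pool a parameter estimate over m completed
    # (imputed) data sets into one estimate and one total variance.
    m = len(estimates)
    q_bar = mean(estimates)
    within = mean(variances)
    between = variance(estimates)  # sample variance across imputations
    total = within + (1.0 + 1.0 / m) * between
    return q_bar, total

# Hypothetical log-hazard-ratio estimates from m = 3 imputations.
q_bar, total_var = rubin_combine([1.0, 1.2, 1.4], [0.1, 0.1, 0.1])
```

The between-imputation term inflates the variance to reflect uncertainty about the true smoking status, which is how the method propagates measurement error into the final confidence intervals.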
Inferring Ice Thickness from a Glacier Dynamics Model and Multiple Surface Datasets.
Guan, Y.; Haran, M.; Pollard, D.
2017-12-01
The future behavior of the West Antarctic Ice Sheet (WAIS) may have a major impact on future climate. For instance, ice sheet melt may contribute significantly to global sea level rise. Understanding the current state of WAIS is therefore of great interest. WAIS is drained by fast-flowing glaciers which are major contributors to ice loss. Hence, understanding the stability and dynamics of glaciers is critical for predicting the future of the ice sheet. Glacier dynamics are driven by the interplay between the topography, temperature and basal conditions beneath the ice. A glacier dynamics model describes the interactions between these processes. We develop a hierarchical Bayesian model that integrates multiple ice sheet surface data sets with a glacier dynamics model. Our approach allows us to (1) infer important parameters describing the glacier dynamics, (2) learn about ice sheet thickness, and (3) account for errors in the observations and the model. Because we have relatively dense and accurate ice thickness data from the Thwaites Glacier in West Antarctica, we use these data to validate the proposed approach. The long-term goal of this work is to have a general model that may be used to study multiple glaciers in the Antarctic.
Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.
Efficacy and immunological actions of FAHF-2 in a murine model of multiple food allergies.
Srivastava, Kamal D; Bardina, Ludmilla; Sampson, Hugh A; Li, Xiu-Min
2012-05-01
Food Allergy Herbal Formula-2 (FAHF-2) prevents anaphylaxis in a murine model of peanut allergy. Multiple food allergies (MFA) are common and associated with a higher risk of anaphylaxis. No well-characterized murine model of sensitization to multiple food allergens exists, and no satisfactory therapy for MFA is currently available. To determine the effect of FAHF-2 in a murine model of MFA. C3H/HeJ mice were orally sensitized to peanut, codfish, and egg concurrently. Oral FAHF-2 treatment commenced 1 day after completing sensitization and continued daily for 7 weeks. Mice were subsequently orally challenged with each allergen. Antibodies in sera from mice simultaneously sensitized with peanut, codfish, and egg recognized major allergens of all 3 foods, demonstrating sensitization to multiple unrelated food allergens (MFA mice). Sham-treated MFA mice exhibited anaphylactic symptoms accompanied by elevation of plasma histamine and hypothermia. In contrast, FAHF-2-treated MFA mice showed no anaphylactic symptoms, normal body temperature, and histamine levels after challenge with each allergen. Protection was accompanied by reduction in allergen-specific immunoglobulin E levels. Allergen-stimulated Th2 cytokine interleukin-4 and interleukin-13 production levels decreased, whereas the Th1 cytokine interferon-γ levels were elevated in cultured splenocytes and mesenteric lymph node cells in FAHF-2-treated mice. We established the first murine model of MFA. FAHF-2 prevents peanut, egg, and fish-induced anaphylactic reactions in this model, suggesting that FAHF-2 may have potential for treating human MFA. Copyright © 2012 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal
2005-09-01
To determine the risk factors of osteoporosis using a multiple binary logistic regression method, and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999-2002. The data from the 351 participants were collected using a standard questionnaire containing 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. The model correctly classified 80.1% (281/351) of the participants; its specificity was 67% (84/126) of the control group and its sensitivity 88% (197/225) of the case group. The distribution of standardized residuals for the final model was found to be exponential using the Kolmogorov-Smirnov test (p=0.193). The receiver operating characteristic curve was successful in predicting patients at risk for osteoporosis. This study suggests that low dietary calcium intake, low physical activity, low education, and longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake combined with daily physical activity, increasing educational level, decreasing birth rate, and limiting the duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that for osteoporosis, which may be influenced by many variables, a multivariate statistical method such as multiple logistic regression is better than univariate evaluation.
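The reported sensitivity, specificity and overall accuracy follow directly from the classification counts in the abstract; a minimal sketch:

```python
def classification_summary(tp, fn, tn, fp):
    # Sensitivity, specificity and accuracy from a 2x2 classification table.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts implied by the abstract: 197 of 225 cases and 84 of 126
# controls correctly classified by the fitted logistic model.
sens, spec, acc = classification_summary(197, 28, 84, 42)
```

This reproduces the quoted 88% sensitivity, 67% specificity and 80.1% overall classification rate.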
Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan
2018-03-01
An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
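The Laplace's-law validation amounts to checking that the measured pressure jump across the droplet interface scales linearly with curvature. A sketch of recovering surface tension from simulated droplets (synthetic data; in 3D, Δp = 2σ/R):

```python
def fit_surface_tension(radii, pressure_jumps, dim=3):
    # Laplace's law: dp = (dim - 1) * sigma / R.  A least-squares fit of
    # dp against 1/R through the origin recovers the surface tension.
    x = [1.0 / r for r in radii]
    slope = sum(xi * yi for xi, yi in zip(x, pressure_jumps)) / sum(xi * xi for xi in x)
    return slope / (dim - 1)

# Synthetic stationary droplets with sigma = 0.05 (lattice units, assumed).
radii = [10.0, 20.0, 40.0]
dp = [2.0 * 0.05 / r for r in radii]
sigma = fit_surface_tension(radii, dp)
```

In the paper the same linearity check (together with the coexistence curve) validates the coupled entropic MRT / multirange pseudopotential scheme before the dynamic impact simulations.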
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within
A Lévy HJM Multiple-Curve Model with Application to CVA Computation
DEFF Research Database (Denmark)
Crépey, Stéphane; Grbac, Zorana; Ngor, Nathalie
2015-01-01
…the calibration to OTM swaptions, guaranteeing that the model correctly captures volatility smile effects, and the calibration to co-terminal ATM swaptions, ensuring an appropriate term structure of the volatility in the model. To account for counterparty risk and funding issues, we use the calibrated multiple-curve model as an underlying model for CVA computation. We follow a reduced-form methodology through which the problem of pricing the counterparty risk and funding costs can be reduced to a pre-default Markovian BSDE, or an equivalent semi-linear PDE. As an illustration, we study the case of a basis swap and a related swaption, for which we compute the counterparty risk and funding adjustments.
A multiple-field coupled resistive transition model for superconducting Nb3Sn
Directory of Open Access Journals (Sweden)
Lin Yang
2016-12-01
Full Text Available A study of the superconducting transition width as a function of the applied magnetic field and strain is performed for superconducting Nb3Sn. A quantitative yet universal phenomenological resistivity model is proposed. Numerical simulation with the proposed model predicts resistive transition characteristics under variable magnetic fields and strain that are in good agreement with experimental observations. Furthermore, the temperature-modulated magnetoresistance transition behavior of filamentary Nb3Sn conductors can also be well described by the model. The multiple-field coupled resistive transition model is helpful for making objective determinations of the high-dimensional critical surface of Nb3Sn in the multi-parameter space, offers some preliminary information about the basic vortex-pinning mechanisms, and can guide the design of the quench protection system of Nb3Sn superconducting magnets.
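The qualitative shape of such a resistive transition can be sketched with a field-suppressed critical temperature and a smooth step (all parameter values are illustrative, not fitted Nb3Sn data, and the strain dependence of the paper's model is omitted):

```python
import math

def resistivity(T, B, rho_n=1.0, Tc0=18.0, Bc2_0=28.0, width=0.2):
    # Phenomenological sketch: a smooth resistive step centred on a
    # field-suppressed critical temperature.  Parameter values are
    # illustrative assumptions, not fitted Nb3Sn data.
    Tc = Tc0 * (1.0 - B / Bc2_0)  # toy linear Tc(B) suppression
    return 0.5 * rho_n * (1.0 + math.tanh((T - Tc) / width))
```

Raising B shifts the step to lower temperature and, with a field-dependent `width`, would also broaden it, which is the transition-width behaviour the model quantifies.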
Energy Technology Data Exchange (ETDEWEB)
Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL
2006-01-01
The flocking model, first proposed by Craig Reynolds, is one of the first bio-inspired computational collective-behavior models and has many popular applications, such as animation. Our early research resulted in a flock clustering algorithm that achieves better performance than the K-means or ant clustering algorithms for data clustering. This algorithm clusters a given set of data by embedding the high-dimensional data items on a two-dimensional grid for efficient retrieval and visualization of the clustering result. In this paper, we propose a bio-inspired clustering model, the Multiple Species Flocking clustering model (MSF), and present a distributed multi-agent MSF approach for document clustering.
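The clustering mechanism can be sketched as agents drifting toward neighbours that carry similar documents (a minimal single-rule step; the full MSF model also includes separation and alignment rules and a neighbourhood radius, omitted here):

```python
def flock_step(positions, docs, similar, step=0.1):
    # One MSF-style update: each agent drifts toward the centroid of the
    # agents whose documents it considers similar, so similar documents
    # gather on the 2-D grid over repeated steps.
    new_positions = []
    for i, (x, y) in enumerate(positions):
        nbrs = [positions[j] for j in range(len(positions))
                if j != i and similar(docs[i], docs[j])]
        if not nbrs:
            new_positions.append((x, y))
            continue
        cx = sum(p[0] for p in nbrs) / len(nbrs)
        cy = sum(p[1] for p in nbrs) / len(nbrs)
        new_positions.append((x + step * (cx - x), y + step * (cy - y)))
    return new_positions

# Two similar documents attract each other; the dissimilar third stays put.
pos = flock_step([(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)],
                 ["a", "a", "b"], lambda u, v: u == v)
```

Iterating the step makes same-"species" agents converge into spatial clusters that can be read off the grid directly.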
Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models
International Nuclear Information System (INIS)
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Adams, Paul D.; Moriarty, Nigel W.; Zwart, Peter; Read, Randy J.; Turk, Dusan; Hung, Li-Wei
2007-01-01
Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal. Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models
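The coordinate spread across such an ensemble, the quantity the abstract treats as a lower-bound precision estimate, can be computed as follows (toy coordinates; real use would operate on superposed PDB models):

```python
from statistics import pstdev

def coordinate_spread(models):
    # Per-atom spread across an ensemble of independently rebuilt models:
    # root-sum-square of the per-axis standard deviations.  Per the
    # abstract, this is a lower bound on coordinate uncertainty, not a
    # picture of conformational diversity in the crystal.
    n_atoms = len(models[0])
    spread = []
    for a in range(n_atoms):
        per_axis = [pstdev([m[a][k] for m in models]) for k in range(3)]
        spread.append(sum(s * s for s in per_axis) ** 0.5)
    return spread

# Two toy models of a two-atom structure; only atom 1 varies.
ens = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)],
       [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]
spread = coordinate_spread(ens)
```

High spread flags poorly determined regions (surface side chains and loops); low spread indicates precision, not necessarily accuracy.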
Ni, Bing-Jie; Peng, Lai; Law, Yingyu; Guo, Jianhua; Yuan, Zhiguo
2014-04-01
Autotrophic ammonia oxidizing bacteria (AOB) have been recognized as a major contributor to N2O production in wastewater treatment systems. However, so far N2O models have been proposed based on a single N2O production pathway by AOB, and there is still a lack of effective approach for the integration of these models. In this work, an integrated mathematical model that considers multiple production pathways is developed to describe N2O production by AOB. The pathways considered include the nitrifier denitrification pathway (N2O as the final product of AOB denitrification with NO2(-) as the terminal electron acceptor) and the hydroxylamine (NH2OH) pathway (N2O as a byproduct of incomplete oxidation of NH2OH to NO2(-)). In this model, the oxidation and reduction processes are modeled separately, with intracellular electron carriers introduced to link the two types of processes. The model is calibrated and validated using experimental data obtained with two independent nitrifying cultures. The model satisfactorily describes the N2O data from both systems. The model also predicts shifts of the dominating pathway at various dissolved oxygen (DO) and nitrite levels, consistent with previous hypotheses. This unified model is expected to enhance our ability to predict N2O production by AOB in wastewater treatment systems under varying operational conditions.
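The DO-driven shift between the two pathways can be illustrated with Monod-type rate expressions (the coefficients and functional forms are illustrative stand-ins, not the paper's calibrated electron-balance model):

```python
def n2o_rates(s_nh2oh, s_no2, s_o2, r_max=1.0, k_nh2oh=0.5, k_no2=0.3, k_o2=0.2):
    # Monod-type rate sketches for the two AOB pathways; all coefficients
    # are assumed values for illustration only.
    # NH2OH pathway: byproduct of NH2OH oxidation, favoured at high DO.
    r_nh2oh = r_max * (s_nh2oh / (k_nh2oh + s_nh2oh)) * (s_o2 / (k_o2 + s_o2))
    # Nitrifier denitrification: NO2- as electron acceptor, inhibited by DO.
    r_nd = r_max * (s_no2 / (k_no2 + s_no2)) * (k_o2 / (k_o2 + s_o2))
    return r_nh2oh, r_nd

high_do = n2o_rates(1.0, 1.0, 2.0)   # oxygen-rich conditions
low_do = n2o_rates(1.0, 1.0, 0.01)   # oxygen-limited conditions
```

The opposing oxygen terms switch the dominant pathway with DO, qualitatively matching the pathway shifts the integrated model predicts at various DO and nitrite levels.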
Directory of Open Access Journals (Sweden)
S. Hagemann
2013-05-01
Full Text Available Climate change is expected to alter the hydrological cycle resulting in large-scale impacts on water availability. However, future climate change impact assessments are highly uncertain. For the first time, multiple global climate models (three) and hydrological models (eight) were used to systematically assess the hydrological response to climate change and project the future state of global water resources. This multi-model ensemble allows us to investigate how the hydrology models contribute to the uncertainty in projected hydrological changes compared to the climate models. Due to their systematic biases, GCM outputs cannot be used directly in hydrological impact studies, so a statistical bias correction has been applied. The results show a large spread in projected changes in water resources within the climate–hydrology modelling chain for some regions. They clearly demonstrate that climate models are not the only source of uncertainty for hydrological change, and that the spread resulting from the choice of the hydrology model is larger than the spread originating from the climate models over many areas. But there are also areas showing a robust change signal, such as at high latitudes and in some midlatitude regions, where the models agree on the sign of projected hydrological changes, indicative of higher confidence in this ensemble mean signal. In many catchments an increase of available water resources is expected but there are some severe decreases in Central and Southern Europe, the Middle East, the Mississippi River basin, southern Africa, southern China and south-eastern Australia.
Multiple production of hadrons at high energies in the model of quark-gluon strings
International Nuclear Information System (INIS)
Kaidalov, A.B.; Ter-Martirosyan, K.A.
1983-01-01
Multiple production of hadrons at high energies is considered in the framework of the approach based on a picture of formation and subsequent fission of the quark-gluon strings, corresponding to the Pomeron with α_P(0) > 1. The topological 1/n_f-expansion and the colour-tube model are used. Inclusive cross-sections are expressed in terms of the structure functions and fragmentation functions of quarks, and their limiting values are in agreement with the results of reggeon theory. It is pointed out that an account of rapidity fluctuations of the ends of the quark-gluon strings, connected to valence or sea quarks, allows one to explain a number of characteristic features of the multiple production of hadrons. In particular the model, which takes into account multipomeron configurations, reproduces the experimentally observed rise of inclusive spectra in the central region and describes well both rapidity and multiplicity distributions of charged particles up to energies of the SPS-collider. It is shown that in this approach KNO-scaling is only approximately satisfied and the pattern of its violation at energies √s ≈ 10³ GeV is predicted. Inclusive spectra are investigated in the whole region 0 < x < 1. For x ≳ 0.1, Feynman scaling is violated only logarithmically and deviations from it are very small at √s = 10³-10⁴ GeV
Classification of Multiple Seizure-Like States in Three Different Rodent Models of Epileptogenesis.
Guirgis, Mirna; Serletis, Demitre; Zhang, Jane; Florez, Carlos; Dian, Joshua A; Carlen, Peter L; Bardakjian, Berj L
2014-01-01
Epilepsy is a dynamical disease and its effects are evident in over fifty million people worldwide. This study focused on objective classification of the multiple states involved in the brain's epileptiform activity. Four datasets from three different rodent hippocampal preparations were explored, wherein seizure-like events (SLEs) were induced by the perfusion of a low-Mg(2+)/high-K(+) solution or 4-aminopyridine. Local field potentials were recorded from CA3 pyramidal neurons and interneurons and modeled as Markov processes. Specifically, hidden Markov models (HMMs) were used to determine the nature of the states present. Properties of the Hilbert transform were used to construct the feature spaces for HMM training. By sequentially applying the HMM training algorithm, multiple states were identified both in episodes of SLE and non-SLE activity. Specifically, preSLE and postSLE states were differentiated and multiple inner SLE states were identified. This was accomplished using features extracted from the lower frequencies (1-4 Hz, 4-8 Hz) alongside those of the low- (40-100 Hz) and high-gamma (100-200 Hz) bands of the recorded electrical activity. The learning paradigm of this HMM-based system eliminates the inherent bias associated with other learning algorithms that depend on predetermined state segmentation, and renders it an appropriate candidate for SLE classification.
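The state-decoding step described above can be illustrated with a minimal hidden Markov model sketch. The Viterbi decoder below uses a toy two-state model with discrete observation symbols; all probabilities are assumed for illustration and are not the paper's Hilbert-transform feature spaces or trained models.

```python
# Minimal HMM decoding (Viterbi), illustrating how discrete brain states
# (e.g. preSLE vs. SLE) could be inferred from a sequence of observed
# feature symbols. Toy parameters, not the paper's trained model.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state path for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("preSLE", "SLE")
start_p = {"preSLE": 0.8, "SLE": 0.2}
trans_p = {"preSLE": {"preSLE": 0.7, "SLE": 0.3},
           "SLE": {"preSLE": 0.2, "SLE": 0.8}}
# Observation symbols: 0 = low gamma power, 1 = high gamma power.
emit_p = {"preSLE": {0: 0.9, 1: 0.1}, "SLE": {0: 0.2, 1: 0.8}}

path = viterbi([0, 0, 1, 1, 1], states, start_p, trans_p, emit_p)
print(path)
```

In practice the emission model would be fitted to continuous features (here it is a hand-set discrete table), and the number of states would be chosen by sequentially retraining, as the abstract describes.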
International Nuclear Information System (INIS)
Boucher, Laurel
2013-01-01
A great deal of attention is given to the importance of communication in environmental remediation and radioactive waste management. However, very little attention is given to eliciting multiple perspectives so as to formulate high quality decisions. Plans that are based on a limited number of perspectives tend to be narrowly focused whereas those that are based on a wide variety of perspectives tend to be comprehensive, higher quality, and more apt to be put into application. In addition, existing methods of dialogue have built-in limitations in that they typically draw from the predominant thinking patterns, which focus in some areas but ignore others. This can result in clarity but a lack of comprehensiveness. This paper presents a Perspective Awareness Model which helps groups such as partnering teams, interagency teams, steering committees, and working groups elicit a wide net of perspectives and viewpoints. The paper begins by describing five factors that make cooperation among such groups challenging. Next, a Perspective Awareness Model that makes it possible to manage these five factors is presented. The two primary components of this model, the eight 'Thinking Directions' and the 'Shared Documentation', are described in detail. Several examples are given to illustrate how the Perspective Awareness Model can be used to elicit multiple perspectives to formulate high quality decisions in the area of environmental remediation and radioactive waste management. (authors)
Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk
2018-04-03
Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple models, how the central nervous system (CNS) controls this combination is not fully addressed. In this study, we utilize an Inverse Optimal Control (IOC) framework in order to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment where participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics- and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of different inverse internal models. This formulation, together with our analysis, not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.
Multiple regression models for energy use in air-conditioned office buildings in different climates
International Nuclear Information System (INIS)
Lam, Joseph C.; Wan, Kevin K.W.; Liu Dalong; Tsang, C.L.
2010-01-01
An attempt was made to develop multiple regression models for office buildings in the five major climates in China: severe cold, cold, hot summer and cold winter, mild, and hot summer and warm winter. A total of 12 key building design variables were identified through parametric and sensitivity analysis, and considered as inputs in the regression models. The coefficient of determination R² varies from 0.89 in Harbin to 0.97 in Kunming, indicating that 89-97% of the variations in annual building energy use can be explained by the changes in the 12 parameters. A pseudo-random number generator based on three simple multiplicative congruential generators was employed to generate random designs for evaluation of the regression models. The differences between regression-predicted and DOE-simulated annual building energy use are largely within 10%. It is envisaged that the regression models developed can be used to estimate the likely energy savings/penalty during the initial design stage when different building schemes and design concepts are being considered.
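The random-design generator mentioned above, built from three simple multiplicative congruential generators, closely resembles the classic Wichmann-Hill construction. The sketch below uses the original Wichmann-Hill multipliers and moduli; whether the paper used exactly these constants is an assumption made for illustration.

```python
# Combining three multiplicative congruential generators into a single
# U(0,1) stream, in the style of the Wichmann-Hill (1982) algorithm.
# Such a stream could drive random sampling of building-design variables.

def wichmann_hill(seed1, seed2, seed3):
    """Yield pseudo-random floats in [0, 1) from three combined MCGs."""
    s1, s2, s3 = seed1, seed2, seed3
    while True:
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        # Sum the three normalized streams; the fractional part is U(0,1).
        yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0

gen = wichmann_hill(1, 2, 3)
sample = [next(gen) for _ in range(5)]
print(all(0.0 <= u < 1.0 for u in sample))
```

Each of the three generators alone has a short period; combining them gives a much longer joint period, which is the point of the three-generator construction.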
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; the chelating resin was then separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any systematic error other than that due to matrix effects, the accuracy of the pre-concentration step and the contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. The empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, yielding better accuracy than the other calibration methods.
Nonlinear Modeling and Identification of an Aluminum Honeycomb Panel with Multiple Bolts
Directory of Open Access Journals (Sweden)
Yongpeng Chu
2016-01-01
Full Text Available This paper focuses on the nonlinear dynamics modeling and parameter identification of an Aluminum Honeycomb Panel (AHP) with multiple bolted joints. The finite element method using eight-node solid elements is exploited to model the panel and the bolted connection interface as a homogeneous, isotropic plate and as a thin layer of nonlinear elastic-plastic material, respectively. The material properties of the thin layer are defined by a bilinear elastic-plastic model, which can describe the energy dissipation and softening phenomena in the bolted joints under nonlinear states. Experimental tests at low and high excitation levels are performed to reveal the dynamic characteristics of the bolted structure. In particular, the linear material parameters of the panel are identified via experimental tests at low excitation levels, whereas the nonlinear material parameters of the thin layer are updated by using the genetic algorithm to minimize the residual error between the measured and the simulation data at a high excitation level. It is demonstrated by comparing the frequency responses of the updated finite element model and the experimental system that the thin layer of bilinear elastic-plastic material is very effective for modeling the nonlinear joint interface of the assembled structure with multiple bolts.
The multiple deficit model of dyslexia: what does it mean for identification and intervention?
Ring, Jeremiah; Black, Jeffrey L
2018-04-24
Research demonstrates that phonological skills provide the basis of reading acquisition and are a primary processing deficit in dyslexia. This consensus has led to the development of effective methods of reading intervention. However, a single phonological deficit is not sufficient to account for the heterogeneity of individuals with dyslexia, and recent research provides evidence that supports a multiple-deficit model of reading disorders. Two studies are presented that investigate (1) the prevalence of phonological and cognitive processing deficit profiles in children with significant reading disability and (2) the effects of those same phonological and cognitive processing skills on reading development in a sample of children that received treatment for dyslexia. The results are discussed in the context of implications for identification and an intervention approach that accommodates multiple deficits within a comprehensive skills-based reading program.
Preacher, Kristopher J; Hayes, Andrew F
2008-08-01
Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.
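The decomposition at the heart of multiple mediation can be sketched numerically: with OLS estimates, the total effect of X on Y equals the direct effect plus the sum of the products a_i·b_i of the paths through each mediator. The toy data and the small normal-equations solver below are illustrative only; they are not the SAS/SPSS macros the authors provide, and in practice one would bootstrap confidence intervals for the indirect effects.

```python
# Two-mediator model: X -> M1 -> Y and X -> M2 -> Y plus a direct path.
# With OLS, total effect c = c' + a1*b1 + a2*b2 holds exactly in-sample.

def ols(Xcols, y):
    """Least-squares coefficients (intercept first) via normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in Xcols] for i in range(n)]
    k = len(X[0])
    # Augmented normal-equation matrix [X'X | X'y].
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    b = [0.0] * k
    for i in range(k - 1, -1, -1):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
# Mediators: linear in X plus small fixed perturbations (toy data).
m1 = [0.5 * v + e for v, e in zip(x, [0.1, -0.2, 0.0, 0.2, -0.1, 0.0])]
m2 = [1.0 * v + e for v, e in zip(x, [-0.1, 0.1, 0.2, -0.2, 0.0, 0.0])]
y = [0.3 * v + 2.0 * a + 1.0 * b for v, a, b in zip(x, m1, m2)]

a1 = ols([x], m1)[1]                  # a path: X -> M1
a2 = ols([x], m2)[1]                  # a path: X -> M2
cp, b1, b2 = ols([x, m1, m2], y)[1:]  # direct effect c' and b paths
c_total = ols([x], y)[1]              # total effect: Y ~ X
print(round(c_total, 6), round(cp + a1 * b1 + a2 * b2, 6))
```

Contrasting mediators then amounts to comparing a1·b1 with a2·b2, with uncertainty assessed by resampling.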
International Nuclear Information System (INIS)
Peng, G.H.; Sun, D.H.
2010-01-01
An improved multiple car-following (MCF) model is proposed, based on the full velocity difference (FVD) model, but taking into consideration multiple information inputs from preceding vehicles. The linear stability condition of the model is obtained by using the linear stability theory. Through nonlinear analysis, the modified Korteweg-de Vries (mKdV) equation is derived to describe the traffic behavior near the critical point. Numerical simulation shows that the proposed model is theoretically an improvement over others, while retaining many strong points in the previous ones by adjusting the information of the multiple leading vehicles.
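A minimal sketch of a full velocity difference update extended to several leading vehicles is given below. The optimal-velocity function, the weights and the gains are illustrative assumptions, not the calibrated parameters of the proposed MCF model.

```python
# FVD-style acceleration for vehicle n using weighted headway and velocity
# difference information from m leading vehicles. Parameters are toy values.
import math

def optimal_velocity(headway, v_max=2.0, h_c=4.0):
    # Smooth Bando-type optimal-velocity function of the headway.
    return v_max * (math.tanh(headway - h_c) + math.tanh(h_c)) / 2.0

def fvd_accel(x, v, n, weights, kappa=0.4, lam=0.3):
    """Acceleration of vehicle n from m = len(weights) leading vehicles.

    x, v are positions and velocities; vehicles n+1, n+2, ... lead n."""
    # Weighted headways of successive leader pairs ahead of vehicle n.
    h = sum(w * (x[n + j] - x[n + j - 1]) for j, w in enumerate(weights, 1))
    # Weighted velocity differences between each leader and vehicle n.
    dv = sum(w * (v[n + j] - v[n]) for j, w in enumerate(weights, 1))
    return kappa * (optimal_velocity(h) - v[n]) + lam * dv

# Three vehicles: vehicle 0 reacts to both vehicles 1 and 2.
x = [0.0, 5.0, 10.0]
v = [1.0, 1.0, 1.0]
a = fvd_accel(x, v, 0, weights=[0.7, 0.3])
print(round(a, 4))
```

Setting `weights=[1.0]` recovers the single-leader FVD form, which is the sense in which the multi-leader model generalizes it.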
Directory of Open Access Journals (Sweden)
Kelemen Arpad
2008-08-01
Full Text Available Abstract Background This paper addresses key biological problems and statistical issues in the analysis of large gene expression data sets that describe systemic temporal response cascades to therapeutic doses in multiple tissues such as liver, skeletal muscle, and kidney from the same animals. Affymetrix U34A time course gene expression data are obtained from three different tissues: kidney, liver and muscle. Our goals are to find the concordance of genes in different tissues, to identify the common differentially expressed genes over time, and to examine the reproducibility of the findings by integrating the results through meta-analysis from multiple tissues, in order to gain a significant increase in the power of detecting differentially expressed genes over time and to characterize how the responses of the three tissues to the drug differ. Results and conclusion A Bayesian categorical model for estimating the proportion of the 'call' is used for pre-screening genes. A hierarchical Bayesian mixture model is further developed for the identification of differentially expressed genes across time and of dynamic clusters. The deviance information criterion is applied to determine the number of components for model comparison and selection. The Bayesian mixture model produces the gene-specific posterior probability of differential/non-differential expression and the 95% credible interval, which is the basis for our further Bayesian meta-inference. Meta-analysis is performed in order to identify commonly expressed genes from multiple tissues that may serve as ideal targets for novel treatment strategies and to integrate the results across separate studies. We have found the commonly expressed genes in the three tissues. However, the up/down/no regulation status of these common genes differs at different time points. Moreover, the most differentially expressed genes were found in the liver, then in kidney, and then in muscle.
MMOSS-I: a CANDU multiple-channel thermosyphoning flow stability model
Energy Technology Data Exchange (ETDEWEB)
Gulshani, P [Atomic Energy of Canada Ltd., Mississauga, ON (Canada); Huynh, H [Hydro-Quebec, Montreal, PQ (Canada)
1996-12-31
This paper presents a multiple-channel flow stability model, dubbed MMOSS, developed to predict the conditions for the onset of flow oscillations in a CANDU-type multiple-channel heat transport system under thermosyphoning conditions. The model generalizes that developed previously to account for the effects of any channel flow reversal. Two-phase thermosyphoning conditions are predicted by thermalhydraulic codes for some postulated accident scenarios in CANDU. Two-phase thermosyphoning experiments in the multiple-channel RD-14M facility have indicated that pass-to-pass out-of-phase oscillations in the loop conditions caused the flow in some of the heated channels to undergo sustained reversal in direction. This channel flow reversal had significant effects on the channel and loop conditions. It is, therefore, important to understand the nature of the oscillations and be able to predict the conditions for the onset of the oscillations or for stable flow in RD-14M and the reactor. For stable flow conditions, oscillation-induced channel flow reversal is not expected. MMOSS was developed for a figure-of-eight system with any number of channels. The system characteristic equation was derived from a linearization of the conservation equations. In this paper, the MMOSS characteristic equation is solved for a system of N identical channel assemblies. The resulting model is called MMOSS-I. This simplification provides valuable physical insight and reasonably accurate results. MMOSS-I and a previously-developed steady-state model THERMOSYPHON are used to predict thermosyphoning flow stability maps for RD-14M and the Gentilly 2 reactor. (author). 11 refs., 7 figs.
A collapse pressure prediction model for horizontal shale gas wells with multiple weak planes
Directory of Open Access Journals (Sweden)
Ping Chen
2015-01-01
Full Text Available Since collapse of a horizontal wellbore through a long brittle shale interval is a major problem, the occurrence characteristics of weak planes were analyzed according to outcrop, core, and SEM and FMI data of shale rocks. A strength analysis method was developed for shale rocks with multiple weak planes based on weak-plane strength theory. An analysis was also conducted of the strength characteristics of shale rocks with uniform distribution of multiple weak planes. A collapse pressure prediction model for horizontal wells in shale formations with multiple weak planes was established, which takes into consideration the occurrence of each weak plane, the wellbore stress condition, the borehole azimuth, and the in-situ stress azimuth. Finally, a case study of a horizontal shale gas well in the southern Sichuan Basin was conducted. The results show that the intersection angle between the shale bedding plane and the structural fractures is generally large (nearly orthogonal); with the increase of weak plane number, the strength of the rock mass declines sharply and is more heavily influenced by the weak planes; when there are more than four weak planes, the rock strength tends to be isotropic and the whole strength of the rock mass is greatly weakened, significantly increasing the risk of wellbore collapse. With the increase of weak plane number, the drilling fluid density (collapse pressure) required to keep the borehole stable goes up gradually. For instance, the collapse pressure is 1.04 g/cm3 when there are no weak planes, 1.55 g/cm3 when there is one weak plane, and 1.84 g/cm3 when there are two weak planes. The collapse pressure prediction model for horizontal wells proposed in this paper yields results in better agreement with actual field observations. This model, more accurate and practical than traditional models, can effectively improve the accuracy of wellbore collapse pressure prediction for horizontal shale gas wells.
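The strength decline with weak-plane number can be illustrated with the classic single-plane-of-weakness (Jaeger) slip criterion, extended to several planes by taking the minimum slip strength over all plane orientations. The cohesion, friction angle and plane angles below are illustrative assumptions, not the paper's field data, and the intact-rock strength cap is omitted for brevity.

```python
# Jaeger slip criterion: axial stress at failure on a weak plane inclined
# at angle beta to the major principal stress. More planes give the rock
# mass more ways to fail, so its minimum strength can only decrease.
import math

def weak_plane_strength(sigma3, c_w, phi_w_deg, beta_deg):
    """sigma1 at slip on one weak plane; None if slip is impossible there."""
    phi = math.radians(phi_w_deg)
    beta = math.radians(beta_deg)
    denom = (1.0 - math.tan(phi) / math.tan(beta)) * math.sin(2.0 * beta)
    if denom <= 0.0:
        return None  # orientation outside the sliding range
    return sigma3 + 2.0 * (c_w + sigma3 * math.tan(phi)) / denom

def rock_mass_strength(sigma3, c_w, phi_w_deg, plane_angles):
    """Minimum slip strength over all weak planes (intact rock ignored)."""
    vals = [weak_plane_strength(sigma3, c_w, phi_w_deg, b)
            for b in plane_angles]
    return min(v for v in vals if v is not None)

# Adding a second weak plane cannot raise, and here lowers, the strength.
one = rock_mass_strength(10.0, 5.0, 30.0, [50.0])
two = rock_mass_strength(10.0, 5.0, 30.0, [50.0, 60.0])
print(round(one, 2), round(two, 2), two <= one)
```

With many planes the minimum over orientations approaches the worst-case orientation for any loading direction, which is the sense in which the rock strength "tends to be isotropic".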
Directory of Open Access Journals (Sweden)
Wang Shu-Qiang
2012-07-01
Full Text Available Abstract Background A key challenge in the post-genome era is to identify genome-wide transcriptional regulatory networks, which specify the interactions between transcription factors and their target genes. Numerous methods have been developed for reconstructing gene regulatory networks from expression data. However, most of them are based on coarse-grained qualitative models, and cannot provide a quantitative view of regulatory systems. Results A binding-affinity-based regulatory model is proposed to quantify the transcriptional regulatory network. Multiple quantities, including binding affinity and the activity level of the transcription factor (TF), are incorporated into a general learning model. The sequence features of the promoter and the possible occupancy of nucleosomes are exploited to estimate the binding probability of regulators. Compared with previous models that employ only microarray data, the proposed model can bridge the gap between the relative background frequency of the observed nucleotides and the gene's transcription rate. Conclusions We test the proposed approach on two real-world microarray datasets. Experimental results show that the proposed model can effectively identify the parameters and the activity level of the TF. Moreover, the kinetic parameters introduced in the proposed model can reveal more biological sense than previous models can.
A composite state method for ensemble data assimilation with multiple limited-area models
Directory of Open Access Journals (Sweden)
Matthew Kretschmer
2015-04-01
Full Text Available Limited-area models (LAMs allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.
Adaptive behaviour and multiple equilibrium states in a predator-prey model.
Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii
2015-05-01
There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly. For instance, for a predator-prey system this result can be achieved through the use of a non-monotonic functional response for the predator. However, while the formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore a conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes of environmental conditions. In order to verify this hypothesis, we consider a simple predator-prey model, which is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we made an intuitively transparent assumption that the prey can change its mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" (or "business as usual") behaviour. In order to avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal with this property and does not rely on odd functional responses, higher dimensionality or behaviour change for the predator, exhibits two stable co-existing equilibrium states with basins of attraction separated by a separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.
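A caricature of the behaviour-switching idea can be written down directly: prey use a "risky" mode by default and a "safe" mode (with both growth and exposure reduced, reflecting the "cost of a policy") when predator density is high. The forward-Euler integration and all parameter values below are illustrative assumptions, not the paper's model or its bistability analysis.

```python
# Lotka-Volterra dynamics in which the prey's growth rate r and predation
# rate a depend on a behaviour mode chosen from the current predator
# density. Toy parameters; forward-Euler time stepping.

def simulate(prey, pred, steps=2000, dt=0.01, threshold=1.2):
    risky = dict(r=1.0, a=1.0)   # high growth, high exposure to predation
    safe = dict(r=0.6, a=0.4)    # "cost of the policy": growth reduced too
    b, d = 0.5, 0.8              # conversion efficiency, predator death rate
    for _ in range(steps):
        mode = safe if pred > threshold else risky
        dprey = mode["r"] * prey - mode["a"] * prey * pred
        dpred = b * mode["a"] * prey * pred - d * pred
        prey += dt * dprey
        pred += dt * dpred
    return prey, pred

prey, pred = simulate(1.0, 1.0)
print(prey > 0 and pred > 0)
```

The hard threshold is the crudest possible adaptation rule; the paper's smoother treatment is what allows the two co-existing stable equilibria to be analysed cleanly.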
Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error
Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro
2018-05-01
The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.
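The computational primitive behind the one-clean-qubit model can be simulated exactly at small sizes: after a controlled-U acting on one pure control qubit and a maximally mixed register, the X expectation of the control equals Re tr(U)/2^n. The dense density-matrix sketch below verifies this for a single-qubit phase gate; it illustrates the model itself, not the hardness-of-simulation results discussed in the abstract.

```python
# DQC1 trace estimation: control in |+><+|, register maximally mixed,
# controlled-U, then <X> on the control gives Re tr(U) / dim(U).
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):
    n, m = len(B), len(B[0])
    return [[A[i // n][j // m] * B[i % n][j % m]
             for j in range(len(A[0]) * m)] for i in range(len(A) * n)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def identity(n):
    return [[1.0 + 0j if i == j else 0j for j in range(n)] for i in range(n)]

def dqc1_x_expectation(U):
    n = len(U)
    plus = [[0.5 + 0j, 0.5 + 0j], [0.5 + 0j, 0.5 + 0j]]  # |+><+|
    mixed = [[(1.0 / n if i == j else 0.0) + 0j for j in range(n)]
             for i in range(n)]
    rho = kron(plus, mixed)
    # Controlled-U = |0><0| (x) I + |1><1| (x) U.
    P0 = [[1 + 0j, 0j], [0j, 0j]]
    P1 = [[0j, 0j], [0j, 1 + 0j]]
    CU = madd(kron(P0, identity(n)), kron(P1, U))
    rho = matmul(matmul(CU, rho), dagger(CU))
    X = [[0j, 1 + 0j], [1 + 0j, 0j]]
    return trace(matmul(kron(X, identity(n)), rho)).real

# Phase gate: tr(U) = 1 + exp(i*pi/3), so Re tr(U)/2 = 0.75.
U = [[1 + 0j, 0j], [0j, cmath.exp(1j * cmath.pi / 3)]]
val = dqc1_x_expectation(U)
print(round(val, 6))
```

Measuring the control in the Y basis instead would give Im tr(U)/2^n; the no-go results above concern sampling from such circuits with one output qubit.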
Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils
Directory of Open Access Journals (Sweden)
Fatimah Khaleel Ibrahim
2017-08-01
Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predicting capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing techniques using artificial neural networks. The liquid limit, plasticity index, modified compaction test and CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
New transient-flow modelling of a multiple-fractured horizontal well
International Nuclear Information System (INIS)
Jia, Yong-Lu; Wang, Ben-Cheng; Nie, Ren-Shi; Wang, Dan-Ling
2014-01-01
A new transient-flow modelling of a multiple-fractured horizontal well is presented. Compared to conventional modelling, the new modelling considers more practical physical conditions, such as various inclined angles for different fractures, different fracture intervals, different fracture lengths and fractures that only partially penetrate the formation. A new mathematical method, involving a three-dimensional eigenvalue problem and an orthogonal transform, was created to deduce the exact analytical solutions of pressure transients for constant-rate production in real space. In order to consider a wellbore storage coefficient and skin factor, we used a Laplace-transform approach to convert the exact analytical solutions to solutions in Laplace space. The numerical solutions of pressure transients in real space were then obtained using a Stehfest numerical inversion. Standard type curves were plotted to describe the transient-flow characteristics. Flow regimes were clearly identified from the type curves. Furthermore, the differences between the new modelling and the conventional modelling in pressure transients were compared and discussed. Finally, an example application showing the agreement of the new modelling with real conditions was implemented. Our new modelling is different from, but more practical than, conventional modelling. (paper)
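The Stehfest numerical inversion mentioned above is a standard algorithm and can be sketched generically. The weights below follow the usual Gaver-Stehfest formula and the sketch is checked against a known transform pair, not against the paper's well-test solution.

```python
# Gaver-Stehfest numerical Laplace inversion: f(t) is approximated from
# samples of its Laplace transform F(s) at s = i*ln(2)/t. N must be even.
import math

def stehfest_weights(N):
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2_t * sum(Vi * F(i * ln2_t) for i, Vi in enumerate(V, 1))

# Check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
print(round(approx, 6), round(math.exp(-1.0), 6))
```

The method only needs F(s) at real s, which is why it pairs naturally with Laplace-space analytical solutions that include wellbore storage and skin; accuracy is best for smooth, non-oscillatory f(t).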
Aviation Safety Risk Modeling: Lessons Learned From Multiple Knowledge Elicitation Sessions
Luxhoj, J. T.; Ancel, E.; Green, L. L.; Shih, A. T.; Jones, S. M.; Reveley, M. S.
2014-01-01
Aviation safety risk modeling has elements of both art and science. In a complex domain, such as the National Airspace System (NAS), it is essential that knowledge elicitation (KE) sessions with domain experts be performed to facilitate the making of plausible inferences about the possible impacts of future technologies and procedures. This study discusses lessons learned throughout the multiple KE sessions held with domain experts to construct probabilistic safety risk models for a Loss of Control Accident Framework (LOCAF), FLightdeck Automation Problems (FLAP), and Runway Incursion (RI) mishap scenarios. The intent of these safety risk models is to support a portfolio analysis of NASA's Aviation Safety Program (AvSP). These models use the flexible, probabilistic approach of Bayesian Belief Networks (BBNs) and influence diagrams to model the complex interactions of aviation system risk factors. Each KE session had a different set of experts with diverse expertise, such as pilot, air traffic controller, certification, and/or human factors knowledge that was elicited to construct a composite, systems-level risk model. There were numerous "lessons learned" from these KE sessions that deal with behavioral aggregation, conditional probability modeling, object-oriented construction, interpretation of the safety risk results, and model verification/validation that are presented in this paper.
Hybrid approaches for multiple-species stochastic reaction-diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
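The domain-splitting idea can be conveyed with a heavily simplified 1D toy: integer random walkers on one side, a discretised deterministic diffusion update on the other, and an interface buffer that accumulates fractional deterministic-to-stochastic flux until a whole particle can be emitted, so that total mass is conserved. All rates and the buffer rule are illustrative assumptions, not the authors' scheme.

```python
import random

def hybrid_step(stoch, det, buf, p):
    """One step of a toy 1D hybrid random-walk / finite-volume diffusion
    model.  `stoch` holds integer particle counts, `det` holds float
    concentrations; the last stochastic cell borders the first
    deterministic cell.  Mass only moves by explicit paired transfers,
    so sum(stoch) + sum(det) + buf is conserved."""
    ns, nd = len(stoch), len(det)
    s, d = stoch[:], det[:]
    # stochastic cells: each particle hops left or right with prob p each
    for i in range(ns):
        for _ in range(stoch[i]):
            r = random.random()
            if r < p:                      # hop left (reflecting wall at 0)
                if i > 0:
                    s[i] -= 1; s[i - 1] += 1
            elif r < 2 * p:                # hop right
                s[i] -= 1
                if i < ns - 1:
                    s[i + 1] += 1
                else:
                    d[0] += 1.0            # particle crosses the interface
    # deterministic cells: move fraction p of each cell to each neighbour
    flux = [p * c for c in det]
    for j in range(nd):
        if j > 0:
            d[j] -= flux[j]; d[j - 1] += flux[j]
        if j < nd - 1:
            d[j] -= flux[j]; d[j + 1] += flux[j]
    # leftmost deterministic cell also leaks toward the stochastic side;
    # buffer fractional mass until a whole particle can be emitted
    d[0] -= flux[0]; buf += flux[0]
    while buf >= 1.0:
        buf -= 1.0; s[ns - 1] += 1
    return s, d, buf
```

The sketch keeps the stochastic side integer-valued, which is what preserves extinction-like behaviour that a pure mean-field update would smooth away.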
Directory of Open Access Journals (Sweden)
Tabitha A Graves
Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase the accuracy and precision, and reduce the cost, of population abundance estimates. However, when variables influencing abundance are of interest, and individuals detected via different methods are influenced by the landscape differently, separate analysis of the detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance, using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of two detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e. fewer variables and models with greater weight), and (4) improve the precision of covariate estimates for variables selected in both separate and combined analyses, because the sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables, and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to the risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed …
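A minimal sketch of a two-method N-mixture likelihood, assuming Poisson local abundance and independent binomial detection for each method. The direct summation over N and the parameter names are illustrative; the study's models additionally include covariates on abundance and detection.

```python
from math import comb, exp, factorial, log

def nmix_loglik(lam, p1, p2, counts, Nmax=100):
    """Toy two-method N-mixture log-likelihood: local abundance
    N_i ~ Poisson(lam); method counts y1 ~ Bin(N_i, p1), y2 ~ Bin(N_i, p2)
    are conditionally independent given N_i.  The latent N_i is
    marginalised by direct summation up to Nmax."""
    ll = 0.0
    for y1, y2 in counts:
        site = 0.0
        for N in range(max(y1, y2), Nmax + 1):
            pois = exp(-lam) * lam ** N / factorial(N)
            b1 = comb(N, y1) * p1 ** y1 * (1 - p1) ** (N - y1)
            b2 = comb(N, y2) * p2 ** y2 * (1 - p2) ** (N - y2)
            site += pois * b1 * b2
        ll += log(site)
    return ll
```

Fitting either method alone corresponds to dropping one of the binomial factors, which is exactly the single-method comparison the abstract recommends.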
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori …
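The pattern frequency distributions ("marginals") at the heart of the frequency matching method can be illustrated on a toy 2D training image; the k-by-k scanning template below is an assumption for illustration, not the authors' implementation.

```python
from collections import Counter

def pattern_frequencies(image, k):
    """Relative frequencies of all k-by-k patterns in a 2D training image
    (nested lists of facies codes).  Matching these histograms between a
    training image and a candidate subsurface model is the core idea of
    the frequency matching method."""
    ny, nx = len(image), len(image[0])
    counts = Counter()
    for y in range(ny - k + 1):
        for x in range(nx - k + 1):
            pat = tuple(tuple(image[y + dy][x + dx] for dx in range(k))
                        for dy in range(k))
            counts[pat] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}
```

A dissimilarity between two such histograms (e.g. a chi-squared distance) then serves as the a priori term to be minimised or sampled.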
Classical Logic and Quantum Logic with Multiple and Common Lattice Models
Directory of Open Access Journals (Sweden)
Mladen Pavičić
2016-01-01
We consider a proper propositional quantum logic and show that it has multiple disjoint lattice models, only one of which is an orthomodular lattice (the algebra underlying Hilbert (quantum) space). We give an equivalent proof for classical logic, which turns out to have disjoint distributive and nondistributive ortholattice models. In particular, we prove that both classical logic and quantum logic are sound and complete with respect to each of these lattices. We also show that there is one common nonorthomodular lattice that is a model of both quantum and classical logic. In technical terms, that enables us to run the same classical logic on both a digital (standard, two-subset, 0-1-bit) computer and a nondigital (say, six-subset) computer (with appropriate chips and circuits). With quantum logic, the same six-element common lattice can serve as a benchmark for an efficient evaluation of equations of bigger lattice models or theorems of the logic.
A Neural Network Model to Learn Multiple Tasks under Dynamic Environments
Tsumori, Kenji; Ozawa, Seiichi
When environments change dynamically, the knowledge an agent acquired in one environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge while learning. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may become useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term and short-term memory, and an environment change detector. We evaluate the model on a class of dynamic environments in which multiple function approximation tasks are given sequentially. The experimental results demonstrate that the proposed model provides stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.
DEFF Research Database (Denmark)
Freiesleben, Trine Holm; Sohbati, Reza; Murray, Andrew
2015-01-01
Interest in the optically stimulated luminescence (OSL) dating of rock surfaces has increased significantly over the last few years, as the potential of the method has been explored. It has been realized that luminescence-depth profiles show qualitative evidence for multiple daylight exposure and burial events. To quantify both burial and exposure events, a new mathematical model is developed by expanding the existing models of the evolution of luminescence-depth profiles to include repeated sequential events of burial and exposure to daylight. This new model is applied to an infrared stimulated … events. This study confirms the suggestion that rock surfaces contain a record of exposure and burial history, and that these events can be quantified. The burial age of rock surfaces can thus be dated with confidence, based on a knowledge of their pre-burial light exposure; it may also be possible …
Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.
Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K
2014-11-26
The ability to accurately develop subject-specific, input-causation models of blood glucose concentration (BGC) for large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, providing an effective modeling approach for feedforward control (FFC). The approach is a Wiener block-oriented methodology with unique attributes for meeting the critical requirements of effective, long-term FFC.
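The Wiener block-oriented structure named above (a linear dynamic block feeding a static nonlinearity) can be sketched for a single input; the first-order dynamics and the nonlinearity `f` are placeholders, not the subject-specific multi-input BGC model of the paper.

```python
def wiener_predict(u, a, b, f):
    """Toy single-input Wiener model prediction: a first-order linear
    dynamic block x[k] = a*x[k-1] + b*u[k-1], followed by a static
    nonlinearity y[k] = f(x[k]).  `u` is the input sequence; the initial
    state is zero."""
    x, y = 0.0, []
    for k in range(len(u)):
        x = a * x + b * (u[k - 1] if k > 0 else 0.0)
        y.append(f(x))
    return y
```

In an FFC setting, one such block per measured disturbance (meal, activity, stress) would be identified and the outputs combined; that multi-block layer is omitted here.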
Generation of connectivity-preserving surface models of multiple sclerosis lesions.
Meruvia-Pastor, Oscar; Xiao, Mei; Soh, Jung; Sensen, Christoph W
2011-01-01
Progression of multiple sclerosis (MS) results in brain lesions caused by white matter inflammation. MS lesions have various shapes, sizes and locations, affecting cognitive abilities of patients to different extents. To facilitate the visualization of the brain lesion distribution, we have developed a software tool to build 3D surface models of MS lesions. This tool allows users to create 3D models of lesions quickly and to visualize the lesions and brain tissues using various visual attributes and configurations. The software package is based on breadth-first search based 3D connected component analysis and a 3D flood-fill based region growing algorithm to generate 3D models from binary or non-binary segmented medical image stacks.
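The breadth-first-search connected-component step described above can be sketched on a binary voxel mask; 6-connectivity and the nested-list representation are simplifying assumptions relative to the full image-stack pipeline.

```python
from collections import deque

def connected_components_3d(mask):
    """Label 6-connected components in a binary 3D voxel mask
    (mask[z][y][x] in {0, 1}) using breadth-first search.
    Returns (labels, number_of_components); labels are 1-based,
    0 meaning background."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    comp = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and not labels[z][y][x]:
                    comp += 1                      # new lesion found
                    labels[z][y][x] = comp
                    q = deque([(z, y, x)])
                    while q:
                        cz, cy, cx = q.popleft()
                        for dz, dy, dx in nbrs:
                            az, ay, ax = cz + dz, cy + dy, cx + dx
                            if (0 <= az < nz and 0 <= ay < ny and 0 <= ax < nx
                                    and mask[az][ay][ax] and not labels[az][ay][ax]):
                                labels[az][ay][ax] = comp
                                q.append((az, ay, ax))
    return labels, comp
```

Each labelled component can then be meshed independently, which is what makes per-lesion surface models and per-lesion visual attributes possible.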
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMMs) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters, combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data used for age determination of fish.
Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.
2012-01-01
Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
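The univariate zero-inflated Poisson model that the abstract takes as its starting point has a simple closed-form likelihood; a minimal sketch (parameter names illustrative, and without the semiparametric bivariate extensions the paper develops):

```python
from math import exp, factorial, log

def zip_loglik(pi, lam, counts):
    """Log-likelihood of a zero-inflated Poisson model: with probability
    `pi` an observation is a structural zero; otherwise it is drawn from
    Poisson(lam).  `counts` is an iterable of non-negative integers."""
    ll = 0.0
    for y in counts:
        if y == 0:
            # zero can come from either the structural-zero or Poisson part
            ll += log(pi + (1 - pi) * exp(-lam))
        else:
            ll += log(1 - pi) - lam + y * log(lam) - log(factorial(y))
    return ll
```

Setting `pi = 0` recovers the ordinary Poisson likelihood, which is the natural null model to compare against when judging whether zero inflation is needed.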
Morrison, Kathryn T; Shaddick, Gavin; Henderson, Sarah B; Buckeridge, David L
2016-08-15
This paper outlines a latent process model for forecasting multiple health outcomes arising from a common environmental exposure. Traditionally, surveillance models in environmental health do not link health outcome measures, such as morbidity or mortality counts, to measures of exposure, such as air pollution. Moreover, different measures of health outcomes are treated as independent, while it is known that they are correlated with one another over time as they arise in part from a common underlying exposure. We propose modelling an environmental exposure as a latent process, and we describe the implementation of such a model within a hierarchical Bayesian framework and its efficient computation using integrated nested Laplace approximations. Through a simulation study, we compare distinct univariate models for each health outcome with a bivariate approach. The bivariate model outperforms the univariate models in bias and coverage of parameter estimation, in forecast accuracy and in computational efficiency. The methods are illustrated with a case study using healthcare utilization and air pollution data from British Columbia, Canada, 2003-2011, where seasonal wildfires produce high levels of air pollution, significantly impacting population health. Copyright © 2016 John Wiley & Sons, Ltd.
Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves
Wattanakasiwich, P.; Ananta, S.
2010-07-01
In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Many multiple-choice instruments have therefore been developed to probe students' conceptual understanding of various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses to the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities that students use such knowledge in a range of equivalent contexts. Model analysis consists of two algorithms, the concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state is in the misconception region or not. An item response curve (IRC), derived from item response theory, is a plot of the percentage of students selecting a particular choice versus their total score. The pros and cons of both techniques are compared and discussed.
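An item response curve of the kind described above is just the fraction of students choosing each option, grouped by total score; a minimal sketch for one item:

```python
def item_response_curves(responses, scores, n_choices):
    """Item response curves for one multiple-choice item: for each
    total-score group, the fraction of students choosing each option.
    `responses[i]` is student i's chosen option (0-based) and
    `scores[i]` is that student's total test score."""
    groups = {}
    for choice, score in zip(responses, scores):
        counts = groups.setdefault(score, [0] * n_choices)
        counts[choice] += 1
    return {score: [c / sum(counts) for c in counts]
            for score, counts in sorted(groups.items())}
```

Plotting each option's fraction against the score axis gives the curves; a distractor whose fraction stays high even at high total scores flags a robust misconception.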
Rodriguez-Castro, Benedicto; Glaser, Hugh; Carr, Leslie
2007-01-01
This report presents a study on the practical modelling of the conceptual overlap that might exist among the multiple facets that define a particular ontology domain concept. The notions of conceptual overlap and facet are defined, together with their relation to scenarios of multiple inheritance in ontology models. Starting from the notion of a value partition, a terminology of ontology modelling constructs is introduced that allows the characterization of two types of conceptual overlap with …
Integrating multiple distribution models to guide conservation efforts of an endangered toad
Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in identifying sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
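One hedged way to operationalise the integration of the two models is to overlay thresholded suitability surfaces cell by cell; the threshold and the category labels below are illustrative assumptions, not the authors' published criteria.

```python
def classify_sites(potential, current, threshold=0.5):
    """Combine two habitat-suitability surfaces (flattened to parallel
    lists of scores in [0, 1]).  Cells suitable under the long-term
    ('potential') model but not the recent ('current') model are flagged
    as candidate restoration sites; labels are illustrative."""
    out = []
    for p, c in zip(potential, current):
        if p >= threshold and c >= threshold:
            out.append("current habitat")
        elif p >= threshold:
            out.append("restoration candidate")
        else:
            out.append("unsuitable")
    return out
```

Counting "restoration candidate" cells relative to "current habitat" cells gives the kind of potential-gain figure (e.g. the 67.02% above) that the integrated analysis reports.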
Climate change and watershed mercury export: a multiple projection and model analysis.
Golden, Heather E; Knightes, Christopher D; Conrads, Paul A; Feaster, Toby D; Davis, Gary M; Benedict, Stephen T; Bradley, Paul M
2013-09-01
Future shifts in climatic conditions may impact watershed mercury (Hg) dynamics and transport. An ensemble of watershed models was applied in the present study to simulate and evaluate the responses of hydrological and total Hg (THg) fluxes from the landscape to the watershed outlet and in-stream THg concentrations to contrasting climate change projections for a watershed in the southeastern coastal plain of the United States. Simulations were conducted under stationary atmospheric deposition and land cover conditions to explicitly evaluate the effect of projected precipitation and temperature on watershed Hg export (i.e., the flux of Hg at the watershed outlet). Based on downscaled inputs from 2 global circulation models that capture extremes of projected wet (Community Climate System Model, Ver 3 [CCSM3]) and dry (ECHAM4/HOPE-G [ECHO]) conditions for this region, watershed model simulation results suggest a decrease of approximately 19% in ensemble-averaged mean annual watershed THg fluxes using the ECHO climate-change model and an increase of approximately 5% in THg fluxes with the CCSM3 model. Ensemble-averaged mean annual ECHO in-stream THg concentrations increased 20%, while those of CCSM3 decreased by 9% between the baseline and projected simulation periods. Watershed model simulation results using both climate change models suggest that monthly watershed THg fluxes increase during the summer, when projected flow is higher than baseline conditions. The present study's multiple watershed model approach underscores the uncertainty associated with climate change response projections and their use in climate change management decisions. Thus, single-model predictions can be misleading, particularly in developmental stages of watershed Hg modeling. Copyright © 2013 SETAC.
Multiple Model Predictive Hybrid Feedforward Control of Fuel Cell Power Generation System
Directory of Open Access Journals (Sweden)
Long Wu
2018-02-01
Solid oxide fuel cells (SOFCs) are widely considered an alternative among sustainable distributed-generation technologies. Their load flexibility enables them to adjust power output to meet the balancing requirements of the power grid. Although promising, SOFC control is challenging under load changes, during which the output voltage must be maintained constant and the fuel utilization rate kept within a safe range. The control is made even more intractable by the multivariable coupling and strong nonlinearity across wide-ranging operating conditions. To this end, this paper develops a multiple-model predictive control strategy for reliable SOFC operation. The resistance load is regarded as a measurable disturbance, which is fed to the model predictive controller as feedforward compensation. The coupling is accommodated by the receding-horizon optimization, and the nonlinearity is mitigated by multiple linear models, whose weighted sum serves as the final control action. The merits of the proposed control structure are demonstrated by simulation results.
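The weighted-sum-of-local-models idea can be sketched as gain scheduling over the measured load; the Gaussian membership rule and the local gains below are illustrative assumptions, far simpler than the SOFC predictive controller itself.

```python
from math import exp

def blended_control(load, models, width=10.0):
    """Weighted combination of local linear control laws.  Each local
    model is a tuple (operating_point, gain, bias) giving the local law
    u = gain*load + bias; weights follow a normalised Gaussian
    membership in the measured load (the scheduling variable)."""
    w = [exp(-((load - op) / width) ** 2) for op, _, _ in models]
    total = sum(w)
    return sum(wi * (g * load + b)
               for wi, (_, g, b) in zip(w, models)) / total
```

Near one operating point the blend reduces to that model's law; between operating points the weights interpolate smoothly, which is what lets a bank of linear models cover a strongly nonlinear operating range.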
A Hidden Markov Model Representing the Spatial and Temporal Correlation of Multiple Wind Farms
DEFF Research Database (Denmark)
Fang, Jiakun; Su, Chi; Hu, Weihao
2015-01-01
Accommodating the increasing penetration of wind energy, with its stochastic nature, is becoming a major issue for power system reliability. This paper proposes a methodology to characterize the spatiotemporal correlation of multiple wind farms. First, a hierarchical clustering method based on self-organizing maps is adopted to categorize the similar output patterns of several wind farms into joint states. A hidden Markov model (HMM) is then designed to describe the temporal correlations among these joint states. Unlike the conventional Markov chain model, the accumulated wind power is taken into consideration. The proposed statistical modeling framework is compatible with sequential power system reliability analysis. A case study on the optimal sizing and location of fast-response regulation sources is presented.
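Once wind-farm output patterns have been clustered into joint states, the temporal layer reduces to estimating transition probabilities between states; a minimal empirical-Markov sketch (the hidden-state emission machinery of a full HMM is omitted):

```python
def transition_matrix(states, n_states):
    """Empirical transition-probability matrix for a time-ordered
    sequence of joint-state labels (integers 0..n_states-1).  Row i
    gives P(next state = j | current state = i); all-zero rows (states
    never visited) are left as zeros."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        s = sum(row)
        P.append([c / s if s else 0.0 for c in row])
    return P
```

Sampling trajectories from this matrix yields synthetic joint wind-power sequences, which is what makes the framework compatible with sequential reliability simulation.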
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated, because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and the production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (i.e. the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data had an adjusted R² value of 0.694, meaning that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
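The adjusted R² reported above is the standard small-sample correction to R² that penalises the number of predictors; a minimal sketch with generic data, not the paper's eighteen-building dataset:

```python
def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k - 1),
    where n is the sample size and k the number of predictors.
    `y` are observed values, `y_hat` the regression's fitted values."""
    n = len(y)
    mean = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)
```

Unlike plain R², this statistic can decrease (even go negative) when an added variable contributes less than chance, which is why it is the better headline figure for a multi-variable waste model.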
Directory of Open Access Journals (Sweden)
Guoshi Li
2017-10-01
The thalamus plays a critical role in the genesis of thalamocortical oscillations, yet the underlying mechanisms remain elusive. To understand whether the isolated thalamus can generate multiple distinct oscillations, we developed a biophysical thalamic model to test the hypothesis that the generation of, and transitions between, distinct thalamic oscillations can be explained as a function of neuromodulation by acetylcholine (ACh) and norepinephrine (NE) and of afferent synaptic excitation. Indeed, the model exhibited four distinct thalamic rhythms (delta, sleep spindle, alpha and gamma oscillations) that span the physiological states corresponding to different arousal levels, from deep sleep to focused attention. Our simulation results indicate that the generation of these distinct thalamic oscillations results from both intrinsic oscillatory cellular properties and specific network connectivity patterns. We then systematically varied the ACh/NE and input levels to generate a complete map of the different oscillatory states and their transitions. Lastly, we applied periodic stimulation to the thalamic network and found that entrainment of thalamic oscillations is highly state dependent. Our results support the hypothesis that ACh/NE modulation and afferent excitation define thalamic oscillatory states and their response to brain stimulation. Our model proposes a broader and more central role of the thalamus in the genesis of multiple distinct thalamocortical rhythms than previously assumed.
Parker, L. N.; Zank, G. P.
2013-12-01
Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
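The per-shock building block iterated in such box models is the test-particle DSA operator. The sketch below applies one shock's acceleration, f_out(p) = q p^(-q) ∫ f_in(p') p'^(q-1) dp', by trapezoidal quadrature on a momentum grid; the decompression and convection/diffusion loss terms described above are omitted, and the grid and units are illustrative assumptions.

```python
def dsa_spectrum(p_grid, f_in, q):
    """Apply one diffusive-shock-acceleration step to a distribution
    sampled on an increasing momentum grid: the downstream spectrum is
    f_out(p) = q * p**(-q) * integral_0^p f_in(p') p'**(q-1) dp'
    (test-particle DSA with spectral index q), with the integral
    accumulated by the trapezoidal rule."""
    out = []
    integral = 0.0
    for i, p in enumerate(p_grid):
        if i > 0:
            p0, p1 = p_grid[i - 1], p
            g0 = f_in[i - 1] * p0 ** (q - 1)
            g1 = f_in[i] * p1 ** (q - 1)
            integral += 0.5 * (g0 + g1) * (p1 - p0)
        out.append(q * p ** (-q) * integral)
    return out
```

Re-applying this operator with a decompression shift of the grid between applications is the essence of the multiple-shock iteration; each pass hardens the spectrum toward the repeated-shock limit.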
Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams
Lipatov, Alexander S.
2011-01-01
We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flows (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion, fluid-electron) model. Note that a mesh may be needed when the electromagnetic field is computed as part of the model. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While a particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method still requires verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.
Modeling and validation of multiple joint reflections for ultra- narrow gap laser welding
Energy Technology Data Exchange (ETDEWEB)
Milewski, J.; Keel, G. [Los Alamos National Lab., NM (United States); Sklar, E. [Opticad Corp., Santa Fe, New Mexico (United States)
1995-12-01
The effects of multiple internal reflections within a laser weld joint as a function of joint geometry and processing conditions have been characterized. A computer model utilizing optical ray tracing is used to predict the reflective propagation of laser beam energy focused into the narrow gap of a metal joint for the purpose of predicting the location of melting and coalescence which form the weld. The model allows quantitative analysis of the effects of changes to joint geometry, laser design, materials and processing variables. This analysis method is proposed as a way to enhance process efficiency and design laser welds which display deep penetration and high depth to width aspect ratios, reduced occurrence of defects and enhanced melting. Of particular interest to laser welding is the enhancement of energy coupling to highly reflective materials. The weld joint is designed to act as an optical element which propagates and concentrates the laser energy deep within the joint to be welded. Experimentation has shown that it is possible to produce welds using multiple passes to achieve deep penetration and high depth to width aspect ratios without the use of filler material. The enhanced laser melting and welding of aluminum has been demonstrated. Optimization through modeling and experimental validation has resulted in the development of a laser welding process variant we refer to as Ultra-Narrow Gap Laser Welding.
Efficient surrogate models for reliability analysis of systems with multiple failure modes
International Nuclear Information System (INIS)
Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran
2011-01-01
Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. Highlights: extends efficient global reliability analysis to systems with multiple failure modes; constructs locally accurate Gaussian process models of each response; highly efficient and accurate method for assessing system reliability; effectiveness is demonstrated on several test problems from the literature.
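The system-level quantity being approximated can be stated directly: for a series system, failure occurs when any component limit state goes negative, and the failure probability is that of the union event. A brute-force Monte Carlo estimate, with two hypothetical linear limit states standing in for expensive component responses, shows the target that the surrogate-based method reproduces at far lower cost:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical component limit states of standard-normal inputs;
# the (series) system fails when either is negative.
def g1(x1, x2):
    return 3.0 + x1 - x2          # assumed linear limit state

def g2(x1, x2):
    return 3.0 - x1 - 0.5 * x2    # assumed linear limit state

n = 200_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
fail = (g1(x1, x2) < 0) | (g2(x1, x2) < 0)   # union over failure modes
pf = fail.mean()                              # system failure probability
```

A locally accurate surrogate only needs to resolve the sign of each g_i near its zero contour, which is why accuracy far from the limit states can be sacrificed for efficiency.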
Fast solar radiation pressure modelling with ray tracing and multiple reflections
Li, Zhen; Ziebart, Marek; Bhattarai, Santosh; Harrison, David; Grey, Stuart
2018-05-01
Physics-based SRP (Solar Radiation Pressure) models using ray tracing methods are powerful tools when modelling the forces on complex real-world space vehicles. Currently, high-resolution (1 mm) ray tracing with secondary intersections is done on high performance computers at UCL (University College London). This study introduces the BVH (Bounding Volume Hierarchy) into the ray tracing approach for physics-based SRP modelling and makes it possible to run high-resolution analysis on personal computers. The ray tracer is both general and efficient enough to cope with the complex shape of satellites and multiple reflections (three or more, with no upper limit). In this study, the traditional ray tracing technique is introduced first, and then the BVH is integrated into the ray tracing. Four aspects of the ray tracer were tested to investigate its performance: runtime, accuracy, the effects of multiple reflections and the effects of pixel array resolution. Test results in runtime on GPS IIR and Galileo IOV (In Orbit Validation) satellites show that the BVH can make the force model computation 30-50 times faster. The ray tracer has an absolute accuracy of several nanonewtons, established by comparing the test results for spheres and planes with the analytical computations. The multiple reflection effects are investigated, in both the intersection number and the acceleration, on GPS IIR, Galileo IOV and Sentinel-1 spacecraft. Considering the number of intersections, the 3rd reflection can capture 99.12%, 99.14%, and 91.34% of the total reflections for the GPS IIR and Galileo IOV satellite buses and the Sentinel-1 spacecraft, respectively. In terms of the multiple reflection effects on the acceleration, the secondary reflection effect for the Galileo IOV satellite and Sentinel-1 can reach 0.2 nm/s² and 0.4 nm/s², respectively. The error percentage in the acceleration magnitude results shows that the 3rd reflection should be considered in order to keep it less than 0.035%.
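As a baseline for what the ray tracer evaluates at each surface hit, the standard flat-plate SRP model gives the force at normal incidence from the absorbed, specularly reflected, and diffusely reflected fractions. This is the textbook single-bounce formula, not the paper's multi-bounce BVH method; the irradiance value assumes 1 AU:

```python
PHI = 1361.0        # solar irradiance at 1 AU, W/m^2 (assumed)
C = 299_792_458.0   # speed of light, m/s

def srp_force(area, rho_s, rho_d):
    """Force (N) on a flat plate of `area` m^2 at normal incidence.

    rho_s: specular reflectivity, rho_d: diffuse reflectivity;
    the absorbed fraction is alpha = 1 - rho_s - rho_d.
    Limiting cases: fully absorbing -> Phi*A/c, fully specular -> 2*Phi*A/c.
    """
    alpha = 1.0 - rho_s - rho_d
    return (PHI / C) * area * (alpha + 2.0 * rho_s + (5.0 / 3.0) * rho_d)
```

For a 1 m² absorbing plate this is on the order of microneutons, which is why the nanonewton absolute accuracy quoted above matters for precise orbit determination.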
Multi-Frame Rate Based Multiple-Model Training for Robust Speaker Identification of Disguised Voice
DEFF Research Database (Denmark)
Prasad, Swati; Tan, Zheng-Hua; Prasad, Ramjee
2013-01-01
Speaker identification systems are prone to attack when voice disguise is adopted by the user. To address this issue, our paper studies the effect of using different frame rates on the accuracy of the speaker identification system for disguised voice. In addition, a multi-frame rate based multiple-model training method is proposed. The experimental results show the superior performance of the proposed method compared to the commonly used single frame rate method for three types of disguised voice taken from the CHAINS corpus.
Ansorge, Heather L; Adams, Sheila; Jawad, Abbas F; Birk, David E; Soslowsky, Louis J
2012-04-30
During neonatal development, tendons undergo a well-orchestrated process whereby extensive structural and compositional changes occur in synchrony to produce a normal tissue. Conversely, during the repair response to injury, structural and compositional changes occur, but a mechanically inferior tendon is produced. As a result, developmental processes have been postulated as a potential paradigm for elucidating the mechanistic insight required to develop treatment modalities that improve adult tissue healing. The objective of this study was to compare and contrast normal development with injury during early and late developmental healing. Using backwards multiple linear regressions, quantitative and objective information was obtained on the structure-function relationships in tendon. Specifically, proteoglycans were shown to be significant predictors of modulus during early developmental healing but not during late developmental healing or normal development. Multiple independent parameters predicted percent relaxation during normal development; however, only biglycan and fibril diameter parameters predicted percent relaxation during early developmental healing. Lastly, multiple differential predictors were observed between early development and early developmental healing; however, no differential predictors were observed between late development and late developmental healing. This study presents a model through which the compositional and structural parameters that affect the development of mechanical properties can be objectively and quantitatively analyzed. In addition, information from this study can be used to develop new treatments and therapies through which improved adult tendon healing can be obtained.
Directory of Open Access Journals (Sweden)
Wasaye Muhammad Abdul
2017-01-01
An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross-sections of electrons and positrons by neutral atoms have been calculated by using the Dirac partial wave program ELSEPA. The Legendre coefficients are accurately computed by using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling the angular distribution has been developed. The model uses an efficient rejection sampling method for low-energy electrons (<500 keV) and long path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed. The latter uses adjustable parameters determined from the fitting of the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
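The rejection-sampling step can be sketched generically: propose angles under a flat envelope and keep those falling under the target density. The small-angle-peaked density below is an illustrative stand-in, not the Goudsmit-Saunderson distribution itself (which the paper builds from Legendre expansions of ELSEPA cross sections):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_angle(pdf, theta_max, f_max, n):
    """Rejection-sample n angles from an unnormalized pdf on [0, theta_max].

    f_max must bound pdf from above on the interval (the flat envelope).
    """
    out = []
    while len(out) < n:
        theta = rng.uniform(0.0, theta_max, size=n)   # propose under envelope
        u = rng.uniform(0.0, f_max, size=n)
        out.extend(theta[u < pdf(theta)])             # accept below the pdf
    return np.array(out[:n])

# Illustrative forward-peaked angular distribution (hypothetical).
pdf = lambda t: np.exp(-5.0 * t) * np.sin(t)
samples = sample_angle(pdf, np.pi, 0.1, 10_000)
```

The acceptance rate falls as the distribution becomes more sharply peaked, which is consistent with the paper switching to a fitted analytical distribution where rejection sampling becomes inefficient.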
Directory of Open Access Journals (Sweden)
Dai Hongying
2013-01-01
Background: Multifactor Dimensionality Reduction (MDR) has been widely applied to detect gene-gene (GxG) interactions associated with complex diseases. Existing MDR methods summarize disease risk by a dichotomous predisposing model (high-risk/low-risk) from one optimal GxG interaction, which does not take the accumulated effects from multiple GxG interactions into account. Results: We propose an Aggregated-Multifactor Dimensionality Reduction (A-MDR) method that exhaustively searches for and detects significant GxG interactions to generate an epistasis-enriched gene network. An aggregated epistasis-enriched risk score, which takes into account multiple GxG interactions simultaneously, replaces the dichotomous predisposing risk variable and provides higher resolution in the quantification of disease susceptibility. We evaluate this new A-MDR approach in a broad range of simulations. Also, we present the results of an application of the A-MDR method to a data set derived from Juvenile Idiopathic Arthritis patients treated with methotrexate (MTX) that revealed several GxG interactions in the folate pathway that were associated with treatment response. The epistasis-enriched risk score that pooled information from 82 significant GxG interactions distinguished MTX responders from non-responders with 82% accuracy. Conclusions: The proposed A-MDR is innovative in the MDR framework to investigate aggregated effects among GxG interactions. New measures (pOR, pRR and pChi) are proposed to detect multiple GxG interactions.
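The aggregation idea can be sketched in a few lines: rather than one dichotomous high-risk/low-risk label from a single best interaction, count high-risk calls across all significant GxG interactions. The genotype data, the selected pairs, and the high-risk rule below are all hypothetical placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy genotype matrix: rows = subjects, columns = SNPs coded 0/1/2.
geno = rng.integers(0, 3, size=(100, 4))

# Hypothetical high-risk rule for a GxG pair: both SNPs carry a minor allele.
def high_risk(pair, g):
    i, j = pair
    return (g[:, i] > 0) & (g[:, j] > 0)

# Hypothetical set of significant GxG interactions found by the search step.
pairs = [(0, 1), (0, 2), (2, 3)]

# Aggregated risk score: number of high-risk calls per subject across
# all significant interactions (0 .. len(pairs)).
score = sum(high_risk(p, geno).astype(int) for p in pairs)
```

The integer-valued score gives the finer gradation of susceptibility described above, in place of a single binary label.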
LES of n-Dodecane Spray Combustion Using a Multiple Representative Interactive Flamelets Model
Directory of Open Access Journals (Sweden)
Davidovic Marco
2017-09-01
A single-hole n-dodecane spray flame is studied in a Large-Eddy Simulation (LES) framework under Diesel-relevant conditions using a Multiple Representative Interactive Flamelets (MRIF) combustion model. Diesel spray combustion is strongly affected by the mixture formation process, which is dominated by several physical processes such as the flow within the injector, break-up of the liquid fuel jet, evaporation and turbulent mixing with the surrounding gas. While the effects of nozzle-internal flow and primary breakup are captured within tuned model parameters in traditional Lagrangian spray models, an alternative approach is applied in this study, where the initial droplet conditions and primary fuel jet breakup are modeled based on results from highly resolved multiphase simulations with a resolved interface. A highly reduced chemical mechanism consisting of 57 species and 217 reactions has been developed for n-dodecane, achieving good computational performance in solving the chemical reactions. The MRIF model, which has demonstrated its capability of capturing combustion and pollutant formation under typical Diesel conditions in Reynolds-Averaged Navier-Stokes (RANS) simulations, is extended for application in LES. In the standard RIF combustion model, representative chemistry conditioned on mixture fraction is solved interactively with the flow. Subfilter-scale mixing is modeled by the scalar dissipation rate. While the standard RIF model only includes temporal changes of the scalar dissipation rate, the spatial distribution can be accounted for by extending the model to multiple flamelets, which also enables the possibility of capturing different fuel residence times. Overall, the model shows good agreement with experimental data regarding both low- and high-temperature combustion characteristics. It is shown that the ignition process and pollutant formation are affected by turbulent mixing. First, a cool flame is initiated at approximately
Modeling a historical mountain pine beetle outbreak using Landsat MSS and multiple lines of evidence
Assal, Timothy J.; Sibold, Jason; Reich, Robin M.
2014-01-01
Mountain pine beetles are significant forest disturbance agents, capable of inducing widespread mortality in coniferous forests in western North America. Various remote sensing approaches have assessed the impacts of beetle outbreaks over the last two decades. However, few studies have addressed the impacts of historical mountain pine beetle outbreaks, including the 1970s event that impacted Glacier National Park. The lack of spatially explicit data on this disturbance represents both a major data gap and a critical research challenge in that wildfire has removed some of the evidence from the landscape. We utilized multiple lines of evidence to model forest canopy mortality as a proxy for outbreak severity. We incorporate historical aerial and landscape photos, aerial detection survey data, a nine-year collection of satellite imagery and abiotic data. This study presents a remote sensing based framework to (1) relate measurements of canopy mortality from fine-scale aerial photography to coarse-scale multispectral imagery and (2) classify the severity of mountain pine beetle affected areas using a temporal sequence of Landsat data and other landscape variables. We sampled canopy mortality in 261 plots from aerial photos and found that insect effects on mortality were evident in changes to the Normalized Difference Vegetation Index (NDVI) over time. We tested multiple spectral indices and found that a combination of NDVI and the green band resulted in the strongest model. We report a two-step process where we utilize a generalized least squares model to account for the large-scale variability in the data and a binary regression tree to describe the small-scale variability. The final model had a root mean square error estimate of 9.8% canopy mortality, a mean absolute error of 7.6% and an R2 of 0.82. The results demonstrate that a model of percent canopy mortality as a continuous variable can be developed to identify a gradient of mountain pine beetle severity on the
Directory of Open Access Journals (Sweden)
Linhong Wang
2013-01-01
As an important component of the urban adaptive traffic control system, the subarea partition algorithm divides the road network into small subareas and then determines the optimal signal control mode for each signalized intersection. The correlation model is the core of the subarea partition algorithm because it quantifies the correlation degree of adjacent signalized intersections and decides whether these intersections can be grouped into one subarea. In most cases, there are more than two intersections in one subarea. However, current research only focuses on the correlation model for two adjacent intersections. The objective of this study is to develop a model which can calculate the correlation degree of multiple intersections adaptively. The cycle lengths, link lengths, number of intersections, and path flow between upstream and downstream coordinated phases were selected as the contributing factors of the correlation model. Their joint impacts on the performance of the coordinated control mode relative to the isolated control mode were further studied using numerical experiments. The paper then proposed a correlation index (CI) as an alternative to relative performance. The relationship between CI and the four contributing factors was established in order to predict the correlation, which determines whether adjacent intersections can be partitioned into one subarea. A value of 0 was set as the threshold of CI: if CI is larger than 0, multiple intersections can be partitioned into one subarea; otherwise, they should be separated. Finally, case studies were conducted in a real-life signalized network to evaluate the performance of the model. The results show that the CI simulates the relative performance well and can be a reliable index for subarea partition.
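Once pairwise CI values are predicted, the partition step itself (group intersections whose CI exceeds the 0 threshold into one subarea) is a connected-components problem. A minimal union-find sketch, with made-up CI values rather than outputs of the paper's model:

```python
# Group signalized intersections into subareas: merge adjacent pairs
# whose correlation index CI > 0 (the threshold used above).
def partition(n, edges):
    """edges: list of (i, j, ci) for adjacent intersections.
    Returns a subarea label for each of the n intersections."""
    parent = list(range(n))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j, ci in edges:
        if ci > 0:                       # correlated: coordinate together
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Five intersections along an arterial, illustrative CI values.
labels = partition(5, [(0, 1, 0.8), (1, 2, -0.3), (2, 3, 0.1), (3, 4, 0.5)])
```

Here intersections 0-1 form one subarea and 2-3-4 another, since the 1-2 link has negative CI.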
An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt
Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir
2018-04-01
A credit card is a convenient alternative to cash or cheques, and it is an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt, and to identify which variables are significant. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method and MLR models with the mean absolute deviation method. After comparing the three methods, it was found that the MLR model with the LQD method was the best model, with the lowest value of mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit card debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for banks applying robust methods, so that they can better understand the options best aligned with their goals when drawing inferences about credit card debt.
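For reference, the non-robust baseline in such a comparison is ordinary least squares, judged by mean square error. The synthetic data below, seeded with a few gross outliers (the situation in which robust fits such as LQD or MAD-based regression pay off), is illustrative only and not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: y = 1 + 2*x1 - x2 + noise, plus a few gross outliers.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[:5] += 20.0                        # contaminating outliers

# Ordinary least squares fit and its mean square error.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((y - X @ beta) ** 2)
```

A robust estimator would downweight the five contaminated observations and typically achieve a lower MSE on the clean majority of the data, which is the comparison criterion used in the study.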
Pham-The, Hai; Nam, Nguyen-Hai; Nga, Doan-Viet; Hai, Dang Thanh; Dieguez-Santana, Karel; Marrero-Poncee, Yovani; Castillo-Garit, Juan A; Casanola-Martin, Gerardo M; Le-Thi-Thu, Huong
2018-02-09
Quantitative Structure-Activity Relationship (QSAR) modeling has been widely used in medicinal chemistry and computational toxicology for many years. Today, as the amount of chemicals is increasing dramatically, QSAR methods have become pivotal for handling the data, identifying a decision, and gathering useful information from data processing. The advances in this field have paved the way for numerous alternative approaches that require deep mathematics in order to enhance the learning capability of QSAR models. One of these directions is the use of Multiple Classifier Systems (MCSs), which potentially provide a means to exploit the advantages of manifold learning through decomposition frameworks, while improving generalization and predictive performance. In this paper, we present MCSs as a next generation of QSAR modeling techniques and discuss the opportunity to mine the vast number of models already published in the literature. We systematically revisit the theoretical frameworks of MCSs as well as current advances in MCS application for QSAR practice. Furthermore, we illustrate our idea by describing ensemble approaches to modeling histone deacetylase (HDAC) inhibitors. We expect that our analysis will contribute to a better understanding of MCS application and its future perspectives for improving the decision making of QSAR models.
A MULTIPLE SCATTERING POLARIZED RADIATIVE TRANSFER MODEL: APPLICATION TO HD 189733b
Energy Technology Data Exchange (ETDEWEB)
Kopparla, Pushkar; Yung, Yuk L. [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA (United States); Natraj, Vijay; Swain, Mark R. [Jet Propulsion Laboratory (NASA-JPL), Pasadena, CA (United States); Zhang, Xi [Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ (United States); Wiktorowicz, Sloane J., E-mail: pkk@gps.caltech.edu [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA (United States)
2016-01-20
We present a multiple scattering vector radiative transfer model that produces disk integrated, full phase polarized light curves for reflected light from an exoplanetary atmosphere. We validate our model against results from published analytical and computational models and discuss a small number of cases relevant to the existing and possible near-future observations of the exoplanet HD 189733b. HD 189733b is arguably the most well observed exoplanet to date and the only exoplanet to be observed in polarized light, yet it is debated if the planet’s atmosphere is cloudy or clear. We model reflected light from clear atmospheres with Rayleigh scattering, and cloudy or hazy atmospheres with Mie and fractal aggregate particles. We show that clear and cloudy atmospheres have large differences in polarized light as compared to simple flux measurements, though existing observations are insufficient to make this distinction. Furthermore, we show that atmospheres that are spatially inhomogeneous, such as being partially covered by clouds or hazes, exhibit larger contrasts in polarized light when compared to clear atmospheres. This effect can potentially be used to identify patchy clouds in exoplanets. Given a set of full phase polarimetric measurements, this model can constrain the geometric albedo, properties of scattering particles in the atmosphere, and the longitude of the ascending node of the orbit. The model is used to interpret new polarimetric observations of HD 189733b in a companion paper.
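The simplest limiting case of such a model is single Rayleigh scattering, whose degree of linear polarization depends only on the scattering angle; the full model generalizes this to multiple scattering of Stokes vectors integrated over the planetary disk:

```python
import numpy as np

def rayleigh_polarization(theta):
    """Degree of linear polarization for single Rayleigh scattering,
    P(theta) = (1 - cos^2 theta) / (1 + cos^2 theta): zero in the exact
    forward and backward directions, 100% at a 90-degree scattering angle."""
    c2 = np.cos(theta) ** 2
    return (1.0 - c2) / (1.0 + c2)
```

This angular dependence is why polarized light is most diagnostic near quadrature phases, where the planet is seen at scattering angles close to 90 degrees.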
Directory of Open Access Journals (Sweden)
Nabeela Nathoo
2014-01-01
There are exciting new advances in multiple sclerosis (MS) research, resulting in a growing understanding of both the complexity of the disorder and the relative involvement of grey matter, white matter and inflammation. An increasing need for preclinical imaging is anticipated, as animal models provide insights into the pathophysiology of the disease. Magnetic resonance (MR) imaging is the key imaging tool used to diagnose and to monitor disease progression in MS, and thus will be a cornerstone for future research. Although gadolinium-enhancing and T2 lesions on MRI have been useful for detecting MS pathology, they do not correlate with disability. Therefore, new MRI methods are needed. Such methods require validation in animal models. The increasing necessity for MRI of animal models makes it critical and timely to understand what research has been conducted in this area and what potential there is for use of MRI in preclinical models of MS. Here, we provide a review of MRI and magnetic resonance spectroscopy (MRS) studies that have been carried out in animal models of MS that focus on pathology. We compare the MRI phenotypes of animals and patients and provide advice on how best to use animal MR studies to increase our understanding of the linkages between MR and pathology in patients. This review describes how MRI studies of animal models have been, and will continue to be, used in the ongoing effort to understand MS.
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
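All three variants (SACE, MACE, RACE) share the ACE detector. In its commonly used form the score is a squared cosine between the background-whitened pixel and target spectra; the sketch below assumes the background mean and inverse covariance have been estimated elsewhere, and the vectors are hypothetical:

```python
import numpy as np

def ace(x, t, mu, sigma_inv):
    """Adaptive Cosine Estimator score in [0, 1]: squared cosine between
    pixel x and target spectrum t after background whitening, given the
    background mean mu and inverse covariance sigma_inv."""
    xc, tc = x - mu, t - mu
    num = float(tc @ sigma_inv @ xc) ** 2
    den = float(tc @ sigma_inv @ tc) * float(xc @ sigma_inv @ xc)
    return num / den

# Pixel proportional to the target spectrum scores 1 (perfect match).
score = ace(np.array([1.0, 2.0]), np.array([2.0, 4.0]),
            np.zeros(2), np.eye(2))
```

Because the score is invariant to pixel scaling, ACE tolerates illumination differences, which is what makes the radiance-domain variants workable across atmospheric conditions.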
Directory of Open Access Journals (Sweden)
Da-Ming Yeh
This study examined the feasibility of quantitatively evaluating multiple biokinetic models and established the validity of the different compartment models using an assembled water phantom. Most commercialized phantoms are made to survey the imaging system, since this is essential to increase diagnostic accuracy for quality assurance. In contrast, few customized phantoms are specifically made to represent multi-compartment biokinetic models, because the complicated calculations required to solve the biokinetic models and the time-consuming verification of the obtained solutions have greatly impeded progress over the past decade. Nevertheless, in this work, five biokinetic models were separately defined by five groups of simultaneous differential equations to obtain the time-dependent radioactive concentration changes inside the water phantom. The water phantom was assembled from seven acrylic boxes in four different sizes, and the boxes were linked by varying combinations of hoses to represent the multiple biokinetic models from the biomedical perspective. The boxes connected by hoses were then regarded as a closed water loop with only one infusion and drain. 129.1±24.2 MBq of Tc-99m labeled methylene diphosphonate (MDP) solution was thoroughly infused into the water boxes before gamma scanning; then the water was replaced with de-ionized water to simulate the biological removal rate among the boxes. The water was driven by an automatic infusion pump at 6.7 c.c./min, while the biological half-lives of the four different-sized boxes (64, 144, 252, and 612 c.c.) were 4.8, 10.7, 18.8, and 45.5 min, respectively. The five models of derived time-dependent concentrations for the boxes were estimated either by a self-developed program run in MATLAB or by scanning via a gamma camera facility. Agreement or disagreement between the practical scanning and the theoretical prediction in the five models was thoroughly discussed.
A systematic study of multiple minerals precipitation modelling in wastewater treatment.
Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J
2015-11-15
Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed for integration into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous-phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h^-1 with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually at larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). Implementation was therefore relatively flexible, and would be readily expandable to include other
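The kinetic part of such a model can be sketched with a forward-Euler step per mineral, precipitating at a rate proportional to supersaturation. Here the saturation ratios are held fixed (a real implementation updates them as dissolved ions deplete, coupled to the equilibrium part), and all parameter values are illustrative rather than fitted values from the study:

```python
# Forward-Euler sketch: each mineral precipitates at rate
# r = k * max(S - 1, 0), with S the saturation ratio.
k = {"struvite": 5.0, "calcite": 0.5}    # kinetic rate coefficients, 1/h (assumed)
S = {"struvite": 3.0, "calcite": 1.5}    # saturation ratios (assumed constant)
X = {m: 0.0 for m in k}                  # precipitated amount, arbitrary units

n_steps, t_end = 200, 2.0                # simulate 2 hours
dt = t_end / n_steps
for _ in range(n_steps):
    for m in k:
        X[m] += dt * k[m] * max(S[m] - 1.0, 0.0)
```

With constant saturation the growth is linear (struvite reaches 20 units, calcite 0.5 after 2 h), which shows why a large fitted k simply drives the mineral rapidly toward equilibrium, consistent with the asymmetric confidence regions reported above.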
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
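Rubin's Rules themselves are compact enough to state in code: average the m imputation-specific estimates, and combine within- and between-imputation variance into the total variance. A minimal sketch with made-up estimates:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m imputation-specific estimates and variances with Rubin's Rules."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                       # pooled point estimate
    w = u.mean()                          # within-imputation variance
    b = q.var(ddof=1)                     # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b           # total variance
    return qbar, t

# Three imputed data sets, hypothetical coefficient estimates and variances.
est, total_var = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.03])
```

The pooled estimate and total variance then feed the Wald-type significance test; pooling a multi-level categorical covariate is less straightforward, which is the problem the paper addresses.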
International Nuclear Information System (INIS)
Hatanaka, Koichiro; Watari, Shingo; Ijiri, Yuji
1999-11-01
Safety assessment of the geological isolation system according to the groundwater scenario has traditionally been conducted based on the single-canister configuration, and the safety of the total system has then been evaluated based on the dose rates obtained by multiplying the migration rates released from the engineered barrier and/or the natural barrier by dose conversion factors and by the total number of canisters disposed in the repository. The dose conversion factors can be obtained from the biosphere analysis. In this study, we focused on the effect of multiple sources due to the disposal of canisters at different positions in the repository. When the effect of multiple sources is taken into consideration, concentration interference in the repository region can take place. Therefore, a radionuclide transport model/code considering the effect of concentration interference due to multiple sources was developed to assess this effect quantitatively. The newly developed model/code was verified through comparison analysis with the existing radionuclide transport analysis code used in the second progress report. In addition, the effect of the concentration interference was evaluated by setting a simple problem using the newly developed analysis code. The results show that the maximum peak value of the migration rates from the repository was about two orders of magnitude lower than that based on the single-canister configuration. Since the analysis code was developed by assuming that all canisters disposed of along the one-dimensional groundwater flow contribute to the concentration interference in the repository region, this assumption should be verified by conducting two- or three-dimensional analysis considering heterogeneous geological structure as future work. (author)
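The multiple-source effect has a simple linear-transport illustration: the concentration field is the superposition of identical plumes released at different canister positions along the flow path. The sketch below sums 1-D advection-dispersion point-source solutions; all parameters and positions are arbitrary illustrations, not values from the report:

```python
import numpy as np

def plume(x, t, x0, v=1.0, D=0.5, M=1.0):
    """1-D advection-dispersion solution for an instantaneous point source
    of mass M released at x0 (infinite domain, velocity v, dispersion D)."""
    return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(
        -((x - x0 - v * t) ** 2) / (4.0 * D * t))

x = np.linspace(0.0, 50.0, 501)
sources = [0.0, 2.0, 4.0, 6.0]            # hypothetical canister positions
c_total = sum(plume(x, t=10.0, x0=s) for s in sources)
```

Where the individual plumes overlap, the summed concentration exceeds any single-source value; in the report's model this interference reduces the concentration gradients driving release, lowering the peak migration rate relative to the single-canister calculation.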
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems
Directory of Open Access Journals (Sweden)
Xiaofei He
2018-02-01
Full Text Available Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by the Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries.
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems †
Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu
2018-01-01
Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries
Towards an Iterated Game Model with Multiple Adversaries in Smart-World Systems.
He, Xiaofei; Yang, Xinyu; Yu, Wei; Lin, Jie; Yang, Qingyu
2018-02-24
Diverse and varied cyber-attacks challenge the operation of the smart-world system that is supported by Internet-of-Things (IoT) (smart cities, smart grid, smart transportation, etc.) and must be carefully and thoughtfully addressed before widespread adoption of the smart-world system can be fully realized. Although a number of research efforts have been devoted to defending against these threats, a majority of existing schemes focus on the development of a specific defensive strategy to deal with specific, often singular threats. In this paper, we address the issue of coalitional attacks, which can be launched by multiple adversaries cooperatively against the smart-world system such as smart cities. Particularly, we propose a game-theory based model to capture the interaction among multiple adversaries, and quantify the capacity of the defender based on the extended Iterated Public Goods Game (IPGG) model. In the formalized game model, in each round of the attack, a participant can either cooperate by participating in the coalitional attack, or defect by standing aside. In our work, we consider the generic defensive strategy that has a probability to detect the coalitional attack. When the coalitional attack is detected, all participating adversaries are penalized. The expected payoff of each participant is derived through the equalizer strategy that provides participants with competitive benefits. The multiple adversaries with the collusive strategy are also considered. Via a combination of theoretical analysis and experimentation, our results show that no matter which strategies the adversaries choose (random strategy, win-stay-lose-shift strategy, or even the adaptive equalizer strategy), our formalized game model is capable of enabling the defender to greatly reduce the maximum value of the expected average payoff to the adversaries via provisioning sufficient defensive resources, which is reflected by setting a proper penalty factor against the adversaries
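The public-goods structure with detection and penalty described in the three records above can be sketched as a single-round expected payoff. The payoff form and every parameter value below are illustrative assumptions, not taken from the paper:

```python
def expected_payoff(cooperate, k, n, r=3.0, c=1.0, p_detect=0.3, penalty=5.0):
    """Expected single-round payoff in a toy public-goods attack game.

    k: number of cooperating attackers this round (including self if
    `cooperate` is True). The pooled contribution k*c is multiplied by r
    and shared among all n players; with probability p_detect the attack
    is detected and every cooperator pays `penalty`. All values are
    illustrative stand-ins for the paper's IPGG parameters."""
    share = r * k * c / n                         # everyone gets a share of the pot
    cost = c if cooperate else 0.0                # only cooperators contribute
    fine = penalty * p_detect if cooperate else 0.0  # only cooperators risk the penalty
    return share - cost - fine
```

With these numbers a free-rider (defect while 3 others cooperate) earns 2.25 while a fourth cooperator earns only 0.5, so raising the penalty factor is exactly the defender's lever for making coalition membership unattractive.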
A Relational Encoding of a Conceptual Model with Multiple Temporal Dimensions
Gubiani, Donatella; Montanari, Angelo
The theoretical interest and the practical relevance of a systematic treatment of multiple temporal dimensions is widely recognized in the database and information system communities. Nevertheless, most relational databases have no temporal support at all. A few of them provide a limited support, in terms of temporal data types and predicates, constructors, and functions for the management of time values (borrowed from the SQL standard). One (resp., two) temporal dimensions are supported by historical and transaction-time (resp., bitemporal) databases only. In this paper, we provide a relational encoding of a conceptual model featuring four temporal dimensions, namely, the classical valid and transaction times, plus the event and availability times. We focus our attention on the distinctive technical features of the proposed temporal extension of the relational model. In the last part of the paper, we briefly show how to implement it in a standard DBMS.
Directory of Open Access Journals (Sweden)
Pradeepa Yahampath
2008-03-01
Full Text Available Speech coding techniques capable of generating encoded representations which are robust against channel losses play an important role in enabling reliable voice communication over packet networks and mobile wireless systems. In this paper, we investigate the use of multiple description index assignments (MDIAs) for loss-tolerant transmission of line spectral frequency (LSF) coefficients, typically generated by state-of-the-art speech coders. We propose a simulated annealing-based approach for optimizing MDIAs for Markov-model-based decoders which exploit inter- and intraframe correlations in LSF coefficients to reconstruct the quantized LSFs from coded bit streams corrupted by channel losses. Experimental results are presented which compare the performance of a number of novel LSF transmission schemes. These results clearly demonstrate that Markov-model-based decoders, when used in conjunction with optimized MDIAs, can yield average spectral distortion much lower than that produced by methods such as interleaving/interpolation, commonly used to combat packet losses.
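The simulated-annealing optimization of index assignments can be sketched generically: propose a random pairwise swap of index labels, accept with the Metropolis rule, and cool geometrically. The toy distortion below (adjacent codewords should receive adjacent indices) stands in for the paper's Markov-model spectral distortion:

```python
import math, random

def simulated_annealing(cost, perm, t0=1.0, t_min=1e-3, alpha=0.95, iters=200):
    """Generic simulated-annealing search over index permutations, as used
    (in far more elaborate form) to optimize multiple description index
    assignments. `cost` maps a permutation to a scalar distortion estimate."""
    random.seed(0)
    best = cur = perm[:]
    t = t0
    while t > t_min:
        for _ in range(iters):
            i, j = random.sample(range(len(cur)), 2)
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]   # swap two index labels
            d = cost(cand) - cost(cur)
            # Metropolis rule: always accept improvements, sometimes accept
            # uphill moves to escape local minima.
            if d < 0 or random.random() < math.exp(-d / t):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur[:]
        t *= alpha                                # geometric cooling
    return best

# Toy distortion: penalize index jumps between neighboring codewords.
cost = lambda p: sum(abs(p[i] - p[i + 1]) for i in range(len(p) - 1))
result = simulated_annealing(cost, [3, 0, 2, 1])
```

For this 4-element toy the optimum cost is 3 (a monotone ordering), which the search reliably finds.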
Multiple-parameter bifurcation analysis in a Kuramoto model with time delay and distributed shear
Niu, Ben; Zhang, Jiaming; Wei, Junjie
2018-05-01
In this paper, time delay effects and distributed shear are considered in the Kuramoto model. On the Ott-Antonsen manifold, through analyzing the associated characteristic equation of the reduced functional differential equation, the stability boundary of the incoherent state is derived in multiple-parameter space. Moreover, very rich dynamical behavior, such as stability switches inducing synchronization switches, can occur in this equation. With the loss of stability, Hopf bifurcating coherent states arise, and the criticality of the Hopf bifurcations is determined by applying normal form theory and the center manifold theorem. On the one hand, theoretical analysis indicates that the width of the shear distribution and the time delay can both eliminate synchronization and thus lead the Kuramoto model to incoherence. On the other hand, time delay can induce several coexisting coherent states. Finally, some numerical simulations are given to support the obtained results, where several bifurcation diagrams are drawn and the effect of time delay and shear is discussed.
Multiple periodic solutions for a discrete time model of plankton allelopathy
Zhang Jianbao; Fang Hui
2006-01-01
We study a discrete time model of the growth of two species of plankton with competitive and allelopathic effects on each other: N1(k+1) = N1(k) exp{r1(k) - a11(k)N1(k) - a12(k)N2(k) - b1(k)N1(k)N2(k)}, N2(k+1) = N2(k) exp{r2(k) - a21(k)N1(k) - a22(k)N2(k) - b2(k)N1(k)N2(k)}. A set of sufficient conditions is obtained for the existence of multiple positive periodic solutions for this model. The approach is based on Mawhin's continuation theorem of coincidence degree theory as well as some a priori estimates. Some...
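One step of the map above is trivial to code. With constant coefficients chosen purely for illustration (the paper allows all coefficients to vary periodically in k), repeated iteration settles to a positive equilibrium:

```python
import math

def plankton_step(n1, n2, r1=0.6, r2=0.5, a11=0.2, a12=0.1, a21=0.1, a22=0.2,
                  b1=0.01, b2=0.01):
    """One step of the discrete two-species competition model with
    allelopathic interaction terms b1, b2. Coefficient values are
    illustrative constants, not the paper's periodic coefficients."""
    m1 = n1 * math.exp(r1 - a11 * n1 - a12 * n2 - b1 * n1 * n2)
    m2 = n2 * math.exp(r2 - a21 * n1 - a22 * n2 - b2 * n1 * n2)
    return m1, m2

n1, n2 = 1.0, 1.0
for _ in range(500):           # iterate the map to (numerical) equilibrium
    n1, n2 = plankton_step(n1, n2)
```

Making the coefficients periodic functions of the step index is what opens the door to the multiple positive periodic solutions the paper proves exist.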
International Nuclear Information System (INIS)
Karlberg, Louise; Gustafsson, David; Jansson, Per-Erik
2006-01-01
Estimates of carbon fluxes and turnover in ecosystems are key elements in the understanding of climate change and in predicting the accumulation of trace elements in the biosphere. In this paper we present estimates of carbon fluxes and turnover times for five terrestrial ecosystems using a modeling approach. Multiple criteria of acceptance were used to parameterize the model, thus incorporating large amounts of multi-faceted empirical data in the simulations in a standardized manner. Mean turnover times of carbon were found to be rather similar between systems with a few exceptions, even though the size of both the pools and the fluxes varied substantially. Depending on the route of the carbon through the ecosystem, turnover times varied from less than one year to more than one hundred, which may be of importance when considering trace element transport and retention. The parameterization method was useful both in the estimation of unknown parameters, and to identify variability in carbon turnover in the selected ecosystems
Directory of Open Access Journals (Sweden)
Rondeau Paul
2008-01-01
Full Text Available Speech coding techniques capable of generating encoded representations which are robust against channel losses play an important role in enabling reliable voice communication over packet networks and mobile wireless systems. In this paper, we investigate the use of multiple description index assignments (MDIAs) for loss-tolerant transmission of line spectral frequency (LSF) coefficients, typically generated by state-of-the-art speech coders. We propose a simulated annealing-based approach for optimizing MDIAs for Markov-model-based decoders which exploit inter- and intraframe correlations in LSF coefficients to reconstruct the quantized LSFs from coded bit streams corrupted by channel losses. Experimental results are presented which compare the performance of a number of novel LSF transmission schemes. These results clearly demonstrate that Markov-model-based decoders, when used in conjunction with optimized MDIAs, can yield average spectral distortion much lower than that produced by methods such as interleaving/interpolation, commonly used to combat packet losses.
Anderson, D. E., Jr.; Meier, R. R.; Hodges, R. R., Jr.; Tinsley, B. A.
1987-01-01
The H Balmer alpha nightglow is investigated by using Monte Carlo models of asymmetric geocoronal atomic hydrogen distributions as input to a radiative transfer model of solar Lyman-beta radiation in the thermosphere and atmosphere. It is shown that it is essential to include multiple scattering of Lyman-beta radiation in the interpretation of Balmer alpha airglow data. Observations of diurnal variation in the Balmer alpha airglow showing slightly greater intensities in the morning relative to evening are consistent with theory. No evidence is found for anything other than a single sinusoidal diurnal variation of exobase density. Dramatic changes in effective temperature derived from the observed Balmer alpha line profiles are expected on the basis of changing illumination conditions in the thermosphere and exosphere as different regions of the sky are scanned.
CALCULUS FROM THE PAST: MULTIPLE DELAY SYSTEMS ARISING IN CANCER CELL MODELLING
WAKE, G. C.; BYRNE, H. M.
2013-01-01
Nonlocal calculus is often overlooked in the mathematics curriculum. In this paper we present an interesting new class of nonlocal problems that arise from modelling the growth and division of cells, especially cancer cells, as they progress through the cell cycle. The cellular biomass is assumed to be unstructured in size or position, and its evolution governed by a time-dependent system of ordinary differential equations with multiple time delays. The system is linear and taken to be autonomous. As a result, it is possible to reduce its solution to that of a nonlinear matrix eigenvalue problem. This method is illustrated by considering case studies, including a model of the cell cycle developed recently by Simms, Bean and Koerber. The paper concludes by explaining how asymptotic expressions for the distribution of cells across the compartments can be determined and used to assess the impact of different chemotherapeutic agents. Copyright © 2013 Australian Mathematical Society.
Study on validation method for femur finite element model under multiple loading conditions
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu
2018-03-01
Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method effectively reduces the computation time of the inverse material parameter identification, and the material parameters obtained achieve higher accuracy.
Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.
Kinjo, Akira R
2017-01-01
A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.
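The grand canonical MC machinery can be illustrated on a much simpler relative of the alignment lattice gas: a 1D lattice gas with nearest-neighbor attraction, where Metropolis moves insert or delete particles with acceptance exp(-β(ΔE - μΔN)). This toy omits the paper's long-range couplings and variable-length insertions; all parameter values are illustrative:

```python
import math, random

def lattice_gas_mc(n_sites=30, beta=1.0, mu=-0.5, eps=1.0, sweeps=2000, seed=7):
    """Grand canonical Metropolis MC for a 1D lattice gas on a ring:
    occupied neighbors attract with energy -eps, and particle number
    couples to the chemical potential mu. Returns the final density."""
    rng = random.Random(seed)
    occ = [0] * n_sites

    def insertion_energy(i):
        # Energy change of placing a particle at site i (periodic boundary).
        left, right = occ[(i - 1) % n_sites], occ[(i + 1) % n_sites]
        return -eps * (left + right)

    for _ in range(sweeps * n_sites):
        i = rng.randrange(n_sites)
        d_n = 1 - 2 * occ[i]                  # +1 for insertion, -1 for deletion
        d_e = d_n * insertion_energy(i)       # sign flips for deletion
        # Grand canonical acceptance: min(1, exp(-beta*(dE - mu*dN)))
        if rng.random() < math.exp(-beta * (d_e - mu * d_n)):
            occ[i] = 1 - occ[i]
    return sum(occ) / n_sites

density = lattice_gas_mc()
```

The same accept/reject skeleton carries over to the sequence-space model, with the occupancy flips replaced by residue and insertion moves.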
Probability distributions in conservative energy exchange models of multiple interacting agents
International Nuclear Information System (INIS)
Scafetta, Nicola; West, Bruce J
2007-01-01
Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ regarding the rules that regulate the energy exchange and boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types and interaction rules that yield not only the exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena including those to be found in complex chemical reactions
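A minimal sketch of one conservative exchange rule of the kind studied above: a random pair of agents pools its energy and redistributes it uniformly at random, which relaxes toward the exponential Maxwell-Boltzmann-Gibbs-like distribution. The rule and parameters are illustrative, not the paper's exact models:

```python
import random

def energy_exchange(n_agents=1000, steps=200000, e0=1.0, seed=3):
    """Pairwise conservative energy exchange: pick two random agents, pool
    their energy, and split the pool uniformly at random. Total energy is
    conserved in every interaction."""
    rng = random.Random(seed)
    e = [e0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = e[i] + e[j]
        split = rng.random()
        e[i], e[j] = split * pool, (1 - split) * pool
    return e

energies = energy_exchange()
```

For an exponential distribution the fraction of agents below the mean energy is 1 - e^(-1) ≈ 0.63, a quick diagnostic that the relaxed state is Boltzmann-like; changing the split rule or adding boundary effects is what produces the other distribution families listed above.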
Shared Authentic Leadership in Research Teams: Testing a Multiple Mediation Model.
Guenter, Hannes; Gardner, William L; Davis McCauley, Kelly; Randolph-Seng, Brandon; Prabhu, Veena P
2017-12-01
Research teams face complex leadership and coordination challenges. We propose shared authentic leadership (SAL) as a timely approach to addressing these challenges. Drawing from authentic and functional leadership theories, we posit a multiple mediation model that suggests three mechanisms whereby SAL influences team effectiveness: shared mental models (SMM), team trust, and team coordination. To test our hypotheses, we collected survey data on leadership and teamwork within 142 research teams that recently published an article in a peer-reviewed management journal. The results indicate team coordination represents the primary mediating mechanism accounting for the relationship between SAL and research team effectiveness. While teams with high trust and SMM felt more successful and were more satisfied, they were less successful in publishing in high-impact journals. We also found the four SAL dimensions (i.e., self-awareness, relational transparency, balanced processing, and internalized moral perspective) to associate differently with team effectiveness.
Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images
Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.
2018-04-01
A novel method for detecting ships that aims to make full use of both the spatial and spectral information from hyperspectral images is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with Otsu's threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature against different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method stably achieves ship detection against complex backgrounds and effectively improves the detection accuracy.
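The Otsu segmentation step above chooses the threshold that maximizes between-class variance on the chosen band's histogram. A self-contained sketch (the pixel values are a made-up bimodal toy, dark "sea" around 0.2 and bright "land" around 0.8):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method on intensity values in [0, 1]: scan all candidate
    thresholds and keep the one maximizing between-class variance
    w_b * w_f * (mean_b - mean_f)^2."""
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int(v * bins))] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w_b += hist[t]                      # background (sea) weight
        if w_b == 0:
            continue
        w_f = total - w_b                   # foreground (land) weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean bin
        m_f = (sum_all - sum_b) / w_f       # foreground mean bin
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t / bins

pixels = [0.18, 0.2, 0.22, 0.21, 0.19, 0.78, 0.8, 0.82, 0.79, 0.81]
th = otsu_threshold(pixels)
```

On real imagery the input would be the flattened high-SNR band, and the returned threshold splits the sea mask from land before feature extraction.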
Directory of Open Access Journals (Sweden)
Aaron L. Leppin
2015-01-01
Full Text Available An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.
Directory of Open Access Journals (Sweden)
Hojjat A. Izadi
2011-01-01
Full Text Available The decentralized model predictive control (DMPC) of multiple cooperative vehicles with the possibility of communication loss/delay is investigated. The neighboring vehicles exchange their predicted trajectories at every sample time to maintain the cooperation objectives. In the event of a communication loss (packet dropout), the most recent available information, which is potentially delayed, is used. The communication loss problem thus becomes a cooperation problem in the presence of random, large communication delays. Such large communication delays can lead to poor cooperation performance and unsafe behaviors such as collisions. A new DMPC approach is developed to improve the cooperation performance and achieve safety in the presence of large communication delays. The proposed DMPC architecture estimates the tail of a neighbor's trajectory that is unavailable due to the large communication delays, thereby improving performance. The concept of tube MPC is also employed to ensure the safety of the fleet against collisions in the presence of large intervehicle communication delays. In this approach, a tube-shaped trajectory set is assumed around the trajectory of each neighboring vehicle whose trajectory is delayed/lost. The radius of the tube is a function of the communication delay and the vehicle's maneuverability (in the absence of model uncertainty). The simulation of a formation problem with multiple vehicles is employed to illustrate the effectiveness of the proposed approach.
Therapeutic effects of D-aspartate in a mouse model of multiple sclerosis
Directory of Open Access Journals (Sweden)
Sanaz Afraei
2017-07-01
Full Text Available Experimental autoimmune encephalomyelitis (EAE) is an animal model of multiple sclerosis. EAE is mainly mediated by adaptive and innate immune responses that lead to inflammatory demyelination and axonal damage. The aim of the present research was to examine the therapeutic efficacy of D-aspartic acid (D-Asp) in a mouse EAE model. EAE was induced in female C57BL/6 mice by myelin oligodendrocyte glycoprotein (MOG 35-55) in a complete Freund's adjuvant emulsion, and D-Asp was used to test its efficiency in the reduction of EAE. During the course of the study, clinical signs were assessed, and on Day 21 post-immunization, blood samples were taken from the hearts of mice for the evaluation of interleukin 6 and other chemical molecules. The mice were sacrificed, and their brains and cerebella were removed for histological analysis. Our findings indicated that D-Asp had beneficial effects on EAE, attenuating the severity and delaying the onset of the disease. Histological analysis showed that treatment with D-Asp can reduce inflammation. Moreover, in D-Asp-treated mice, the serum level of interleukin 6 was significantly lower than that in control animals, whereas the total antioxidant capacity was significantly higher. The data indicate that D-Asp possesses neuroprotective properties that may prevent the onset of multiple sclerosis.
Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.
Directory of Open Access Journals (Sweden)
Tomoki Tokuda
Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.
Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji
2017-01-01
We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
Sweeney, R.E.; Langenberg, J.P.; Maxwell, D.M.
2006-01-01
A physiologically based pharmacokinetic (PB/PK) model has been developed in advanced computer simulation language (ACSL) to describe blood and tissue concentration-time profiles of the C(±)P(-) stereoisomers of soman after inhalation, subcutaneous and intravenous exposures at low (0.8-1.0 × LD50),
Liu, Hua; Wu, Wen
2017-06-13
For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when handling maneuver models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).
Directory of Open Access Journals (Sweden)
Hua Liu
2017-06-01
Full Text Available For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when handling maneuver models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).
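Independent of the cubature rule used inside each filter, every IMM variant named in the two records above shares the same Markov mixing step. A minimal sketch of that step for two models (the probabilities below are illustrative):

```python
def imm_mix(mu, pi):
    """IMM interaction step: given current model probabilities mu[i] and a
    Markov switching matrix pi[i][j] = P(model j at k+1 | model i at k),
    return the predicted model probabilities c[j] and the mixing weights
    w[i][j] = P(model i at k | model j at k+1) used to blend the per-model
    filter states before the next cubature/unscented update."""
    n = len(mu)
    c = [sum(pi[i][j] * mu[i] for i in range(n)) for j in range(n)]
    w = [[pi[i][j] * mu[i] / c[j] for j in range(n)] for i in range(n)]
    return c, w

mu = [0.9, 0.1]                       # mostly confident in model 0
pi = [[0.95, 0.05], [0.05, 0.95]]     # sticky model transitions
c, w = imm_mix(mu, pi)
```

Each model's mixed initial state is then the w-weighted combination of the per-model estimates, after which any of the UKF/CKF/5thSSRCKF updates can run unchanged.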
Multiple attractors and crisis route to chaos in a model food-chain
International Nuclear Information System (INIS)
Upadhyay, Ranjit Kumar
2003-01-01
An attempt has been made to identify the mechanism responsible for the existence of chaos in a narrow parameter range in a realistic ecological model food chain. Analytical and numerical studies of a three-species food-chain model, similar to a situation likely to be seen in terrestrial ecosystems, have been carried out. The study of the model food chain suggests that the existence of chaos in narrow parameter ranges is caused by the crisis-induced sudden death of chaotic attractors. Varying one of the critical parameters in its range while keeping all the others constant, one can monitor the changes in the dynamical behaviour of the system, thereby fixing the regimes in which the system exhibits chaotic dynamics. The computed bifurcation diagrams and basin boundary calculations indicate that crisis is the underlying factor which generates chaotic dynamics in this model food chain. We investigate sudden qualitative changes in chaotic dynamical behaviour, which occur at the parameter value a1 = 1.7804, at which the chaotic attractor is destroyed by a boundary crisis with an unstable periodic orbit created by a saddle-node bifurcation. Multiple attractors with riddled basins and fractal boundaries are also observed. If ecological systems of interacting species do indeed exhibit multiple attractors, the long-term dynamics of such systems may undergo vast qualitative changes following epidemics or environmental catastrophes due to the system being pushed into the basin of a new attractor by the perturbation. Coupled with stochasticity, such complex behaviours may render such systems practically unpredictable
Standardization of milk mid-infrared spectrometers for the transfer and use of multiple models.
Grelet, C; Pierna, J A Fernández; Dardenne, P; Soyeurt, H; Vanlierde, A; Colinet, F; Bastin, C; Gengler, N; Baeten, V; Dehareng, F
2017-10-01
An increasing number of models are being developed to provide information from milk Fourier transform mid-infrared (FT-MIR) spectra on fine milk composition, technological properties of milk, or even cows' physiological status. In this context, and to take advantage of these existing models, the purpose of this work was to evaluate whether a spectral standardization method can enable the use of multiple equations within a network of different FT-MIR spectrometers. The piecewise direct standardization method was used, matching "slave" instruments to a common reference, the "master." The effect of standardization on network reproducibility was assessed on 66 instruments from 3 different brands by comparing the spectral variability of the slaves and the master with and without standardization. With standardization, the global Mahalanobis distance from the slave spectra to the master spectra was reduced on average from 2,655.9 to 14.3, representing a significant reduction of noninformative spectral variability. The transfer of models from instrument to instrument was tested using 3 FT-MIR models predicting (1) the quantity of methane emitted daily by dairy cows, (2) the concentration of polyunsaturated fatty acids in milk, and (3) the fresh cheese yield. The differences, in terms of root mean squared error, between master predictions and slave predictions were reduced after standardization on average from 103 to 17 g/d, from 0.0315 to 0.0045 g/100 mL of milk, and from 2.55 to 0.49 g of curd/100 g of milk, respectively. For all the models, standard deviations of predictions among all the instruments were also reduced, by a factor of 5.11 for methane, 5.01 for polyunsaturated fatty acids, and 7.05 for fresh cheese yield, showing an improvement of prediction reproducibility within the network. Given these results, spectral standardization allows the transfer and use of multiple models on all instruments, as well as an improvement in spectral and prediction reproducibility.
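A minimal sketch of piecewise direct standardization on synthetic spectra may clarify the transfer step: each master wavelength is regressed on a small window of slave wavelengths. The window size, intercept handling, and the simulated gain/offset drift below are assumptions, not the paper's settings:

```python
import numpy as np

# Piecewise direct standardization (PDS) sketch: fit banded regression
# coefficients that map slave spectra into the master's spectral space.

def fit_pds(master, slave, half_window=2):
    n_samples, n_wl = master.shape
    ones = np.ones((n_samples, 1))
    coefs = []
    for i in range(n_wl):
        lo, hi = max(0, i - half_window), min(n_wl, i + half_window + 1)
        X = np.hstack([slave[:, lo:hi], ones])            # window + intercept
        b, *_ = np.linalg.lstsq(X, master[:, i], rcond=None)
        coefs.append((lo, hi, b))
    return coefs

def apply_pds(spectrum, coefs):
    return np.array([spectrum[lo:hi] @ b[:-1] + b[-1] for lo, hi, b in coefs])

rng = np.random.default_rng(0)
master = rng.normal(size=(30, 50))            # 30 transfer samples, 50 bands
slave = 1.1 * master + 0.05                   # slave with gain and offset drift
coefs = fit_pds(master, slave)
corrected = apply_pds(slave[0], coefs)
print(np.abs(corrected - master[0]).max())    # standardized spectrum matches master
```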
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, allows analytical results to be derived. The generalized nonlinear dynamics turns out to be far more sensitive to the choice of parameters and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and a more marked evidence of the stylized facts. We also discuss some modifications of the model to introduce elements that are more realistic with respect to real markets.
MULTIPLE HUMAN TRACKING IN COMPLEX SITUATION BY DATA ASSIMILATION WITH PEDESTRIAN BEHAVIOR MODEL
Directory of Open Access Journals (Sweden)
W. Nakanishi
2012-07-01
A new method of multiple human tracking is proposed. The key concept is to treat the tracking process as a data assimilation process. Despite the importance of understanding pedestrian behavior in public space for achieving more sophisticated space design and flow control, automatic human tracking in complex situations remains challenging when people move close to each other or are occluded by others. To address this difficulty, we stochastically combine an existing image-processing-based tracking method with simulation models of walking behavior. We describe the system in the form of a general state space model and define the components of the model based on a review of related work. We then apply the proposed method to data acquired at a railway station ticket gate. We show the high performance of the method and compare the result with another model to demonstrate the advantage of integrating the behavior model into the tracking method. We also show the method's ability to acquire passenger flow information, such as ticket gate choice and OD data, automatically from the tracking result.
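Tracking-as-data-assimilation can be illustrated with a minimal particle filter in one dimension: a behavior model proposes positions, and an observation likelihood weighs and resamples them. The constant-velocity motion and Gaussian likelihood below are simplifying assumptions; the paper's pedestrian behavior model is far richer:

```python
import numpy as np

# Minimal particle-filter sketch of tracking as data assimilation.
rng = np.random.default_rng(5)
n_particles = 1000
particles = rng.normal(0.0, 1.0, size=(n_particles, 2))   # (position, velocity)

def step(particles, observation, dt=0.1, obs_sigma=0.3):
    particles = particles.copy()
    # Predict with the behavior model (here: noisy constant velocity)
    particles[:, 0] += particles[:, 1] * dt
    particles[:, 1] += rng.normal(0.0, 0.05, n_particles)
    # Weight by the detection likelihood and resample
    w = np.exp(-0.5 * ((observation - particles[:, 0]) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    return particles[idx]

true_pos = 0.0
for t in range(50):
    true_pos += 0.1                        # pedestrian walks at 1 m/s, dt = 0.1 s
    obs = true_pos + rng.normal(0.0, 0.3)  # noisy image-based detection
    particles = step(particles, obs)
print(particles[:, 0].mean())              # estimate stays near the true position
```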
Estimation in a multiplicative mixed model involving a genetic relationship matrix
Directory of Open Access Journals (Sweden)
Eccleston John A
2009-04-01
Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A and a complex structure for the genotype-by-environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive-definite covariance matrices. Estimation methods for reduced-rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
Directory of Open Access Journals (Sweden)
C. Makendran
2015-01-01
Prediction models for low-volume village roads in India are developed to evaluate the progression of different types of distress such as roughness, cracking, and potholes. Even though the Government of India is investing a huge amount of money in road construction every year, poor control over the quality of road construction and its subsequent maintenance is leading to faster road deterioration. In this regard, it is essential that scientific maintenance procedures be evolved on the basis of the performance of low-volume flexible pavements. Considering the above, an attempt has been made in this research to develop prediction models to understand the progression of roughness, cracking, and potholes in flexible pavements exposed to little or no routine maintenance. Distress data were collected from low-volume rural roads covering about 173 stretches spread across the state of Tamil Nadu in India. Based on the collected data, distress prediction models have been developed using multiple linear regression analysis, and the models have been validated using independent field data. It can be concluded that the models developed in this study can serve as useful tools for practicing engineers maintaining flexible pavements on low-volume roads.
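Fitting a distress-progression model by multiple linear regression reduces to ordinary least squares; a sketch on synthetic data follows. The predictors and coefficients are invented for illustration, not the paper's calibrated values:

```python
import numpy as np

# Sketch of a roughness-progression model fit by multiple linear regression.
rng = np.random.default_rng(1)
n = 173                                        # one row per road stretch
age = rng.uniform(1, 10, n)                    # pavement age, years
traffic = rng.uniform(50, 400, n)              # commercial vehicles per day
# Synthetic "observed" roughness in mm/km with measurement noise
roughness = 2000 + 150 * age + 3.0 * traffic + rng.normal(0, 50, n)

X = np.column_stack([np.ones(n), age, traffic])       # intercept + predictors
beta, *_ = np.linalg.lstsq(X, roughness, rcond=None)
print(beta)                                    # approximately [2000, 150, 3.0]
```

Validation against independent field data would then compare predictions `X_new @ beta` with observed distress on held-out stretches.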
DEFF Research Database (Denmark)
Gaspar, Jozsef; Fosbøl, Philip Loldrup
2017-01-01
Reactive absorption is a key process for gas separation and purification and the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all..., desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second-order kinetics describes well the reaction between CO2 and PZ, accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA, but we expect that this practical approach is applicable to various amines...
A location-routing problem model with multiple periods and fuzzy demands
Directory of Open Access Journals (Sweden)
Ali Nadizadeh
2014-08-01
This paper puts forward a dynamic capacitated location-routing problem with fuzzy demands (DCLRP-FD). The input consists of a set of identical vehicles (each with a capacity, a fixed cost, and an availability level), a set of depots with restricted capacities and opening costs, a set of customers with fuzzy demands, and a planning horizon with multiple periods. The problem consists of determining the depots to be opened (only in the first period of the planning horizon), the customers and vehicles to be assigned to each opened depot, and the routes, which may change in each time period owing to the fuzzy demands. A fuzzy chance-constrained programming (FCCP) model is designed using credibility theory, and a hybrid heuristic algorithm with four phases is presented to solve the problem. To obtain the best values of the fuzzy parameters of the model and to show the influence of the vehicle availability level on the final solution, computational experiments are carried out. The validity of the model is then evaluated against CLRP-FD models from the literature. The results indicate that the model and the proposed algorithm are robust and could be used in real-world problems.
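Fuzzy chance constraints of the kind used in the FCCP model rest on the credibility measure. For a triangular fuzzy demand (a, b, c), Cr{demand ≤ x} has a simple closed form, the average of possibility and necessity; the demand figures below are illustrative:

```python
# Credibility that a triangular fuzzy demand (a, b, c) does not exceed x,
# as used in fuzzy chance-constrained programming: Cr = (Pos + Nec) / 2.

def credibility_leq(x, a, b, c):
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2.0 * (b - a))        # possibility rising, necessity 0
    if x <= c:
        return (x - 2.0 * b + c) / (2.0 * (c - b))
    return 1.0

# E.g. a route capacity must cover fuzzy demand with credibility >= 0.9:
a, b, c = 80, 100, 130                          # pessimistic / modal / optimistic
for cap in (100, 110, 124, 130):
    print(cap, credibility_leq(cap, a, b, c))
```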
Directory of Open Access Journals (Sweden)
Rui Xue
2015-01-01
Although bus passenger demand prediction has attracted increased attention in recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interacting multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After aggregation into 15 min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics on different time scales, three time series were developed: weekly, daily, and 15 min. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to the individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.
Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei
2014-10-01
Leaf pigments are key elements for plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and it also struggles to capture their temporal and spatial variation. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, inverse models were proposed with the aid of stepwise multiple linear regression (SMLR) analysis. A leaf radiative transfer model (the PROSPECT model) was employed to simulate leaf reflectance at wavelengths from 400 to 780 nm at 1 nm intervals, and these values were treated as data from remote sensing observations. Simulated chlorophyll concentration (Cab), carotenoid concentration (Car), and their ratio (Cab/Car) were each taken as targets to build the regression models. In this study, a total of 4000 samples were simulated via PROSPECT with different Cab, Car, and leaf mesophyll structures; 70% of these samples were used for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each employed to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R2) larger than 0.8 when 6 wavebands were selected to build the SMLR model. The largest values of R2 for Cab, Car, and Cab/Car were 0.8845, 0.876, and 0.8765, respectively. Mathematical transformations of reflectance showed little influence on regression accuracy. We conclude that it is feasible to estimate chlorophyll, carotenoids, and their ratio with a statistical model based on leaf reflectance data.
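Stepwise multiple linear regression of this kind can be sketched as greedy forward selection of wavebands. Synthetic data stand in for the PROSPECT simulations, and the selection criterion here is plain residual sum of squares rather than the F-tests a full SMLR would use:

```python
import numpy as np

# Forward stepwise selection: greedily add the waveband that most
# reduces the residual sum of squares of an OLS fit.

def forward_select(X, y, n_select):
    chosen = []
    for _ in range(n_select):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(len(y)), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))                # 200 samples, 20 candidate bands
y = 3.0 * X[:, 4] - 2.0 * X[:, 11] + rng.normal(0, 0.1, 200)   # "pigment" target
print(forward_select(X, y, 2))                # recovers the informative bands
```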
Directory of Open Access Journals (Sweden)
Rebecca A Brady
Staphylococcus aureus is a major human pathogen and a leading cause of nosocomial and community-acquired infections. Development of a vaccine against this pathogen is an important goal. While S. aureus protective antigens have been identified in the literature, the majority have only been tested in a single animal model of disease. We wished to evaluate the ability of one S. aureus vaccine antigen to protect in multiple mouse models, thus assessing whether protection in one model translates to protection in other models encompassing the full breadth of infections the pathogen can cause. We chose to focus on genetically inactivated alpha toxin mutant HlaH35L. We evaluated the protection afforded by this antigen in three models of infection using the same vaccine dose, regimen, route of immunization, adjuvant, and challenge strain. When mice were immunized with HlaH35L and challenged via a skin and soft tissue infection model, HlaH35L immunization led to a less severe infection and decreased S. aureus levels at the challenge site when compared to controls. Challenge of HlaH35L-immunized mice using a systemic infection model resulted in a limited, but statistically significant decrease in bacterial colonization as compared to that observed with control mice. In contrast, in a prosthetic implant model of chronic biofilm infection, there was no significant difference in bacterial levels when compared to controls. These results demonstrate that vaccines may confer protection against one form of S. aureus disease without conferring protection against other disease presentations and thus underscore a significant challenge in S. aureus vaccine development.
Directory of Open Access Journals (Sweden)
Jingjin Liu
2017-10-01
Problems continue to be encountered with the traditional vacuum preloading method during the field treatment of newly deposited dredger fills. In this paper, an improved multiple-vacuum preloading method was developed to consolidate newly deposited dredger fills that are hydraulically placed in seawater for land reclamation in the Lingang Industrial Zone of Tianjin City, China. With this multiple-vacuum preloading method, the newly deposited dredger fills could be treated effectively by adopting a novel moisture separator and a rapid improvement technique without a sand cushion. A series of model tests was conducted in the laboratory to compare the results of the multiple-vacuum preloading method and the traditional one. Ten piezometers and settlement plates were installed to measure the variations in excess pore water pressure and moisture content, and vane shear strength was measured at different positions. The test results indicate that the water discharge-time curves obtained by the traditional vacuum preloading method can be divided into three phases: rapid growth, slow growth, and steady. Based on the process of fluid flow concentrating along tiny ripples and building larger channels inside the soil during vacuum loading, the fluctuations of pore water pressure during each loading step are likewise divided into three phases: steady, rapid dissipation, and slow dissipation. An optimal loading pattern giving the best treatment effect was proposed for calculating the water discharge and pore water pressure of the soil under the improved multiple-vacuum preloading method. For the newly deposited dredger fills at the Lingang Industrial Zone of Tianjin City, the best loading step was 20 kPa, and loading of 40-50 kPa produced the highest drainage consolidation. The measured moisture content and vane shear strength were discussed in terms of the reinforcement effect; both indicate that the improved method is effective.
International Nuclear Information System (INIS)
Chen Qiang; Ren Xuemei; Na Jing
2011-01-01
Highlights: Model uncertainty of the system is approximated by a multiple-kernel LSSVM. Approximation errors and disturbances are compensated in the controller design. Asymptotic anti-synchronization is achieved under model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate the system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme guarantees the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.
Digital Repository Service at National Institute of Oceanography (India)
Balachandran, K.K.; Jayalakshmy, K.V.; Laluraj, C.M.; Nair, M.; Joseph, T.; Sheeba, P.
The interaction effects of abiotic processes in the production of phytoplankton in a coastal marine region off Cochin are evaluated using multiple regression models. The study shows that chlorophyll production is not limited by nutrients...
Multiple time scales in modeling the incidence of infections acquired in intensive care units
Directory of Open Access Journals (Sweden)
Martin Wolkewitz
2016-09-01
Abstract Background When patients are admitted to an intensive care unit (ICU), their risk of acquiring an infection depends strongly on their length of stay at risk in the ICU. In addition, the risk of infection is likely to vary over calendar time as a result of fluctuations in the prevalence of the pathogen on the ward. Hence, the risk of infection is expected to depend on two time scales (time in ICU and calendar time) as well as on competing events (discharge or death) and the spatial location of patients. The purpose of this paper is to develop and apply appropriate statistical models for the risk of ICU-acquired infection accounting for multiple time scales, competing risks and the spatial clustering of the data. Methods A multi-center database from a Spanish surveillance network was used to study the occurrence of infection due to methicillin-resistant Staphylococcus aureus (MRSA). The analysis included 84,843 patient admissions between January 2006 and December 2011 from 81 ICUs. Stratified Cox models were used to study multiple time scales while accounting for the spatial clustering of the data (patients within ICUs) and for death or discharge as competing events for MRSA infection. Results Both time scales, time in ICU and calendar time, are highly associated with the MRSA hazard rate and cumulative risk. When using only one basic time scale, the interpretation and magnitude of several patient-individual risk factors differed. Risk factors concerning the severity of illness were more pronounced when using only calendar time. These differences disappeared when using both time scales simultaneously. Conclusions The time-dependent dynamics of infections is complex and should be studied with models allowing for multiple time scales. For patient-individual risk factors, we recommend stratified Cox regression models for competing events with ICU time as the basic time scale, calendar time as a covariate, and stratification by ICU.
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in its anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for the 6 MV and 10 MV models, respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for the 6 MV and 10 MV models, respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and average passing rates between Monte Carlo and measurements were 87.4% and 89.9% for the 6 MV and 10 MV models, respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli
2017-05-01
This research aims to achieve several purposes: to determine whether the mathematical problem-solving ability of students taught using a Multiple Intelligences based teaching model is higher than that of students taught using cooperative learning; to assess the improvement in mathematical problem-solving ability of students taught using the Multiple Intelligences based teaching model; to assess the improvement in mathematical problem-solving ability of students taught using cooperative learning; and to examine students' attitudes toward the Multiple Intelligences based teaching model. The method employed here is a quasi-experiment controlled by a pre-test and post-test. The population of this research is all of grade VII in SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as the samples of this research. One class was taught using the Multiple Intelligences based teaching model and the other was taught using cooperative learning. The data of this research were obtained from a test of mathematical problem solving, an attitude scale questionnaire, and observation. The results show that the mathematical problem solving of students taught using the Multiple Intelligences based teaching model is higher than that of students taught using cooperative learning; the mathematical problem-solving ability of students in both groups is at an intermediate level; and the students showed a positive attitude toward learning mathematics using the Multiple Intelligences based teaching model. As a recommendation for future research, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.
DEFF Research Database (Denmark)
Silvennoinen, Annestiina; Terasvirta, Timo
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations...
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of temporal correlations and non-stationarity is key in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge and infer the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimations of the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, with Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to multiple-tissue (liver, skeletal muscle, and kidney) Affymetrix time course data sets following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and the gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approaches could be expanded and applied to other large-scale genomic data, such as next generation sequencing (NGS) data combined with real-time and time-varying electronic health records (EHR), for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.
Novakovic, A M; Krekels, E H J; Munafo, A; Ueckert, S; Karlsson, M O
2017-01-01
In this study, we report the development of the first item response theory (IRT) model within a pharmacometrics framework to characterize disease progression in multiple sclerosis (MS), as measured by the Expanded Disability Status Scale (EDSS). Data were collected quarterly from a 96-week phase III clinical study by a blinded rater, involving 104,206 item-level observations from 1319 patients with relapsing-remitting MS (RRMS) treated with placebo or cladribine. Observed scores for each EDSS item were modeled by describing the probability of a given score as a function of the patient's (unobserved) disability using a logistic model. Longitudinal data from the placebo arms were used to describe disease progression over time, and the model was then extended to the cladribine arms to characterize the drug effect. Sensitivity with respect to patient disability was calculated as the Fisher information for each EDSS item, and the items were ranked according to the amount of information they contained. The IRT model was able to describe baseline and longitudinal EDSS data at both the item and total level. The final model suggested that cladribine treatment significantly slows the disease-progression rate, with a 20% decrease compared to placebo irrespective of exposure, and effects an additional exposure-dependent reduction in disability progression. Four out of eight items contained 80% of the information for the given range of disabilities. This study illustrates that IRT modeling is particularly suitable for accurate quantification of disease status and for description and prediction of disease progression in phase III studies of RRMS, by integrating EDSS item-level data in a meaningful manner.
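An IRT item of this kind can be sketched as a graded-response model: cumulative logistic curves give the category probabilities, and the item's Fisher information follows from them. The discrimination and threshold values below are invented, not the fitted EDSS values:

```python
import math

# Graded-response IRT item: probability of each ordinal score given a latent
# disability theta, plus the item's Fisher information computed numerically.

def score_probs(theta, a, thresholds):
    # P(score >= k) is logistic in theta; category probs by differencing
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
    cum.append(0.0)
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

def fisher_information(theta, a, thresholds, h=1e-5):
    # I(theta) = sum_k P_k * (d log P_k / d theta)^2, via central differences
    p = score_probs(theta, a, thresholds)
    p_hi = score_probs(theta + h, a, thresholds)
    p_lo = score_probs(theta - h, a, thresholds)
    info = 0.0
    for pk, hi, lo in zip(p, p_hi, p_lo):
        dlogp = (math.log(hi) - math.log(lo)) / (2 * h)
        info += pk * dlogp ** 2
    return info

probs = score_probs(0.5, a=1.8, thresholds=[-1.0, 0.0, 1.5])
print(probs, sum(probs))                       # four category probs, summing to 1
print(fisher_information(0.5, 1.8, [-1.0, 0.0, 1.5]))
```

Ranking items by `fisher_information` over the disability range of interest mirrors how the most informative EDSS items were identified.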
Assessments of Maize Yield Potential in the Korean Peninsula Using Multiple Crop Models
Kim, S. H.; Myoung, B.; Lim, C. H.; Lee, S. G.; Lee, W. K.; Kafatos, M.
2015-12-01
The Korean Peninsula has a unique agricultural environment owing to the differences in the political and socio-economic systems between the Republic of Korea (SK, hereafter) and the Democratic People's Republic of Korea (NK, hereafter). NK has been suffering from a lack of food supplies caused by natural disasters, land degradation and a failed political system. The neighboring developed country SK has a better agricultural system but a very low food self-sufficiency rate (around 1% for maize). Maize is an important crop in both countries, since it is a staple food in NK, while SK is the world's second-largest maize-importing country after Japan. Therefore, evaluating maize yield potential (Yp) in the two distinct regions is essential to assess food security under climate change and variability. In this study, we have utilized multiple process-based crop models capable of regional-scale assessments to evaluate maize Yp over the Korean Peninsula: the GIS version of the EPIC model (GEPIC) and the APSIM model extended to regional scales (APSIM-regions). First, we evaluated model performance and skill over 20 years, from 1991 to 2010, using reanalysis data (Local Data Assimilation and Prediction System (LDAPS); 1.5 km resolution) and observed data. Each model's performance was compared over regions of the Korean Peninsula with different regional climate characteristics. To quantify the influence of individual climate variables, we also conducted a sensitivity test using the 20 years of climatology. Lastly, a multi-model ensemble analysis was performed to reduce crop model uncertainties. The results will provide valuable information for estimating the impacts of climate change and variability on Yp over the Korean Peninsula.
A Monte Carlo multiple source model applied to radiosurgery narrow photon beams
International Nuclear Information System (INIS)
Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.
2004-01-01
Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source phase space data (PSD) that enable dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD while also reducing computation time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for the virtual sources representing the accelerator head components, whereas for the virtual sources representing the additional collimators they were recorded in histograms. All these histograms were included in the DPM MC code and, using a sampling procedure that reconstructed the PSD, dose distributions were calculated in a water phantom divided into 20,000 voxels of 1×1×5 mm³. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% down to a depth of 202.5 mm for all the additional collimators, and for profiles at various depths, deviations between measured
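The sampling procedure that reconstructs PSD from stored histograms amounts to inverse-CDF sampling per virtual source. A sketch for photon energies follows; the bin edges and contents are invented for illustration:

```python
import numpy as np

# Sample photon energies from a virtual-source spectrum stored as a histogram,
# via the inverse-CDF method: pick a bin by cumulative weight, then a uniform
# energy within that bin.

def sample_from_histogram(edges, weights, n, rng):
    cdf = np.cumsum(weights, dtype=float)
    cdf /= cdf[-1]
    u = rng.random(n)
    k = np.searchsorted(cdf, u)                # choose a bin for each sample
    lo, hi = edges[k], edges[k + 1]
    return lo + (hi - lo) * rng.random(n)      # uniform within the chosen bin

edges = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])   # MeV bin edges
weights = np.array([5.0, 20.0, 40.0, 25.0, 10.0])  # relative bin contents
rng = np.random.default_rng(4)
energies = sample_from_histogram(edges, weights, 100000, rng)
print(energies.min(), energies.max())         # all samples within [0, 6] MeV
```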
Cantone, Martina; Santos, Guido; Wentker, Pia; Lai, Xin; Vera, Julio
2017-01-01
Even today two bacterial lung infections, namely pneumonia and tuberculosis, are among the 10 most frequent causes of death worldwide. These infections still lack effective treatments in many developing countries and in immunocompromised populations like infants, elderly people and transplanted patients. The interaction between bacteria and the host is a complex system of interlinked intercellular and intracellular processes, enriched in regulatory structures like positive and negative feedback loops. Severe pathological conditions can emerge when the immune system of the host fails to neutralize the infection. This failure can result in systemic spreading of pathogens or an overwhelming immune response followed by a systemic inflammatory response. Mathematical modeling is a promising tool to dissect the complexity underlying the pathogenesis of bacterial lung infection at the molecular, cellular and tissue levels, and also at the interfaces among levels. In this article, we introduce mathematical and computational modeling frameworks that can be used for investigating molecular and cellular mechanisms underlying bacterial lung infection. Then, we compile and discuss published results on the modeling of regulatory pathways and cell populations relevant for lung infection and inflammation. Finally, we discuss how to make use of this multiplicity of modeling approaches to open new avenues in the search for the molecular and cellular mechanisms underlying bacterial infection in the lung.
Electricity supply industry modelling for multiple objectives under demand growth uncertainty
International Nuclear Information System (INIS)
Heinrich, G.; Basson, L.; Howells, M.; Petrie, J.
2007-01-01
Appropriate energy-environment-economic (E3) modelling provides key information for policy makers in the electricity supply industry (ESI) faced with navigating a sustainable development path. Key challenges include engaging with stakeholder values and preferences, and exploring trade-offs between competing objectives in the face of underlying uncertainty. As a case study we represent the South African ESI using a partial equilibrium E3 modelling approach, and extend the approach to include multiple objectives under selected future uncertainties. This extension is achieved by assigning cost penalties to non-cost attributes to force the model's least-cost objective function to better satisfy non-cost criteria. This paper incorporates aspects of flexibility to demand growth uncertainty into each future expansion alternative by introducing stochastic programming with recourse into the model. Technology lead times are taken into account by the inclusion of a decision node along the time horizon where aspects of real options theory are considered within the planning process. Hedging in the recourse programming is automatically translated from being purely financial to include the other attributes that the cost penalties represent. From a retrospective analysis of the cost penalties, the correct market signals can be derived to meet policy goals, with due regard to demand uncertainty. (author)
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field.
Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim
2015-12-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model's estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences.
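The α-MaxMin criterion described above can be written in a few lines: utility is a weighted combination of the worst-case and best-case expected utility over the set of priors, with α measuring ambiguity aversion and the size of the prior set measuring perceived ambiguity. A sketch with hypothetical payoffs and a symmetric probability interval, purely for illustration:

```python
def alpha_maxmin(utilities, priors, alpha):
    """alpha-MaxMin expected utility over a set of priors:
    alpha * worst-case EU + (1 - alpha) * best-case EU."""
    eus = [sum(p * u for p, u in zip(prior, utilities)) for prior in priors]
    return alpha * min(eus) + (1 - alpha) * max(eus)

# Two outcomes (win 100 / win 0); ambiguity = probability interval [0.3, 0.7]
utilities = [100.0, 0.0]
priors = [[0.3, 0.7], [0.7, 0.3]]
u_maxmin = alpha_maxmin(utilities, priors, alpha=1.0)  # pure MaxMin: worst prior
u_maxmax = alpha_maxmin(utilities, priors, alpha=0.0)  # pure MaxMax: best prior
```

Estimating α and the interval width from choice data is what distinguishes ambiguity preferences from perceptions in the paper's framework; asymmetric intervals correspond to prior sets that are not centered on the reference probability.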
A new adaptive control scheme based on the interacting multiple model (IMM) estimation
International Nuclear Information System (INIS)
Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid
2016-01-01
In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Due to variations of the forward velocity of the unmanned vehicle, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives for in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes. Each operating mode represents a particular dynamic with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) in a parallel operating mechanism. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the particular mode based on the stored models and in-flight input-output measurements. The LQR design also provides a set of controllers; each corresponds to a particular flight mode and minimizes the tracking error. Finally, the ultimate control law is obtained as a weighted summation of all individual controllers, where the weights are the mode probabilities of each operating mode.
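The blended control law described above (mode probabilities weighting per-mode LQR gains) can be sketched as a single update step. The transition matrix, gains, and likelihoods below are toy values, not the paper's vehicle model:

```python
import numpy as np

def imm_blend_controls(likelihoods, transition, prev_probs, gains, x):
    """One IMM-style step: propagate mode probabilities through the Markov
    transition matrix, update them with measurement likelihoods, then blend
    the per-mode LQR controls u_i = -K_i x by the posterior probabilities."""
    pred = transition.T @ prev_probs          # predicted mode probabilities
    post = likelihoods * pred
    post = post / post.sum()                  # normalized posterior
    u = sum(p * (-K @ x) for p, K in zip(post, gains))
    return post, u

transition = np.array([[0.95, 0.05],          # two flight modes, sticky dynamics
                       [0.05, 0.95]])
gains = [np.array([[1.0, 0.5]]),              # toy LQR gain for mode 1
         np.array([[2.0, 1.0]])]              # toy LQR gain for mode 2
x = np.array([1.0, 0.0])                      # current state
probs, u = imm_blend_controls(np.array([0.9, 0.1]),
                              transition, np.array([0.5, 0.5]), gains, x)
```

In a full IMM, the likelihoods come from each Kalman filter's innovation; here they are supplied directly to keep the step self-contained.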
Optimal design of hydraulic excavator working device based on multiple surrogate models
Directory of Open Access Journals (Sweden)
Qingying Qiu
2016-05-01
The optimal design of hydraulic excavator working device is often characterized by computationally expensive analysis methods such as finite element analysis. Significant difficulties also exist when using a sensitivity-based decomposition approach to such practical engineering problems because explicit mathematical formulas between the objective function and design variables are impossible to formulate. An effective alternative is known as the surrogate model. The purpose of this article is to provide a comparative study on multiple surrogate models, including the response surface methodology, Kriging, radial basis function, and support vector machine, and select the one that best fits the optimization of the working device. In this article, a new modeling strategy based on the combination of the dimension variables between hinge joints and the forces loaded on hinge joints of the working device is proposed. In addition, the extent to which the accuracy of the surrogate models depends on different design variables is presented. The bionic intelligent optimization algorithm is then used to obtain the optimal results, which demonstrate that the maximum stresses calculated by the predicted method and finite element analysis are quite similar, but the efficiency of the former is much higher than that of the latter.
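Of the surrogate types compared above, the radial basis function model is the simplest to sketch from scratch: solve the kernel system at the training designs for weights, then predict by kernel regression. The Gaussian kernel, shape parameter, and test function below are illustrative assumptions:

```python
import numpy as np

def rbf_surrogate(X, y, eps=50.0):
    """Fit a Gaussian radial-basis-function interpolant and return a predictor."""
    X, y = np.atleast_2d(X), np.asarray(y, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    w = np.linalg.solve(np.exp(-eps * d2), y)             # kernel weights

    def predict(Xq):
        Xq = np.atleast_2d(Xq)
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2q) @ w
    return predict

# Toy stand-in for an expensive stress response of one design variable
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
f = rbf_surrogate(X, y)
```

The surrogate interpolates the training designs exactly; its accuracy between designs is what a comparative study like the one above evaluates against the finite element results.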
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
2016-06-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
The blackboard model - A framework for integrating multiple cooperating expert systems
Erickson, W. K.
1985-01-01
The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.
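The blackboard architecture described above can be illustrated with a minimal control loop: independent knowledge sources watch a shared data store and fire when their trigger condition matches, until no source has anything left to contribute. A toy sketch in which the trigger/action contents are hypothetical:

```python
class Blackboard:
    """Minimal blackboard: a shared data store plus a control loop that
    lets each knowledge source (expert) act when its trigger matches."""
    def __init__(self):
        self.data = {}
        self.experts = []

    def register(self, trigger, action):
        self.experts.append((trigger, action))

    def run(self, max_cycles=10):
        for _ in range(max_cycles):
            fired = False
            for trigger, action in self.experts:
                if trigger(self.data):
                    action(self.data)
                    fired = True
            if not fired:          # quiescence: no expert can contribute
                break
        return self.data

bb = Blackboard()
# Hypothetical planner expert: posts a plan once a goal appears
bb.register(lambda d: "goal" in d and "path" not in d,
            lambda d: d.update(path=["plan for", d["goal"]]))
# Hypothetical executive expert: acts once a plan is posted
bb.register(lambda d: "path" in d and "status" not in d,
            lambda d: d.update(status="executing"))
bb.data["goal"] = "inspect truss"
result = bb.run()
```

Each expert sees only the blackboard, never the other experts, which is the decoupling that makes the architecture attractive for integrating multiple cooperating expert systems.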
Chiang, Fu-Tsai; Li, Pei-Jung; Chung, Shih-Ping; Pan, Lung-Fa; Pan, Lung-Kwang
2016-01-01
This study analyzed multiple biokinetic models using a dynamic water phantom. The phantom was custom-made with acrylic materials to model metabolic mechanisms in the human body. It had 4 spherical chambers of different sizes, connected by 8 ditches to form a complex and adjustable water loop. Infusion and drain poles connected the chambers to auxiliary silicon-based hoses, respectively. The radioactive compound solution (Tc-99m-MDP labeled) formed a sealed and static water loop inside the phantom. As clean feed water was infused to replace the original solution, the system mimicked metabolic mechanisms for data acquisition. Five cases with different water loop settings were tested and analyzed, with case settings changed by controlling valve poles located in the ditches. The phantom could also be changed from model A to model B by changing its vertical configuration. The phantom was surveyed with a clinical gamma camera to determine the time-dependent intensity of every chamber. The recorded counts per pixel in each chamber were analyzed and normalized for comparison with theoretical estimations from the MATLAB program. Every preset case was represented by uniquely defined, time-dependent, simultaneous differential equations, and a corresponding MATLAB program optimized the solutions by comparing theoretical calculations and practical measurements. A dimensionless agreement (AT) index was recommended to evaluate the comparison in each case. ATs varied from 5.6 to 48.7 over the 5 cases, indicating that this work presented an acceptable feasibility study. PMID:27286096
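The chamber kinetics described above reduce to simultaneous first-order differential equations. A sketch of the two standard building blocks such systems are assembled from, single-compartment washout and a two-chamber series (Bateman-type) solution; the rate constants and initial activity below are free illustrative parameters, not the phantom's fitted values:

```python
import numpy as np

def washout_activity(k, a0, t):
    """Single-compartment washout A(t) = A0 * exp(-k t): one chamber being
    flushed by clean feed water at rate constant k."""
    return a0 * np.exp(-k * np.asarray(t, float))

def two_chamber_series(k1, k2, a0, t):
    """Chamber 1 drains into chamber 2 (Bateman-type solution, k1 != k2):
    A2(t) = A0 * k1/(k2 - k1) * (exp(-k1 t) - exp(-k2 t))."""
    t = np.asarray(t, float)
    return a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

t = [0.0, 5.0, 30.0]                          # minutes (illustrative)
a1 = washout_activity(0.1, 100.0, t)          # chamber 1 empties exponentially
a2 = two_chamber_series(0.2, 0.1, 100.0, t)   # chamber 2 fills, then washes out
```

Fitting rate constants of such equations to the normalized gamma camera counts is, in essence, what the MATLAB optimization in the study does for each valve configuration.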
Comparison of Geant4 multiple Coulomb scattering models with theory for radiotherapy protons.
Makarova, Anastasia; Gottschalk, Bernard; Sauerwein, Wolfgang
2017-07-06
Usually, Monte Carlo models are validated against experimental data. However, models of multiple Coulomb scattering (MCS) in the Gaussian approximation are exceptional in that we have theories which are probably more accurate than the experiments which have, so far, been done to test them. In problems directly sensitive to the distribution of angles leaving the target, the relevant theory is the Molière/Fano/Hanson variant of Molière theory (Gottschalk et al 1993 Nucl. Instrum. Methods Phys. Res. B 74 467-90). For transverse spreading of the beam in the target itself, the theory of Preston and Koehler (Gottschalk (2012 arXiv:1204.4470)) holds. Therefore, in this paper we compare Geant4 simulations, using the Urban and Wentzel models of MCS, with theory rather than experiment, revealing trends which would otherwise be obscured by experimental scatter. For medium-energy (radiotherapy) protons, and low-Z (water-like) target materials, Wentzel appears to be better than Urban in simulating the distribution of outgoing angles. For beam spreading in the target itself, the two models are essentially equal.
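For the Gaussian approximation discussed above, the widely used Highland parameterization gives the characteristic MCS angle and is a convenient cross-check for simulations, though the paper's reference theory is the Molière/Fano/Hanson variant. The proton momentum, velocity, and water radiation length below are rough illustrative numbers:

```python
import math

def highland_theta0(p_mev_c, beta, length, rad_length):
    """Highland's parameterization of the Gaussian MCS angle (radians):
    theta0 = (14.1 MeV / p v) * sqrt(L/LR) * (1 + (1/9) * log10(L/LR))."""
    x = length / rad_length
    return (14.1 / (p_mev_c * beta)) * math.sqrt(x) * (1 + math.log10(x) / 9)

# ~160 MeV proton (p ~ 570 MeV/c, beta ~ 0.52) through 1 cm of water (LR ~ 36.1 cm)
theta0 = highland_theta0(570.0, 0.52, 1.0, 36.1)   # a few milliradians
```

Comparing such a closed-form angle against the angular spread of a Geant4 run with the Urban or Wentzel model is the kind of theory-versus-simulation check the paper performs, albeit with the more accurate Hanson form.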
DEFF Research Database (Denmark)
Vermesi, Izabella; Rein, Guillermo; Colella, Francesco
2017-01-01
Multiscale modelling of tunnel fires that uses a coupled 3D (fire area) and 1D (the rest of the tunnel) model is seen as the solution to the numerical problem of the large domains associated with long tunnels. The present study demonstrates the feasibility of the implementation of this method in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200 m long tunnel with a rectangular cross-section as a demonstration case. The multiscale implementation consisted of placing a 30 MW fire in the centre of a 400 m long 3D domain, along with two 400 m long 1D ducts on each side of it, that were again bounded by two nodes each. A fixed volume flow was defined in the upstream duct and the two models were coupled...
Bramsen, Rikke H; Lasgaard, Mathias; Koss, Mary P; Shevlin, Mark; Elklit, Ask; Banner, Jytte
2013-01-01
The present study modeled the direct relationship between child sexual abuse (CSA) and adolescent peer-to-peer sexual victimization (APSV) and the mediated effect via variables representing the number of sexual partners, sexual risk behavior, and signaling sexual boundaries. A cross-sectional study on the effect of CSA on APSV was conducted, utilizing a multiple mediator model. Mediated and direct effects in the model were estimated employing Mplus using bootstrapped percentile based confidence intervals to test for significance of mediated effects. The study employed 327 Danish female adolescents with a mean age of 14.9 years (SD = 0.5). The estimates from the mediational model indicated full mediation of the effect of CSA on APSV via number of sexual partners and sexual risk behavior. The current study suggests that the link between CSA and APSV was mediated by sexual behaviors specifically pertaining to situations of social peer interaction, rather than directly on prior experiences of sexual victimization. The present study identifies a modifiable target area for intervention to reduce adolescent sexual revictimization. © 2013 American Orthopsychiatric Association.
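The percentile-bootstrap test of mediated effects described above can be sketched for a single mediator: resample the data, estimate the a path (mediator on predictor) and b path (outcome on mediator, controlling for the predictor) by OLS, and take percentiles of the a*b products. Synthetic data stand in for the study's CSA/APSV variables:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a
    single-mediator model (x -> m -> y)."""
    rng = np.random.default_rng(seed)
    n, effects = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)                    # resample with replacement
        a = np.polyfit(x[i], m[i], 1)[0]             # slope of m on x
        b = np.linalg.lstsq(np.c_[m[i], x[i], np.ones(n)], y[i],
                            rcond=None)[0][0]        # slope of y on m, given x
        effects.append(a * b)
    return np.percentile(effects, [2.5, 97.5])

# Synthetic data with a true indirect effect of 0.5 * 0.5 = 0.25
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = 0.5 * m + rng.normal(size=300)
lo, hi = bootstrap_indirect(x, m, y)
```

A confidence interval excluding zero is the significance criterion used for the mediated paths in the study; Mplus extends the same idea to multiple parallel mediators.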
Global and 3D spatial assessment of neuroinflammation in rodent models of Multiple Sclerosis.
Directory of Open Access Journals (Sweden)
Shashank Gupta
Multiple Sclerosis (MS) is a progressive autoimmune inflammatory and demyelinating disease of the central nervous system (CNS). T cells play a key role in the progression of neuroinflammation in MS and also in the experimental autoimmune encephalomyelitis (EAE) animal models for the disease. A technology for quantitative and 3-dimensional (3D) spatial assessment of inflammation in this and other CNS inflammatory conditions is much needed. Here we present a procedure for 3D spatial assessment and global quantification of the development of neuroinflammation based on Optical Projection Tomography (OPT). Applying this approach to the analysis of rodent models of MS, we provide global quantitative data on the major inflammatory component as a function of the clinical course. Our data demonstrate a strong correlation between the development and progression of neuroinflammation and clinical disease in several mouse models and a rat model of MS, refining the information regarding the spatial dynamics of the inflammatory component in EAE. This method provides a powerful tool to investigate the effect of environmental and genetic forces and to assess the therapeutic effects of drug therapy in animal models of MS and other neuroinflammatory/neurodegenerative disorders.
Song-Gui Chen; Chuan-Hu Zhang; Yun-Tian Feng; Qi-Cheng Sun; Feng Jin
2016-01-01
This paper presents a three-dimensional (3D) parallel multiple-relaxation-time lattice Boltzmann model (MRT-LBM) for Bingham plastics which overcomes numerical instabilities in the simulation of non-Newtonian fluids for the Bhatnagar–Gross–Krook (BGK) model. The MRT-LBM and several related mathematical models are briefly described. Papanastasiou’s modified model is incorporated for better numerical stability. The impact of the relaxation parameters of the model is studied in detail. The MRT-L...
Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.
2017-01-01
Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.
A coarse-grained model for synergistic action of multiple enzymes on cellulose
Directory of Open Access Journals (Sweden)
Asztalos Andrea
2012-08-01
Background: Degradation of cellulose to glucose requires the cooperative action of three classes of enzymes, collectively known as cellulases. Endoglucanases randomly bind to cellulose surfaces and generate new chain ends by hydrolyzing β-1,4-D-glycosidic bonds. Exoglucanases bind to free chain ends and hydrolyze glycosidic bonds in a processive manner, releasing cellobiose units. Then, β-glucosidases hydrolyze soluble cellobiose to glucose. Optimal synergistic action of these enzymes is essential for efficient digestion of cellulose. Experiments show that as hydrolysis proceeds and the cellulose substrate becomes more heterogeneous, the overall degradation slows down. As catalysis occurs on the surface of crystalline cellulose, several factors affect the overall hydrolysis. Therefore, spatial models of cellulose degradation must capture effects such as enzyme crowding and surface heterogeneity, which have been shown to lead to a reduction in hydrolysis rates. Results: We present a coarse-grained stochastic model for capturing the key events associated with the enzymatic degradation of cellulose at the mesoscopic level. This functional model accounts for the mobility and action of a single cellulase enzyme as well as the synergy of multiple endo- and exo-cellulases on a cellulose surface. The quantitative description of cellulose degradation is calculated on a spatial model by including free and bound states of both endo- and exo-cellulases with explicit reactive surface terms (e.g., hydrogen bond breaking, covalent bond cleavages) and corresponding reaction rates. The dynamical evolution of the system is simulated by including physical interactions between cellulases and cellulose. Conclusions: Our coarse-grained model reproduces the qualitative behavior of endoglucanases and exoglucanases by accounting for the spatial heterogeneity of the cellulose surface as well as other spatial factors such as enzyme crowding. Importantly, it captures
Kolasa-Wiecek, Alicja
2015-04-01
The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere, through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence which completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling of GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emissions, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable, and the model's fit to the actual data is high, at 95%. The backward stepwise regression model for CH4 emissions indicated hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables, with a suitable adjusted coefficient of R² = 0.90. For N2O emission modeling, the obtained coefficient of determination is low, at 43%; a significant variable influencing the amount of N2O emission is peat and fuel wood consumption. Copyright © 2015. Published by Elsevier B.V.
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; and (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance, and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance and (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.
2016-12-01
The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along the Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps was produced that display flood extent, water depth, and flow velocity along with the underlying uncertainty associated with each of the forecasted variables. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
Multi-physics modeling of single/multiple-track defect mechanisms in electron beam selective melting
International Nuclear Information System (INIS)
Yan, Wentao; Ge, Wenjun; Qian, Ya; Lin, Stephen; Zhou, Bin; Liu, Wing Kam; Lin, Feng; Wagner, Gregory J.
2017-01-01
Metallic powder bed-based additive manufacturing technologies have many promising attributes. The single track acts as one fundamental building unit, which largely influences the final product quality such as the surface roughness and dimensional accuracy. A high-fidelity powder-scale model is developed to predict the detailed formation processes of single/multiple-track defects, including the balling effect, single track nonuniformity and inter-track voids. These processes are difficult to observe in experiments; previous studies have proposed different or even conflicting explanations. Our study clarifies the underlying formation mechanisms, reveals the influence of key factors, and guides the improvement of fabrication quality of single tracks. Additionally, the manufacturing processes of multiple tracks along S/Z-shaped scan paths with various hatching distances are simulated to further understand the defects in complex structures. The simulations demonstrate that the hatching distance should be no larger than the width of the remelted region within the substrate rather than the width of the melted region within the powder layer. Thus, single track simulations can provide valuable insight for complex structures.
Xie, Longhan; Li, Jiehong; Li, Xiaodong; Huang, Ledeng; Cai, Siqi
2018-01-01
Hydraulic dampers are used to decrease the vibration of a vehicle, where vibration energy is dissipated as heat. In addition to resulting in energy waste, the damping coefficient in hydraulic dampers cannot be changed during operation. In this paper, an energy-harvesting vehicle damper was proposed to replace traditional hydraulic dampers. The goal is not only to recover kinetic energy from suspension vibration but also to change the damping coefficient during operation according to road conditions. The energy-harvesting damper consists of multiple generators that are independently controlled by switches. One of these generators connects to a tunable resistor for fine tuning the damping coefficient, while the other generators are connected to a control and rectifying circuit, each of which both regenerates electricity and provides a constant damping coefficient. A mathematical model was built to investigate the performance of the energy-harvesting damper. By controlling the number of switched-on generators and adjusting the value of the external tunable resistor, the damping can be fine tuned according to the requirement. In addition to the capability of damping tuning, the multiple controlled generators can output a significant amount of electricity. A prototype was built to test the energy-harvesting damper design. Experiments on an MTS testing system were conducted, with results that validated the theoretical analysis. Experiments show that changing the number of switched-on generators can obviously tune the damping coefficient of the damper and simultaneously produce considerable electricity.
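The damping-tuning idea above can be sketched with the standard electromagnetic damping relation for a generator loaded by a resistance, c = kt*ke/(R_coil + R_load): switching generators on changes the damping in coarse steps, while the tunable external resistor on one generator provides fine adjustment. The circuit details (the fixed-load assumption for the regulated generators in particular) are simplifying assumptions, not the authors' design:

```python
def emf_damping(n_on, kt, ke, r_coil, r_ext):
    """Equivalent viscous damping of a multi-generator harvester.
    One generator sees the tunable external resistor r_ext; the remaining
    n_on - 1 generators are assumed regulated at a fixed load equal to
    their coil resistance (an assumption for illustration)."""
    if n_on == 0:
        return 0.0
    c_fixed = (n_on - 1) * kt * ke / (2 * r_coil)   # coarse steps
    c_tuned = kt * ke / (r_coil + r_ext)            # fine adjustment
    return c_fixed + c_tuned

c_base = emf_damping(1, kt=1.0, ke=1.0, r_coil=1.0, r_ext=1.0)
c_more = emf_damping(2, kt=1.0, ke=1.0, r_coil=1.0, r_ext=1.0)  # extra generator on
c_soft = emf_damping(1, kt=1.0, ke=1.0, r_coil=1.0, r_ext=3.0)  # larger resistor
```

Switching a generator on raises the damping by a fixed increment, and increasing the tunable resistance lowers it continuously, which mirrors the coarse-plus-fine tuning reported in the experiments.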
A bench top experimental model of bubble transport in multiple arteriole bifurcations
International Nuclear Information System (INIS)
Eshpuniyani, Brijesh; Fowlkes, J. Brian; Bull, Joseph L.
2005-01-01
Motivated by a novel gas embolotherapy technique, a bench top vascular bifurcation model is used to investigate the splitting of long bubbles in a series of liquid-filled bifurcations. The developmental gas embolotherapy technique aims to treat cancer by infarcting tumors with gas emboli that are formed by selective acoustic vaporization of ∼6 μm intravascular perfluorocarbon droplets. The resulting gas bubbles are large enough to extend through several vessel bifurcations. The current bench top experiments examine the effects of gravity and flow on bubble transport through multiple bifurcations. The effect of gravity is varied by changing the roll angle of the bifurcating network about its parent tube. Splitting at each bifurcation is nearly even when the roll angle is zero. It is demonstrated that bubbles can either stick at one of the second bifurcations or in the second generation daughter tubes, even though the flow rate in the parent tube is constant. The findings of this work indicate that both gravity and flow are important in determining bubble transport, and suggest that a treatment strategy that includes multiple doses may be effective in delivering emboli to vessels not occluded by the initial dose.
Jacobs, Ingo; Wollny, Anna; Sim, Chu-Won; Horsch, Antje
2016-06-01
In the present study, we tested a serial mindfulness facets-trait emotional intelligence (TEI)-emotional distress-multiple health behaviors mediation model in a sample of N = 427 German-speaking occupational therapists. The mindfulness facets-TEI-emotional distress section of the mediation model revealed partial mediation for the mindfulness facets Act with awareness (Act/Aware) and Accept without judgment (Accept); inconsistent mediation was found for the Describe facet. The serial two-mediator model included three mediational pathways that may link each of the four mindfulness facets with multiple health behaviors. Eight out of 12 indirect effects reached significance and fully mediated the links between Act/Aware and Describe to multiple health behaviors; partial mediation was found for Accept. The mindfulness facet Observe was most relevant for multiple health behaviors, but its relation was not amenable to mediation. Implications of the findings will be discussed. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
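A serial two-mediator indirect effect of the kind tested above is the product of the path coefficients along the chain. The sketch below simulates a hypothetical X → M1 → M2 → Y chain (stand-ins for facet → TEI → distress → health behaviors; the path values are invented for illustration) and recovers the indirect effect via ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(7)

def ols_slopes(y, X):
    """OLS with intercept; returns the slope coefficients only."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Simulated serial chain X -> M1 -> M2 -> Y with known paths a, d, b:
n = 5000
x = rng.normal(size=n)
m1 = 0.5 * x + rng.normal(size=n)          # a path
m2 = -0.6 * m1 + rng.normal(size=n)        # d path
y = -0.4 * m2 + rng.normal(size=n)         # b path

a = ols_slopes(m1, x[:, None])[0]
d = ols_slopes(m2, np.column_stack([m1, x]))[0]      # controls for x
b = ols_slopes(y, np.column_stack([m2, m1, x]))[0]   # controls for m1, x
print(round(a * d * b, 3))   # serial indirect effect, near 0.5*(-0.6)*(-0.4)
```

In practice the significance of such products is usually assessed with bootstrap confidence intervals rather than point estimates alone.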
An extension of the Néel-Brown model for systems with multiple switching pathways
Energy Technology Data Exchange (ETDEWEB)
Roy, Arnab; Kumar, P.S. Anil, E-mail: anil@physics.iisc.ernet.in
2017-02-15
The Néel-Brown model is the most widely accepted model for the description of magnetization reversal by thermal excitation. This model predicts a decreasing average switching field and an increasing width ΔH of the switching field distribution as the temperature is increased, and has been found to hold good on several occasions. However, for a few classes of systems, the temperature dependence of ΔH shows the opposite trend, and so far no satisfactory explanation exists. We present here an experimental study of switching field statistics of permalloy (Ni{sub 80}Fe{sub 20}) thin films on Si(100) grown by pulsed laser ablation. It was seen that the sample deviates from Néel-Brown behavior in the manner described above. We performed calculations based on a natural extension of the Néel-Brown model, which incorporated multiple reversal pathways characterized by a Gaussian distribution of coercive fields. Calculations based on this model for different values of the width parameter σ{sub HSW} show two distinct kinds of behavior. At low values of σ{sub HSW}, the total width ΔH is limited by thermal broadening according to the traditional Néel-Brown expression. This regime is characterized by an increasing ΔH with temperature. For high σ{sub HSW}, the broadening is dominated by σ{sub HSW}, which masks thermal broadening. In this regime, ΔH decreases with increasing temperature. Whereas the experimentally observed temperature dependence of the average switching field was found to be in good agreement with this model, qualitative agreement with regard to the temperature dependence of ΔH could be observed only for relaxation times lower than ~10{sup −40} s, which is much smaller than Néel-Brown relaxation times (10{sup −9}–10{sup −19} s) usually encountered in the literature. - Highlights: • The Néel-Brown model for magnetization reversal over an energy barrier due to thermal excitation is a widely accepted mechanism for magnetization reversal, and has
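The two regimes of ΔH(T) can be reproduced with a toy Monte Carlo version of this multiple-pathway picture. In the sketch below (my own simplified stand-in, with invented parameters rather than the paper's fitted values), each reversal event may proceed along several pathways with Gaussian-distributed intrinsic coercive fields; thermal activation both lowers each pathway's switching field (a Kurkijärvi-style (kT ln f0τ)^(2/3) reduction) and smears it, and the event switches at the lowest pathway:

```python
import numpy as np

rng = np.random.default_rng(0)

def switching_fields(T, sigma_hsw, n_events=20000, H0=100.0,
                     f0tau=1e9, barrier_scale=5000.0):
    """Toy multiple-pathway Neel-Brown extension: per event, sample several
    pathway coercive fields from N(H0, sigma_hsw), apply a thermal
    reduction and smearing, and switch at the minimum. Illustrative only."""
    n_paths = 5
    Hc = rng.normal(H0, sigma_hsw, size=(n_events, n_paths))
    # field-sweep thermal reduction ~ (kT/E0 * ln(f0*tau))**(2/3)
    reduction = (T / barrier_scale * np.log(f0tau)) ** (2.0 / 3.0)
    Hsw = Hc * (1.0 - reduction)
    # thermal smearing of each pathway, growing with temperature
    Hsw += rng.normal(0.0, 0.1 * T ** (2.0 / 3.0) + 1e-6, size=Hsw.shape)
    return Hsw.min(axis=1)

for sigma in (1.0, 20.0):        # low vs high pathway disorder
    widths = [switching_fields(T, sigma).std() for T in (10.0, 100.0)]
    print("sigma_HSW =", sigma, "-> widths:", np.round(widths, 2))
```

For small σ{sub HSW} the thermal smearing dominates and ΔH grows with T; for large σ{sub HSW} the (1 − reduction) factor shrinks the disorder-dominated distribution, so ΔH falls with T, matching the qualitative behavior reported above.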
Baek, Eun Kyeng; Ferron, John M
2013-03-01
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
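The Level-1 structure at issue, AR(1) errors whose autocorrelation and residual variance differ by participant, is easy to simulate, which is also how such sensitivity analyses are typically seeded. The sketch below (a minimal illustration, not the authors' estimation code) generates per-participant AR(1) error series and recovers each participant's lag-1 autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_errors(n, phi, sigma):
    """Level-1 AR(1) errors: e_t = phi * e_{t-1} + sigma * z_t."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0.0, sigma)
    return e

# Three participants with different autocorrelation (phi) and residual
# SD (sigma) -- the across-participant variation the extended MLM allows.
participants = [(0.1, 1.0), (0.5, 1.0), (0.8, 2.0)]
for phi, sigma in participants:
    y = 10.0 + ar1_errors(500, phi, sigma)   # flat baseline + AR(1) noise
    r = y - y.mean()
    lag1 = np.corrcoef(r[:-1], r[1:])[0, 1]  # sample lag-1 autocorrelation
    print(phi, sigma, round(lag1, 2))
```

Forcing a single shared covariance matrix on such data pools these distinct phi and sigma values, which is exactly the misspecification the extended model avoids.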
Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-03-27
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT), and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
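Two core IMM steps can be sketched compactly: computing the mixing probabilities from the mode transition matrix, and blending the per-model estimates into a single output. The code below is a generic textbook-style sketch with invented numbers, not the paper's tuned filter:

```python
import numpy as np

def mixing_probabilities(trans, mu):
    """mu_ij = P(model i at k-1 | model j at k), used to mix model inputs;
    also returns the predicted mode probabilities c_j."""
    trans = np.asarray(trans, dtype=float)
    mu = np.asarray(mu, dtype=float)
    c = mu @ trans                               # predicted mode probs
    return (trans * mu[:, None]) / c[None, :], c

def imm_combine(mu, states, covs):
    """IMM output: probability-weighted state, covariance with the
    spread-of-means term."""
    mu = np.asarray(mu, dtype=float)
    x = sum(m * s for m, s in zip(mu, states))
    P = np.zeros_like(covs[0])
    for m, s, C in zip(mu, states, covs):
        d = (s - x)[:, None]
        P += m * (C + d @ d.T)
    return x, P

# Two motion models tracking [position, velocity] (illustrative values):
x1 = np.array([100.0, 10.0]); P1 = np.diag([4.0, 1.0])
x2 = np.array([102.0, 12.0]); P2 = np.diag([9.0, 4.0])
x, P = imm_combine([0.7, 0.3], [x1, x2], [P1, P2])
print(x)
```

In a full IMM cycle, each model's Kalman filter runs on the mixed inputs, the mode probabilities are updated from the measurement likelihoods, and `imm_combine` produces the fused track output.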
Fission time-scale in experiments and in multiple initiation model
Energy Technology Data Exchange (ETDEWEB)
Karamian, S. A., E-mail: karamian@nrmail.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)
2011-12-15
The rate of fission for highly-excited nuclei is affected by the viscous character of the system motion in deformation coordinates, as was reported for very heavy nuclei with Z{sub C} > 90. The long time-scale of fission can be described in a model of 'fission by diffusion' that includes an assumption of overdamped diabatic motion. The fission-to-spallation ratio at intermediate proton energy could be influenced by the viscosity as well. Within the novel approach of the present work, cross-examination of the fission probability, time-scales, and pre-fission neutron multiplicities results in a consistent interpretation of the whole set of observables. Earlier, these different aspects could be reproduced only in partial simulations without careful coordination.
Effect of multiple Higgs fields on the phase structure of the SU(2)-Higgs model
International Nuclear Information System (INIS)
Wurtz, Mark; Steele, T. G.; Lewis, Randy
2009-01-01
The SU(2)-Higgs model, with a single Higgs field in the fundamental representation and a quartic self-interaction, has a Higgs region and a confinement region which are analytically connected in the parameter space of the theory; these regions thus represent a single phase. The effect of multiple Higgs fields on this phase structure is examined via Monte Carlo lattice simulations. For the case of N≥2 identical Higgs fields, there is no remaining analytic connection between the Higgs and confinement regions, at least when Lagrangian terms that directly couple different Higgs flavors are omitted. An explanation of this result in terms of enhancement from overlapping phase transitions is explored for N=2 by introducing an asymmetry in the hopping parameters of the Higgs fields. It is found that an enhancement of the phase transitions can still occur for a moderate (10%) asymmetry in the resulting hopping parameters.
Second-Generation Non-Covalent NAAA Inhibitors are Protective in a Model of Multiple Sclerosis.
Migliore, Marco; Pontis, Silvia; Fuentes de Arriba, Angel Luis; Realini, Natalia; Torrente, Esther; Armirotti, Andrea; Romeo, Elisa; Di Martino, Simona; Russo, Debora; Pizzirani, Daniela; Summa, Maria; Lanfranco, Massimiliano; Ottonello, Giuliana; Busquet, Perrine; Jung, Kwang-Mook; Garcia-Guzman, Miguel; Heim, Roger; Scarpelli, Rita; Piomelli, Daniele
2016-09-05
Palmitoylethanolamide (PEA) and oleoylethanolamide (OEA) are endogenous lipid mediators that suppress inflammation. Their actions are terminated by the intracellular cysteine amidase, N-acylethanolamine acid amidase (NAAA). Even though NAAA may offer a new target for anti-inflammatory therapy, the lipid-like structures and reactive warheads of current NAAA inhibitors limit the use of these agents as oral drugs. A series of novel benzothiazole-piperazine derivatives that inhibit NAAA in a potent and selective manner by a non-covalent mechanism are described. A prototype member of this class (8) displays high oral bioavailability, access to the central nervous system (CNS), and strong activity in a mouse model of multiple sclerosis (MS). This compound exemplifies a second generation of non-covalent NAAA inhibitors that may be useful in the treatment of MS and other chronic CNS disorders. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Detecting multiple sclerosis lesions with a fully bioinspired visual attention model
Villalon-Reina, Julio; Gutierrez-Carvajal, Ricardo; Thompson, Paul M.; Romero-Castro, Eduardo
2013-11-01
The detection, segmentation and quantification of multiple sclerosis (MS) lesions on magnetic resonance images (MRI) has been a very active field for the last two decades, driven by the need to correlate these measures with the effectiveness of pharmacological treatment. A myriad of methods has been developed, but most are nonspecific to lesion type and segment lesions in their acute and chronic phases together. Radiologists, on the other hand, are able to distinguish between several stages of the disease on different types of MRI images. The main motivation of the work presented here is to computationally emulate the visual perception of the radiologist by using modeling principles of the neuronal centers along the visual system. Using this approach, we are able to detect the lesions in the majority of the images in our population sample. This type of approach also allows us to study and improve the analysis of brain networks by introducing a priori information.
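A basic building block of such bioinspired attention models is the center-surround difference: fine-scale minus coarse-scale Gaussian responses highlight small bright anomalies against their background. The sketch below is a generic Itti-Koch-style illustration of that mechanism on a synthetic image, not the authors' actual model:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using only NumPy."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2)); k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, out)

def center_surround_saliency(img, scales=((1, 4), (2, 8))):
    """Sum of |center - surround| differences across (fine, coarse)
    Gaussian scale pairs, normalized to [0, 1]. Illustrative sketch."""
    sal = np.zeros_like(img, dtype=float)
    for sc, ss in scales:
        sal += np.abs(gaussian_blur(img, sc) - gaussian_blur(img, ss))
    return sal / sal.max()

# Synthetic "scan": dark background with one small bright lesion-like blob.
img = np.zeros((64, 64)); img[30:34, 40:44] = 1.0
sal = center_surround_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(peak)   # saliency peak lands near the blob
```

Full attention models add orientation and color channels, across-scale normalization, and a winner-take-all stage on top of this difference map.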
Multiplicity of pre-scission charged particle emission by a statistical model
International Nuclear Information System (INIS)
Matsuse, Takehiro
1996-01-01
By introducing a limitation (E{sub cut-off}) so as not to excite all statistically permitted scission partitions in the phase integral at the scission point, we try to reproduce the multiplicity of pre-scission charged-particle emission in {sup 86}Kr (E{sub lab} = 890 MeV) + {sup 27}Al by the cascade calculation of the extended Hauser-Feshbach method (EHM). The physical picture is explained from the point of view of the lifetime of the compound nucleus in the statistical model. When the E{sub cut-off} parameter is about 80 MeV, the scission cross section and the multiplicity of pre-scission charged particles appear to be reproduced. The average pre-scission time is about 1.7 x 10{sup -20} s. The essential problem of the lifetime of compound nuclei is discussed. (S.Y.)
Modeling multiple visual words assignment for bag-of-features based medical image retrieval
Wang, Jim Jing-Yan; Almasri, Islam
2012-01-01
In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patch and key points with SIFT descriptor. To improve the bag-of-features method, we first model the assignment of local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry our experiments on ImageCLEFmed datasets. Experiments' results show that our proposed method exceeds the performances of traditional solutions and works well for the bag-of-features based medical image retrieval tasks.
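The reconstruction step described above can be sketched with an equality-constrained least-squares solve: reconstruct a local descriptor from its neighboring visual words with weights that sum to one (the LLE-style closed form via the local Gram matrix). This is a simplified stand-in for the paper's QP assignment; in particular, the nonnegativity constraint is dropped in this sketch:

```python
import numpy as np

def reconstruction_weights(f, words):
    """Weights w (sum to 1) minimizing ||f - w @ words||^2, solved in
    closed form from the local Gram matrix. Sketch of the QP-assignment
    idea without the w >= 0 constraint."""
    D = words - f                       # shift so f sits at the origin
    G = D @ D.T                         # local Gram matrix
    G += 1e-8 * np.trace(G) * np.eye(len(G))   # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                  # enforce the sum-to-one constraint

# Toy vocabulary of 4 visual words in a 3-D descriptor space:
words = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
f = np.array([0.5, 0.5, 0.0])           # local feature to encode
w = reconstruction_weights(f, words)
print(w, w @ words)                     # soft multi-word assignment
```

The resulting weights serve as the contribution functions: instead of hard-assigning `f` to its single nearest word, its mass is spread over the words that best reconstruct it.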
Directory of Open Access Journals (Sweden)
Aleksandar Grkić
2009-01-01
Multiple-plate friction clutches and brakes are used for gear shifting within the planetary gear trains of motor vehicles. The developed simulation model of the friction clutch and brake enables the simulation and analysis of the planetary gear train's transitional processes during gear shifting and provides identification of the relevant parameters without building numerous physical prototypes. Costs are thus reduced and the time for developing new gear trains shortened, while product quality is increased. The simulation model can additionally be used in developing control systems for planetary gear trains, to define the required characteristics of their components.
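The transitional process during engagement can be illustrated with a minimal two-inertia model: a constant friction torque drags the input and output sides together until the slip speed reaches zero. This is a generic textbook sketch with invented numbers, not the developed simulation model itself:

```python
def engage(J1=0.5, J2=1.0, w1=200.0, w2=120.0, T_f=60.0, dt=1e-4):
    """Toy clutch engagement: inertias J1, J2 (kg*m^2) at speeds w1 > w2
    (rad/s), coupled by constant friction torque T_f (N*m), integrated
    with explicit Euler until lock-up. All values are illustrative."""
    t = 0.0
    while w1 - w2 > 0.0:
        w1 -= T_f / J1 * dt      # friction decelerates the input side
        w2 += T_f / J2 * dt      # ...and accelerates the output side
        t += dt
    return t, w1

t_lock, w_lock = engage()
print(round(t_lock, 3), round(w_lock, 1))
```

The lock-up time follows analytically from the slip dynamics d(Δω)/dt = −T_f(1/J1 + 1/J2), and the lock-up speed matches angular-momentum conservation, which is a quick sanity check on the integration. A full gear-shift model adds the hydraulic actuation pressure profile, a slip-dependent friction coefficient, and the gear train kinematics.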