WorldWideScience

Sample records for model parameter choices

  1. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when in expectation each comparison set of that cardinality occurs the same number of times, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
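
    Below is a minimal illustrative sketch (not from the paper) of maximum likelihood estimation for the Bradley-Terry special case of this model family, using invented win counts; the scale is fixed by pinning one strength parameter to zero, since the model is identified only up to a shift.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(theta, wins):
        """Negative Bradley-Terry log-likelihood; wins[i, j] = times item i beat item j."""
        n = len(theta)
        ll = 0.0
        for i in range(n):
            for j in range(n):
                if i != j and wins[i, j] > 0:
                    # log P(i beats j) = theta_i - log(exp(theta_i) + exp(theta_j))
                    ll += wins[i, j] * (theta[i] - np.logaddexp(theta[i], theta[j]))
        return -ll

    # Toy pairwise comparison data for 4 items (synthetic, for illustration only).
    wins = np.array([[0, 8, 6, 7],
                     [2, 0, 5, 6],
                     [4, 5, 0, 4],
                     [3, 4, 6, 0]], dtype=float)

    def objective(free):
        # Pin theta_0 = 0 to remove the shift indeterminacy.
        return neg_log_likelihood(np.concatenate(([0.0], free)), wins)

    res = minimize(objective, np.zeros(3), method="BFGS")
    print("estimated strengths:", np.concatenate(([0.0], res.x)))
    ```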

  2. Modeling extreme events: Sample fraction adaptive choice in parameter estimation

    Science.gov (United States)

    Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata

    2012-09-01

    When modeling extreme events there are a few primordial parameters, among which we highlight the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution, and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for a careful choice of k. Choosing some well-known estimators of those parameters, we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
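
    As a hedged illustration of the bias-variance trade-off in k described above, the sketch below computes the classical Hill estimator of the extreme value index over several values of k on a simulated Pareto sample; the adaptive heuristic of the paper itself is not reproduced.

    ```python
    import numpy as np

    def hill_estimator(sample, k):
        """Hill estimate of a positive extreme value index using the k largest observations."""
        x = np.sort(sample)[::-1]           # descending order statistics
        logs = np.log(x[:k + 1])
        return np.mean(logs[:k] - logs[k])  # average log-excess over the (k+1)-th largest

    rng = np.random.default_rng(0)
    sample = rng.pareto(a=2.0, size=2000) + 1.0   # Pareto tail, true extreme value index 1/a = 0.5

    for k in (20, 50, 100, 500):
        print(k, hill_estimator(sample, k))       # small k: high variance; large k: high bias
    ```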

  3. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
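
    The basic operation behind these trade-off surfaces is extracting the non-dominated (Pareto-optimal) set from sampled objective values. The sketch below shows this filter on synthetic two-objective data (both objectives minimized); it is illustrative only and does not reproduce the cut-HDMR metamodel or the evolutionary solver used in the study.

    ```python
    import numpy as np

    def pareto_front(points):
        """Boolean mask of non-dominated rows (all objectives are minimized)."""
        n = points.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            if not mask[i]:
                continue
            # Rows dominated by point i: no better in any objective, worse in at least one.
            dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
            mask[dominated] = False
        return mask

    rng = np.random.default_rng(1)
    objs = rng.random((200, 2))                 # e.g., precipitation error vs. skin-temperature error
    front = objs[pareto_front(objs)]
    print(front[np.argsort(front[:, 0])])       # the trade-off curve, sorted along the first objective
    ```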

  4. The methodology of choice Cam-Clay model parameters for loess subsoil

    Science.gov (United States)

    Nepelski, Krzysztof; Błazik-Borowa, Ewa

    2018-01-01

    The paper deals with a calibration method for an FEM subsoil model described by the constitutive Cam-Clay model. A four-storey residential building and the solid substrate are modelled. Identification of the substrate is made using research drilling, CPT static tests, the DMT Marchetti dilatometer, and laboratory tests. The latter are performed on intact soil specimens taken from a wide planning trench at the foundation depth. The real building settlements were measured as the vertical displacement of benchmarks. These measurements were carried out periodically during the erection of the building and its operation. Initially, the Cam-Clay model parameters were determined on the basis of the laboratory tests; later, they were corrected by taking into consideration the results of numerical analyses (of the whole building and its parts) and the real building settlements.

  5. Implications of the subjectivity in hydrologic model choice and parameter identification on the portrayal of climate change impact

    Science.gov (United States)

    Mendoza, Pablo; Clark, Martyn; Rajagopalan, Balaji; Mizukami, Naoki; Gutmann, Ethan; Newman, Andy; Barlage, Michael; Brekke, Levi; Arnold, Jeffrey

    2014-05-01

    Climate change studies involve several methodological choices that affect the hydrological sensitivities obtained, including emission scenarios, climate models, downscaling techniques and hydrologic modeling approaches. Among these, hydrologic model structure selection (i.e. the set of equations that describe catchment processes) and parameter identification are particularly relevant and usually have a strong subjective component. This subjectivity is not limited to engineering applications, but also extends to many of our research studies, resulting in problems such as missing processes in our models, inappropriate parameterizations and compensatory effects of model parameters (i.e. getting the right answers for the wrong reasons). The goal of this research is to assess the impact of our modeling decisions on projected changes in water balance and catchment behavior for future climate scenarios. Additionally, we aim to better understand the relative importance of hydrologic model structures and parameters in the portrayal of climate change impact. Therefore, we compare hydrologic sensitivities coming from four different model structures (PRMS, VIC, Noah and Noah-MP) with those coming from parameter sets identified using different decisions related to model calibration (objective function, multiple local optima and calibration forcing dataset). We found that both model structure selection and parameter estimation strategy (objective function and forcing dataset) affect the direction and magnitude of the climate change signal. Furthermore, the relative effect of subjective decisions on projected variations of catchment behavior depends on the hydrologic signature measure analyzed. Finally, the choice among parameter sets with similar values of the objective function may not affect projected current and future changes in water balance, but may lead to very different sensitivities in hydrologic behavior.
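
    For readers unfamiliar with the calibration choices mentioned above, the sketch below implements two common hydrologic objective functions, the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE); the streamflow values are invented and the study's own calibration setup is not reproduced.

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kge(sim, obs):
        """Kling-Gupta efficiency (Gupta et al., 2009 formulation)."""
        r = np.corrcoef(sim, obs)[0, 1]       # linear correlation
        alpha = sim.std() / obs.std()         # variability ratio
        beta = sim.mean() / obs.mean()        # bias ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2])   # toy observed streamflow
    sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4, 2.0])   # toy simulated streamflow
    print("NSE:", nse(sim, obs), "KGE:", kge(sim, obs))
    ```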

  6. Discrete Choice Models with Random Parameters in R: The Rchoice Package

    Directory of Open Access Journals (Sweden)

    Mauricio Sarrias

    2016-10-01

    Rchoice is a package in R for estimating models with individual heterogeneity for both cross-sectional and panel (longitudinal) data. In particular, the package allows binary, ordinal and count responses, as well as continuous and discrete covariates. Individual heterogeneity is modeled by allowing the parameter associated with each observed variable (e.g., its coefficient) to vary randomly across individuals according to some pre-specified distribution. A simulated maximum likelihood method is implemented for the estimation of the moments of the distributions. In addition, functions for plotting the conditional individual-specific coefficients and their confidence intervals are provided. This article is a general description of Rchoice, and all functionalities are illustrated using real databases.

  7. Nonlinear model-based control of the Czochralski process III: Proper choice of manipulated variables and controller parameter scheduling

    Science.gov (United States)

    Neubert, M.; Winkler, J.

    2012-12-01

    This contribution continues an article series [1,2] about the nonlinear model-based control of the Czochralski crystal growth process. The key idea of the presented approach is to use a sophisticated combination of nonlinear model-based and conventional (linear) PI controllers for tracking of both crystal radius and growth rate. Using heater power and pulling speed as manipulated variables, several controller structures are possible. The present part tries to systematize the properties of the materials to be grown in order to obtain unambiguous decision criteria for the most profitable choice of the controller structure. For this purpose a material-specific constant M, called interface mobility, and a more process-specific constant S, called system response number, are introduced. While the first one summarizes important material properties like thermal conductivity and latent heat, the latter one characterizes the process by evaluating the average axial thermal gradients at the phase boundary and the actual growth rate at which the crystal is grown. Furthermore, these characteristic numbers are useful for establishing a scheduling strategy for the PI controller parameters in order to improve the controller performance. Finally, both numbers give a better understanding of the general thermal system dynamics of the Czochralski technique.

  8. THE INFLUENCE OF CONVERSION MODEL CHOICE FOR EROSION RATE ESTIMATION AND THE SENSITIVITY OF THE RESULTS TO CHANGES IN THE MODEL PARAMETER

    Directory of Open Access Journals (Sweden)

    Nita Suhartini

    2010-06-01

    A study of soil erosion rates was carried out on a gentle, long slope of a cultivated area in Ciawi - Bogor, using the 137Cs technique. The objective of the present study was to evaluate the applicability of the 137Cs technique for obtaining spatially distributed information on soil redistribution in a small catchment. This paper reports the results of the choice of conversion model for erosion rate estimates and the sensitivity of the results to changes in the model parameters. For this purpose, a small site was selected, namely land use I (LU-I). The top of a slope was chosen as a reference site. The erosion/deposition rate at individual sampling points was estimated using three conversion models, namely the Proportional Model (PM), Mass Balance Model 1 (MBM1) and Mass Balance Model 2 (MBM2). A comparison of the conversion models showed that the lowest value is obtained by the PM. MBM1 gave values close to those of MBM2, but MBM2 gave the more reliable values. In this study, a sensitivity analysis suggests that the conversion models are sensitive to changes in parameters that depend on the site conditions, but insensitive to changes in parameters related to the onset of 137Cs fallout input. Keywords: soil erosion, environmental radioisotope, cesium

  9. ParaChoice Model.

    Energy Technology Data Exchange (ETDEWEB)

    Heimer, Brandon Walter [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Levinson, Rebecca Sobel [Sandia National Lab. (SNL-CA), Livermore, CA (United States); West, Todd H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-12-01

    Analysis with the ParaChoice model addresses three barriers from the VTO Multi-Year Program Plan: availability of alternative fuels and electric charging station infrastructure, availability of AFVs and electric drive vehicles, and consumer reluctance to purchase new technologies. In this fiscal year, we first examined the relationship between the availability of alternative fuels and station infrastructure. Specifically, we studied how electric vehicle charging infrastructure affects the ability of EVs to compete with vehicles that rely on mature, conventional petroleum-based fuels. Second, we studied how the availability of less costly AFVs promotes their representation in the LDV fleet. Third, we used ParaChoice trade space analyses to help inform which consumers are reluctant to purchase new technologies. Last, we began analysis of impacts of alternative energy technologies on Class 8 trucks to isolate those that may most efficaciously advance HDV efficiency and petroleum use reduction goals.

  10. Choice of pesticide fate models

    International Nuclear Information System (INIS)

    Balderacchi, Matteo; Trevisan, Marco; Vischetti, Costantino

    2006-01-01

    The choice of a pesticide fate model at the field scale is linked to the available input data. The article describes the available pesticide fate models at the field scale and gives guidelines for the choice of a suitable model as a function of the input data required.

  11. A practical test for the choice of mixing distribution in discrete choice models

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2007-01-01

    The choice of a specific distribution for random parameters of discrete choice models is a critical issue in transportation analysis. Indeed, various pieces of research have demonstrated that an inappropriate choice of the distribution may lead to serious bias in model forecast and in the estimated...

  12. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. For choosing the regularization parameter for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the norm of the regularized solution. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  13. Parameter choice in Banach space regularization under variational inequalities

    International Nuclear Information System (INIS)

    Hofmann, Bernd; Mathé, Peter

    2012-01-01

    The authors study parameter choice strategies for the Tikhonov regularization of nonlinear ill-posed problems in Banach spaces. The effectiveness of any parameter choice for obtaining convergence rates depends on the interplay of the solution smoothness and the nonlinearity structure, and it can be expressed concisely in terms of variational inequalities. Such inequalities are link conditions between the penalty term, the norm misfit and the corresponding error measure. The parameter choices under consideration include an a priori choice, the discrepancy principle as well as the Lepskii principle. For the convenience of the reader, the authors review in an appendix a few instances where the validity of a variational inequality can be established. (paper)
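
    As a simple point of reference for the a posteriori rules mentioned above, the sketch below applies the discrepancy principle to Tikhonov regularization of a linear least-squares problem in a Euclidean setting; the Banach-space theory of the paper is far more general, and the noise level delta is assumed known.

    ```python
    import numpy as np

    def tikhonov(A, y, alpha):
        """Tikhonov-regularized least-squares solution for a linear operator A."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    def discrepancy_principle(A, y, delta, tau=1.1, alpha0=1.0, q=0.7, max_iter=60):
        """Decrease alpha geometrically until the residual norm drops below tau * delta."""
        alpha = alpha0
        for _ in range(max_iter):
            x = tikhonov(A, y, alpha)
            if np.linalg.norm(A @ x - y) <= tau * delta:
                return alpha, x
            alpha *= q
        return alpha, x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 20))
    x_true = rng.standard_normal(20)
    noise = 0.05 * rng.standard_normal(50)
    y = A @ x_true + noise
    alpha, x_hat = discrepancy_principle(A, y, delta=np.linalg.norm(noise))
    print("chosen alpha:", alpha, "reconstruction error:", np.linalg.norm(x_hat - x_true))
    ```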

  14. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded in an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models.
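
    A minimal sketch of the idea, assuming the simplest possible lumped-parameter model: a spring-dashpot (optionally with an added mass) whose complex dynamic stiffness approximates the frequency-dependent foundation impedance; the numerical values are purely illustrative and not taken from the report.

    ```python
    import numpy as np

    def dynamic_stiffness(omega, k, c, m=0.0):
        """Complex impedance K(omega) = k - omega^2 m + i omega c of a spring-dashpot-mass model."""
        return k - m * omega ** 2 + 1j * omega * c

    k, c, m = 2.0e8, 5.0e6, 1.0e5       # static stiffness [N/m], dashpot [Ns/m], added mass [kg]
    for f in (0.5, 2.0, 10.0):           # excitation frequencies in Hz
        w = 2 * np.pi * f
        K = dynamic_stiffness(w, k, c, m)
        print(f, abs(K), np.angle(K))    # magnitude and phase of the foundation impedance
    ```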

  15. Misclassification in binary choice models

    Czech Academy of Sciences Publication Activity Database

    Meyer, B. D.; Mittag, Nikolas

    2017-01-01

    Roč. 200, č. 2 (2017), s. 295-311 ISSN 0304-4076 R&D Projects: GA ČR(CZ) GJ16-07603Y Institutional support: Progres-Q24 Keywords : measurement error * binary choice models * program take-up Subject RIV: AH - Economics OBOR OECD: Economic Theory Impact factor: 1.633, year: 2016

  16. Misclassification in binary choice models

    Czech Academy of Sciences Publication Activity Database

    Meyer, B. D.; Mittag, Nikolas

    2017-01-01

    Roč. 200, č. 2 (2017), s. 295-311 ISSN 0304-4076 Institutional support: RVO:67985998 Keywords : measurement error * binary choice models * program take-up Subject RIV: AH - Economics OBOR OECD: Economic Theory Impact factor: 1.633, year: 2016

  17. A nested recursive logit model for route choice analysis

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2015-01-01

    We propose a route choice model that relaxes the independence from irrelevant alternatives property of the logit model by allowing scale parameters to be link specific. Similar to the recursive logit (RL) model proposed by Fosgerau et al. (2013), the choice of path is modeled as a sequence of lin...

  18. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites. This in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as the most attractive.
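
    For context on the regret-based strategy mentioned above, the sketch below evaluates attribute-level regrets and choice probabilities under the classical Random Regret Minimisation rule; the attributes, their values, and the taste parameters are invented and do not come from this study.

    ```python
    import numpy as np

    def rrm_regret(X, beta):
        """X[j, m]: attribute m of alternative j. Returns the total regret of each alternative."""
        n_alt = X.shape[0]
        regret = np.zeros(n_alt)
        for i in range(n_alt):
            for j in range(n_alt):
                if j != i:
                    # Attribute-level regret: ln(1 + exp(beta_m * (x_jm - x_im)))
                    regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
        return regret

    # Hypothetical alternatives described by search time [min] and an information-quality score.
    X = np.array([[20.0, 3.0],
                  [15.0, 2.0],
                  [25.0, 4.0]])
    beta = np.array([-0.1, 0.8])          # negative weight on time, positive on quality
    R = rrm_regret(X, beta)
    p = np.exp(-R) / np.exp(-R).sum()     # logit-type choice probabilities over negative regret
    print("choice probabilities:", p)
    ```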

  19. Comparison of Vehicle Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, Thomas S. [Argonne National Lab. (ANL), Argonne, IL (United States); Levinson, Rebecca S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooker, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Liu, Changzheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lin, Zhenhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Birky, Alicia [Energetics Incorporated, Columbia, MD (United States); Kontou, Eleftheria [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-10-31

    Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared — LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT — have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but were the same or very similar where possible. Projections by all models were in close agreement only in the first few years. Although the projections from LVCFlex, MA3T, LAVE-Trans, and ParaChoice were in qualitative agreement, there were significant differences in sales shares given by the different models for individual powertrain types, particularly in later years (2030 and later). For example, projected sales shares of conventional spark-ignition vehicles in 2030 for a given scenario ranged from 35% to 74%. Reasons for such differences are discussed, recognizing that these models were not developed to give quantitatively accurate predictions of future sales shares, but to represent vehicles markets realistically and capture the connections between sales and important influences. Model features were also compared at a high level, and suggestions for further comparison

  20. Comparison of Vehicle Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, Thomas S. [Argonne National Lab. (ANL), Argonne, IL (United States); Levinson, Rebecca S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooker, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Liu, Changzheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lin, Zhenhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Birky, Alicia [Energetics Incorporated, Columbia, MD (United States); Kontou, Eleftheria [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-10-01

    Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared — LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT — have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but were the same or very similar where possible.

  1. Model choice in nonnested families

    CERN Document Server

    Pereira, Basilio de Bragança

    2016-01-01

    This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.

  2. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. Which of the two approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.

  3. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. Which of the two approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares the performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power to the HCM in our data. • The costs of estimating the HCM seem justified when learning more about taste heterogeneity is a major study objective.

  4. Gas-particle partitioning of semi-volatile organics on organic aerosols using a predictive activity coefficient model: analysis of the effects of parameter choices on model performance

    Science.gov (United States)

    Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.

    The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction were augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient Kp and quantitate the predictive improvements of using the activity coefficient. The slope of the log Kp vs. log pL0 correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of the activity coefficients γi,om. A stepwise regression analysis of the partitioning model revealed that, for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γi,om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (KOA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log Kp-log KOA correlation was not constant over the aerosol types and SOCs used in the study, and the use of KOA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.

  5. Modeling one-choice and two-choice driving tasks.

    Science.gov (United States)

    Ratcliff, Roger

    2015-08-01

    An experiment is presented in which subjects were tested on both one-choice and two-choice driving tasks and on non-driving versions of them. Diffusion models for one- and two-choice tasks were successful in extracting model-based measures from the response time and accuracy data. These include measures of the quality of the information from the stimuli that drove the decision process (drift rate in the model), the time taken up by processes outside the decision process and, for the two-choice model, the speed/accuracy decision criteria that subjects set. Drift rates were only marginally different between the driving and non-driving tasks, indicating that nearly the same information was used in the two kinds of tasks. The tasks differed in the time taken up by other processes, reflecting the difference between them in response processing demands. Drift rates were significantly correlated across the two two-choice tasks showing that subjects that performed well on one task also performed well on the other task. Nondecision times were correlated across the two driving tasks, showing common abilities on motor processes across the two tasks. These results show the feasibility of using diffusion modeling to examine decision making in driving and so provide for a theoretical examination of factors that might impair driving, such as extreme aging, distraction, sleep deprivation, and so on.
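
    A hedged sketch of the kind of model referred to above: an Euler-Maruyama simulation of a two-choice drift-diffusion process that produces choices and response times; the parameters are illustrative and Ratcliff's actual fitting procedure is not reproduced.

    ```python
    import numpy as np

    def simulate_ddm(drift, boundary, ndt, n_trials=1000, dt=0.001, noise=1.0, seed=0):
        """Return response times [s] and choices (1 = upper boundary, 0 = lower boundary)."""
        rng = np.random.default_rng(seed)
        rts, choices = [], []
        for _ in range(n_trials):
            x, t = 0.0, 0.0                       # unbiased start, midway between boundaries
            while abs(x) < boundary / 2:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts.append(t + ndt)                   # add non-decision time
            choices.append(1 if x > 0 else 0)
        return np.array(rts), np.array(choices)

    rts, choices = simulate_ddm(drift=1.5, boundary=2.0, ndt=0.3)
    print("accuracy:", choices.mean(), "mean RT:", rts.mean())
    ```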

  6. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation. … are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments …

  7. Hybrid discrete choice models: Gained insights versus increasing effort.

    Science.gov (United States)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. Which of the two approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Response model parameter linking

    NARCIS (Netherlands)

    Barrett, M.L.D.

    2015-01-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require

  9. Dynamic cognitive models of intertemporal choice.

    Science.gov (United States)

    Dai, Junyi; Pleskac, Timothy J; Pachur, Thorsten

    2018-03-24

    Traditionally, descriptive accounts of intertemporal choice have relied on static and deterministic models that assume alternative-wise processing of the options. Recent research, by contrast, has highlighted the dynamic and probabilistic nature of intertemporal choice and provided support for attribute-wise processing. Currently, dynamic models of intertemporal choice-which account for both the resulting choice and the time course over which the construction of a choice develops-rely exclusively on the framework of evidence accumulation. In this article, we develop and rigorously compare several candidate schemes for dynamic models of intertemporal choice. Specifically, we consider an existing dynamic modeling scheme based on decision field theory and develop two novel modeling schemes-one assuming lexicographic, noncompensatory processing, and the other built on the classical concepts of random utility in economics and discrimination thresholds in psychophysics. We show that all three modeling schemes can accommodate key behavioral regularities in intertemporal choice. When empirical choice and response time data were fit simultaneously, the models built on random utility and discrimination thresholds performed best. The results also indicated substantial individual differences in the dynamics underlying intertemporal choice. Finally, model recovery analyses demonstrated the benefits of including both choice and response time data for more accurate model selection on the individual level. The present work shows how the classical concept of random utility can be extended to incorporate response dynamics in intertemporal choice. Moreover, the results suggest that this approach offers a successful alternative to the dominating evidence accumulation approach when modeling the dynamics of decision making. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. An Application of Discrete Choice Analysis to the Modeling of Public Library Use and Choice Behavior.

    Science.gov (United States)

    Sone, Akio

    1988-01-01

    This study applied discrete choice analysis to the modeling of public library use and choice behavior. Five library use models and two library choice models were estimated from data obtained by a citizen survey in Kashiwa City, Japan. A library choice model was applied to predicting users' library choice under alternative library policies. (17…

  11. Discrete choice modeling of season choice for Minnesota turkey hunters

    Science.gov (United States)

    Schroeder, Susan A.; Fulton, David C.; Cornicelli, Louis; Merchant, Steven S.

    2018-01-01

    Recreational turkey hunting exemplifies the interdisciplinary nature of modern wildlife management. Turkey populations in Minnesota have reached social or biological carrying capacities in many areas, and changes to turkey hunting regulations have been proposed by stakeholders and wildlife managers. This study employed discrete stated choice modeling to enhance understanding of turkey hunter preferences about regulatory alternatives. We distributed mail surveys to 2,500 resident turkey hunters. Results suggest that, compared to season structure and lotteries, additional permits and level of potential interference from other hunters most influenced hunter preferences for regulatory alternatives. Low hunter interference was preferred to moderate or high interference. A second permit issued only to unsuccessful hunters was preferred to no second permit or permits for all hunters. Results suggest that utility is not strictly defined by harvest or an individual's material gain but can involve preference for other outcomes that on the surface do not materially benefit an individual. Discrete stated choice modeling offers wildlife managers an effective way to assess constituent preferences related to new regulations before implementing them. 
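
    As a generic illustration of the estimation machinery behind stated choice analysis (not the study's actual specification), the sketch below fits a simple multinomial (conditional) logit model to synthetic choice data by maximum likelihood; the attributes and sample sizes are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    # Synthetic stated-choice data: X[n, j, k] is attribute k of alternative j in
    # choice situation n; y[n] is the index of the chosen alternative.
    rng = np.random.default_rng(3)
    n_obs, n_alt, n_att = 500, 3, 2
    X = rng.standard_normal((n_obs, n_alt, n_att))
    beta_true = np.array([1.0, -0.5])
    y = (X @ beta_true + rng.gumbel(size=(n_obs, n_alt))).argmax(axis=1)

    def neg_loglik(beta):
        v = X @ beta                                      # systematic utilities
        logp = v - logsumexp(v, axis=1, keepdims=True)    # log choice probabilities
        return -logp[np.arange(n_obs), y].sum()

    res = minimize(neg_loglik, np.zeros(n_att), method="BFGS")
    print("estimated taste parameters:", res.x)
    ```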

  12. Choice of the parameters of the CUSUM algorithms for parameter estimation in the Markov modulated Poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    The CUSUM algorithm for controlling chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to characteristics of the process. A procedure for the estimation of the process parameters was described.

  13. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

    We develop a general framework that extends choice models by including an explicit representation of the process and context of decision making. Process refers to the steps involved in decision making. Context refers to factors affecting the process, focusing in this paper on social networks. … The extended choice framework includes more behavioral richness through the explicit representation of the planning process preceding an action and its dynamics and the effects of context (family, friends, and market) on the process leading to a choice, as well as the inclusion of new types of subjective data...

  14. Consumer Vehicle Choice Model Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Changzheng [ORNL; Greene, David L [ORNL

    2012-08-01

    In response to the Fuel Economy and Greenhouse Gas (GHG) emissions standards, automobile manufacturers will need to adopt new technologies to improve the fuel economy of their vehicles and to reduce the overall GHG emissions of their fleets. The U.S. Environmental Protection Agency (EPA) has developed the Optimization Model for reducing GHGs from Automobiles (OMEGA) to estimate the costs and benefits of meeting GHG emission standards through different technology packages. However, the model does not simulate the impact that increased technology costs will have on vehicle sales or on consumer surplus. As the model documentation states, “While OMEGA incorporates functions which generally minimize the cost of meeting a specified carbon dioxide (CO2) target, it is not an economic simulation model which adjusts vehicle sales in response to the cost of the technology added to each vehicle.” Changes in the mix of vehicles sold, caused by the costs and benefits of added fuel economy technologies, could make it easier or more difficult for manufacturers to meet fuel economy and emissions standards, and impacts on consumer surplus could raise the costs or augment the benefits of the standards. Because the OMEGA model does not presently estimate such impacts, the EPA is investigating the feasibility of developing an adjunct to the OMEGA model to make such estimates. This project is an effort to develop and test a candidate model. The project statement of work spells out the key functional requirements for the new model.

  15. A simplified model of choice behavior under uncertainty

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lin

    2016-08-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that the Ahn et al. (2008) PU model is not optimal due to some incompatible results between our behavioral and modeling data. This study aims to modify the Ahn et al. (2008) PU model into a simplified model, and we collected 145 subjects' IGT performance as benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was mostly found with α approaching zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the power of influence of the parameters α, λ, and A has a hierarchical order in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay-loss-shift rather than foreseeing the long-term outcome. However, there are still other behavioral variables that are not well revealed under these dynamic uncertainty situations. Therefore, the optimal behavioral models may not have been found. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated.
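
    The sketch below is an assumption-laden illustration rather than the paper's model: it implements one common prospect-utility (PVL-type) learning rule with softmax choice, so that the roles of α (utility curvature), λ (loss aversion) and A (recency/learning) can be seen concretely. The payoff scheme and the consistency parameter θ are invented.

    ```python
    import numpy as np

    def prospect_utility(x, alpha, lam):
        """Prospect-type utility: concave for gains, loss-averse for losses."""
        return x ** alpha if x >= 0 else -lam * (abs(x) ** alpha)

    def simulate_pvl(payoffs, alpha=0.5, lam=1.5, A=0.3, theta=1.0, n_trials=100, seed=0):
        """payoffs: function(deck) -> net outcome of one draw from that deck."""
        rng = np.random.default_rng(seed)
        expectancy = np.zeros(4)                    # one expectancy per IGT deck
        choices = []
        for _ in range(n_trials):
            p = np.exp(theta * expectancy)
            p /= p.sum()                            # softmax deck-choice probabilities
            deck = rng.choice(4, p=p)
            u = prospect_utility(payoffs(deck), alpha, lam)
            expectancy[deck] += A * (u - expectancy[deck])   # delta learning rule
            choices.append(deck)
        return np.array(choices)

    def toy_payoffs(deck, rng=np.random.default_rng(1)):
        if deck < 2:                                    # "bad" decks: large gains, larger losses
            return 100 + (-250 if rng.random() < 0.5 else 0)
        return 50 + (-50 if rng.random() < 0.5 else 0)  # "good" decks: smaller gains and losses

    print("deck choice counts:", np.bincount(simulate_pvl(toy_payoffs), minlength=4))
    ```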

  16. Street Choice Logit Model for Visitors in Shopping Districts

    Directory of Open Access Journals (Sweden)

    Ko Kawada

    2014-07-01

    In this study, we propose two models for predicting people's activity. The first model is a pedestrian distribution prediction (or postdiction) model based on multiple regression analysis using space syntax indices of the urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using a multinomial logit model. We performed a questionnaire survey in the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, a sum of weighted space syntax indices and other indices, and estimated the weight parameters by maximum likelihood. These models consider street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the streets where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH differ for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets with high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOPS and WIDTH (positive).

  17. Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram

    2013-01-01

    Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support...... multinomial criteria that involve trade-offs between attributes at the choice set formation stage. This study proposes a novel behavioral paradigm consisting of a hybrid compensatory-noncompensatory choice set formation process, followed by compensatory choice. The behavioral paradigm is represented...

  18. Exclusive queueing model including the choice of service windows

    Science.gov (United States)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
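
    A minimal sketch of the logit-based window choice ingredient described above, assuming a utility that decreases with queue length and distance; the functional form and coefficients are illustrative and may differ from the paper's specification.

    ```python
    import numpy as np

    def window_choice_probabilities(queue_lengths, distances, beta_q=0.5, beta_d=0.2):
        """Softmax over service windows; longer queues and longer walks lower the utility."""
        v = -beta_q * np.asarray(queue_lengths, dtype=float) - beta_d * np.asarray(distances, dtype=float)
        e = np.exp(v - v.max())          # subtract the max for numerical stability
        return e / e.sum()

    # Three windows with current queue lengths [agents] and distances [cells].
    print(window_choice_probabilities([3, 5, 1], [2.0, 1.0, 6.0]))
    ```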

  19. Modeling Choice and Valuation in Decision Experiments

    Science.gov (United States)

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  20. Ridge regression in prediction problems: automatic choice of the ridge parameter.

    Science.gov (United States)

    Cule, Erika; De Iorio, Maria

    2013-11-01

    To date, numerous genetic variants have been identified as associated with diverse phenotypic traits. However, identified associations generally explain only a small proportion of trait heritability and the predictive power of models incorporating only known-associated variants has been small. Multiple regression is a popular framework in which to consider the joint effect of many genetic variants simultaneously. Ordinary multiple regression is seldom appropriate in the context of genetic data, due to the high dimensionality of the data and the correlation structure among the predictors. There has been a resurgence of interest in the use of penalised regression techniques to circumvent these difficulties. In this paper, we focus on ridge regression, a penalised regression approach that has been shown to offer good performance in multivariate prediction problems. One challenge in the application of ridge regression is the choice of the ridge parameter that controls the amount of shrinkage of the regression coefficients. We present a method to determine the ridge parameter based on the data, with the aim of good performance in high-dimensional prediction problems. We establish a theoretical justification for our approach, and demonstrate its performance on simulated genetic data and on a real data example. Fitting a ridge regression model to hundreds of thousands to millions of genetic variants simultaneously presents computational challenges. We have developed an R package, ridge, which addresses these issues. Ridge implements the automatic choice of ridge parameter presented in this paper, and is freely available from CRAN. © 2013 WILEY PERIODICALS, INC.
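
    The authors' own rule is implemented in their R package ridge; as a generic, hedged alternative in Python, the sketch below selects the ridge parameter by leave-one-out cross-validation over a grid on simulated high-dimensional data (many predictors, few truly associated).

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(4)
    n, p = 200, 1000                        # more predictors than samples, as with genetic variants
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:10] = 0.5                         # only a handful of predictors carry signal
    y = X @ beta + rng.standard_normal(n)

    # RidgeCV with a grid of candidate penalties; the default uses efficient leave-one-out CV.
    model = RidgeCV(alphas=np.logspace(-2, 4, 25)).fit(X, y)
    print("selected ridge parameter:", model.alpha_)
    ```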

  1. Modeling Stochastic Route Choice Behaviors with Equivalent Impedance

    Directory of Open Access Journals (Sweden)

    Jun Li

    2015-01-01

    A Logit-based route choice model is proposed to address the overlapping and scaling problems in the traditional multinomial Logit model. The non-overlapping links are defined as a subnetwork, and its equivalent impedance is explicitly calculated in order to simplify network analysis. The overlapping links are repeatedly merged into subnetworks with Logit-based equivalent travel costs. The choice set at each intersection comprises only the virtual equivalent routes without overlapping. In order to capture heterogeneity in perception errors across networks of different sizes, different scale parameters are assigned to subnetworks and are linked to the topological relationships to avoid estimation burden. The proposed model provides an alternative way to model stochastic route choice behaviors without the overlapping and scaling problems, and it still maintains the simple closed-form expression of the MNL model. A link-based loading algorithm based on Dial's algorithm is proposed to obviate route enumeration, making the model suitable for application to large-scale networks. Finally, a comparison between the proposed model and other route choice models is given through numerical examples.
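
    A hedged sketch of the merging step described above, assuming the Logit-based equivalent impedance of a set of parallel, non-overlapping links is the usual logsum composite cost; the paper's exact aggregation may differ in detail.

    ```python
    import numpy as np

    def equivalent_impedance(costs, theta=1.0):
        """Composite cost of parallel links such that the merged link reproduces logit choice flows."""
        costs = np.asarray(costs, dtype=float)
        return -np.log(np.sum(np.exp(-theta * costs))) / theta

    parallel_links = [10.0, 12.0, 15.0]     # travel costs of three parallel, non-overlapping links
    # The equivalent impedance is lower than the cheapest link, reflecting the value of having choices.
    print(equivalent_impedance(parallel_links, theta=0.5))
    ```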

  2. Binary choice models with endogenous regressors

    OpenAIRE

    Christopher Baum; Yingying Dong; Arthur Lewbel; Tao Yang

    2012-01-01

    Dong and Lewbel have developed the theory of simple estimators for binary choice models with endogenous or mismeasured regressors, depending on a `special regressor' as defined by Lewbel (J. Econometrics, 2000). `Control function' methods such as Stata's ivprobit are generally only valid when endogenous regressors are continuous. The estimators proposed here can be used with limited, censored, continuous or discrete endogenous regressors, and have significant advantages over alternatives such...

  3. Discrete Choice Models - Estimation of Passenger Traffic

    DEFF Research Database (Denmark)

    Sørensen, Majken Vildrik

    2003-01-01

    model, data and estimation are described, with a focus on the possibilities/limitations of different techniques. Two special issues of modelling are addressed in further detail, namely data segmentation and estimation of Mixed Logit models. Both issues are concerned with whether individuals can be assumed … for estimation of choice models). For application of the method, an algorithm is provided with a case. Also for the second issue, estimation of Mixed Logit models, a method was proposed. The most commonly used approach to estimate Mixed Logit models is to employ Maximum Simulated Likelihood estimation (MSL) … distributions of coefficients were found. All the shapes of distributions found complied with sound knowledge in terms of which should be uni-modal, sign-specific and/or skewed distributions.

  4. Model for understanding consumer textural food choice.

    Science.gov (United States)

    Jeltema, Melissa; Beckley, Jacqueline; Vahalik, Jennifer

    2015-05-01

    The current paradigm for developing products that will match the marketing messaging is flawed because the drivers of product choice and satisfaction based on texture are misunderstood. Qualitative research across 10 years has led to the thesis explored in this research that individuals have a preferred way to manipulate food in their mouths (i.e., mouth behavior) and that this behavior is a major driver of food choice, satisfaction, and the desire to repurchase. Texture, which is currently thought to be a major driver of product choice, is a secondary factor, and is important only in that it supports the primary driver, mouth behavior. A model for mouth behavior is proposed and the qualitative research supporting the identification of different mouth behaviors is presented. The development of a trademarked typing tool for characterizing mouth behavior is described along with quantitative substantiation of the tool's ability to group individuals by mouth behavior. The use of these four groups to understand textural preferences and the implications for a variety of areas including product design and weight management are explored.

  5. Linking Item Response Model Parameters.

    Science.gov (United States)

    van der Linden, Wim J; Barrett, Michelle D

    2016-09-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of test equating scores on different test forms. This paper argues, however, that the use of item response models does not require any test score equating. Instead, it involves the necessity of parameter linking due to a fundamental problem inherent in the formal nature of these models: their general lack of identifiability. More specifically, item response model parameters need to be linked to adjust for the different effects of the identifiability restrictions used in separate item calibrations. Our main theorems characterize the formal nature of these linking functions for monotone, continuous response models, derive their specific shapes for different parameterizations of the 3PL model, and show how to identify them from the parameter values of the common items or persons in different linking designs.
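
    For a concrete (and much simpler) illustration of what a linking function does, the Python sketch below applies the classical mean/sigma method to the difficulties of common items from two separate calibrations; the numbers are invented, and this is not the characterization derived in the paper.

        # Classical mean/sigma linking of item difficulties across two calibrations.
        import numpy as np

        b_cal1 = np.array([-1.2, -0.4, 0.3, 1.1])   # common-item difficulties, calibration 1
        b_cal2 = np.array([-0.9, -0.1, 0.7, 1.5])   # same items, calibration 2 (different scale)

        A = b_cal1.std(ddof=1) / b_cal2.std(ddof=1)  # slope of the linear linking function
        B = b_cal1.mean() - A * b_cal2.mean()        # intercept

        b_linked = A * b_cal2 + B                    # calibration-2 difficulties on scale 1
        print("A =", A, "B =", B, "linked b:", b_linked)
        # Discriminations transform inversely: a_linked = a_cal2 / A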

  6. INFLUENCE OF ROLLING STOCK VIBROACOUSTICAL PARAMETERS ON THE CHOICE OF RATIONAL VALUES OF LOCOMOTIVE RUNNING GEAR

    Directory of Open Access Journals (Sweden)

    Yu. V. Zelenko

    2016-06-01

    Full Text Available Purpose. The success of traffic operations on the railways of Ukraine depends on the size and state of the operational fleet of electric locomotives. Today, locomotive depots operate physically worn and obsolete locomotives with low reliability, and modernization of these electric locomotives is not economically justified. The aim of this study is to improve the safety of traction rolling stock through frequency analysis of its dynamical system, which allows calculation of the natural (resonant) frequencies of the design and the associated vibration modes. Methodology. The study was conducted using methods of analytical mechanics and mathematical modeling of the operating loads on a freight locomotive running at different speeds on straight and curved track sections. The theoretical value of the work is a technique for choosing the structural scheme and rational parameters of a prospective electric locomotive, taking into account the inertia and stiffness coefficients of the second-order Lagrange equations. Findings. The tasks of the theoretical study and the development of a mathematical model of the spatial vibrations of the electric locomotive are completed. Theoretical studies of the effect of the inertia and stiffness coefficients on the dynamic performance and on the parameter values of the electric locomotive running gear are presented. Originality. The set of developed provisions and obtained results constitutes a practical solution for selecting rational bogie parameters of a mainline freight locomotive for the railways of Ukraine. A concept for choosing the structural scheme and rational parameters of a prospective locomotive is formulated. A method for calculating the spatial oscillations of the electric locomotive is developed to determine its dynamic performance. A software package is developed for processing data from experimental studies of the dynamic parameters of the electric locomotive and comparing the theoretical calculations with the data of full

  7. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    is the dependency of the parameter estimates on the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...

  8. Comparing parameter choice methods for the regularization in the SONAH algorithm

    DEFF Research Database (Denmark)

    Gomes, Jesper Skovhus

    2006-01-01

    . The coefficients that perform this plane-to-plane transformation are found by solving a least squares problem, i.e. the SONAH algorithm minimizes a residual involving an infinite set of elementary waves. Since SONAH solves an inverse problem and since measurement errors are unavoidable in practice, regularization...... is needed. A parameter choice method based on a priori information about the signal-to-noise-ratio (SNR) in the measurement setup is often chosen. However, this parameter choice method may be undesirable since SNR is difficult to determine in practice. In this paper, data based parameter choice methods...... are used in order to determine a regularization parameter. Two such approaches are compared: Generalized Cross-Validation (GCV) and a trade-off curve analysis inspired by the L-curve. Results from computer simulations and from practical measurements with a two-layer microphone array are given...
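
    As a generic illustration of one of the data-based parameter choice methods mentioned here, the Python sketch below evaluates the GCV function for ordinary Tikhonov regularization of a small dense system; the SONAH transfer matrices and microphone-array geometry are not modelled, and the test matrix is random.

        # Generic GCV parameter choice for Tikhonov regularization via the SVD.
        import numpy as np

        def gcv_tikhonov(A, b, lambdas):
            """Return the lambda minimizing GCV for min ||Ax - b||^2 + lambda^2 ||x||^2."""
            m = A.shape[0]
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b                                  # data in the left singular basis
            resid_out = np.linalg.norm(b - U @ beta) ** 2   # component outside range(A)
            best_lam, best_gcv = None, np.inf
            for lam in lambdas:
                f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
                resid = np.sum(((1.0 - f) * beta) ** 2) + resid_out
                gcv = resid / (m - np.sum(f)) ** 2          # residual over squared trace term
                if gcv < best_gcv:
                    best_lam, best_gcv = lam, gcv
            return best_lam

        rng = np.random.default_rng(1)
        A = rng.normal(size=(50, 30))                       # stand-in for a transfer matrix
        b = A @ np.ones(30) + 0.05 * rng.normal(size=50)    # noisy measurements
        print(gcv_tikhonov(A, b, np.logspace(-4, 1, 60)))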

  9. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    Science.gov (United States)

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188

  10. A choice of the parameters of NPP steam generators on the basis of vector optimization

    International Nuclear Information System (INIS)

    Lemeshev, V.U.; Metreveli, D.G.

    1981-01-01

    The problem of optimizing the parameters of designed systems is treated as a multicriterion optimization problem. It is proposed to choose non-dominated (Pareto-optimal) parameters. An algorithm for finding non-dominated solutions is built on the basis of necessary and sufficient conditions for non-dominance. This algorithm has been employed to solve the problem of choosing optimal parameters for the counterflow shell-and-tube steam generator of an NPP of the BRGD type [ru]
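
    The non-dominance (Pareto) filtering step that such an algorithm relies on can be sketched in a few lines; the Python example below simply removes dominated candidates from a random set of two-objective parameter vectors and is not the algorithm of the cited report.

        # Minimal Pareto filter: keep candidates not dominated in any objective
        # (all objectives are to be minimized); data are random placeholders.
        import numpy as np

        def pareto_front(objectives):
            """objectives: (n_candidates, n_objectives) array, all to be minimized."""
            n = objectives.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                        keep[i] = False          # candidate i is dominated by j
                        break
            return np.where(keep)[0]

        rng = np.random.default_rng(0)
        objs = rng.random((20, 2))               # e.g. cost and heat-transfer area (made up)
        print("non-dominated candidates:", pareto_front(objs))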

  11. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

    The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on a MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to a multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra and to calculate energy deposition and doses. Some of the calculation results are presented in the paper.

  12. A link based network route choice model with unrestricted choice set

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Frejinger, Emma; Karlstrom, Anders

    2013-01-01

    This paper considers the path choice problem, formulating and discussing an econometric random utility model for the choice of path in a network with no restriction on the choice set. Starting from a dynamic specification of link choices we show that it is equivalent to a static model...... of the multinomial logit form but with infinitely many alternatives. The model can be consistently estimated and used for prediction in a computationally efficient way. Similarly to the path size logit model, we propose an attribute called link size that corrects utilities of overlapping paths but that is link...... additive. The model is applied to data recording path choices in a network with more than 3000 nodes and 7000 links....

  13. Prior distributions for item parameters in IRT models

    NARCIS (Netherlands)

    Matteucci, M.; S. Mignani, Prof.; Veldkamp, Bernard P.

    2012-01-01

    The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. Firstly, regression trees are implemented in order to build informative

  14. A Small-Sample Choice of the Tuning Parameter in Ridge Regression.

    Science.gov (United States)

    Boonstra, Philip S; Mukherjee, Bhramar; Taylor, Jeremy M G

    2015-07-01

    We propose new approaches for choosing the shrinkage parameter in ridge regression, a penalized likelihood method for regularizing linear regression coefficients, when the number of observations is small relative to the number of parameters. Existing methods may lead to extreme choices of this parameter, which will either not shrink the coefficients enough or shrink them by too much. Within this "small-n, large-p" context, we suggest a correction to the common generalized cross-validation (GCV) method that preserves the asymptotic optimality of the original GCV. We also introduce the notion of a "hyperpenalty", which shrinks the shrinkage parameter itself, and make a specific recommendation regarding the choice of hyperpenalty that empirically works well in a broad range of scenarios. A simple algorithm jointly estimates the shrinkage parameter and regression coefficients in the hyperpenalized likelihood. In a comprehensive simulation study of small-sample scenarios, our proposed approaches offer superior prediction over nine other existing methods.

  15. The Choice of Higher Education and Family Income : An Application of the Choice Model

    OpenAIRE

    金子, 元久; 吉本, 圭一

    1989-01-01

    This paper analyses the effect of family income upon the choice of higher education opportunities by applying the discrete choice model. The data were taken from a tracer survey conducted on high school graduates in 1980. Empirical findings from the analysis may be summarized as follows: 1) The chances of taking up opportunities of higher education are indeed related to family income. This is true for three stages of choice (special training schools and above vs. employment; junior coll...

  16. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
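
    To make the role of the two photosynthesis parameters concrete, the Python sketch below fits one commonly used photosynthesis-irradiance function (exponential saturation, with initial slope alpha and assimilation number Pm) to synthetic data; the paper itself works with a whole suite of such functions and with measured profiles, neither of which is reproduced here.

        # Fit an exponential-saturation P-I curve to synthetic data (illustrative only).
        import numpy as np
        from scipy.optimize import curve_fit

        def pi_curve(I, alpha, Pm):
            """Production as a function of irradiance I (arbitrary units)."""
            return Pm * (1.0 - np.exp(-alpha * I / Pm))

        rng = np.random.default_rng(0)
        I = np.linspace(0.0, 800.0, 40)
        P = pi_curve(I, alpha=0.05, Pm=6.0) + rng.normal(0.0, 0.2, I.size)  # fake "data"

        (alpha_hat, Pm_hat), _ = curve_fit(pi_curve, I, P, p0=[0.01, 1.0])
        print("initial slope:", alpha_hat, "assimilation number:", Pm_hat)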

  17. Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices

    Science.gov (United States)

    Naumova, Valeriya; Peter, Steffen

    2014-12-01

    Inspired by several recent developments in regularization theory, optimization, and signal processing, we present and analyze a numerical approach to multi-penalty regularization in spaces of sparsely represented functions. The sparsity prior is motivated by the largely expected geometrical/structured features of high-dimensional data, which may not be well-represented in the framework of typically more isotropic Hilbert spaces. In this paper, we are particularly interested in regularizers which are able to correctly model and separate the multiple components of additively mixed signals. This situation is rather common as pure signals may be corrupted by additive noise. To this end, we consider a regularization functional composed by a data-fidelity term, where signal and noise are additively mixed, a non-smooth and non-convex sparsity promoting term, and a penalty term to model the noise. We propose and analyze the convergence of an iterative alternating algorithm based on simple iterative thresholding steps to perform the minimization of the functional. By means of this algorithm, we explore the effect of choosing different regularization parameters and penalization norms in terms of the quality of recovering the pure signal and separating it from additive noise. For a given fixed noise level numerical experiments confirm a significant improvement in performance compared to standard one-parameter regularization methods. By using high-dimensional data analysis methods such as principal component analysis, we are able to show the correct geometrical clustering of regularized solutions around the expected solution. Eventually, for the compressive sensing problems considered in our experiments we provide a guideline for a choice of regularization norms and parameters.
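
    The elementary building block of such thresholding algorithms is the soft-thresholding step; the Python sketch below runs plain single-penalty iterative soft-thresholding (ISTA) on a random sparse-recovery problem, and deliberately omits the paper's alternating multi-penalty scheme and non-convex penalties.

        # Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1 (single l1 penalty).
        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista(A, y, lam, n_iter=500):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)
                x = soft_threshold(x - grad / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(60, 200))
        x_true = np.zeros(200)
        x_true[:5] = 3.0                          # sparse "pure signal" (assumed)
        y = A @ x_true + 0.01 * rng.normal(size=60)
        print("recovered support:", np.nonzero(ista(A, y, lam=0.5))[0][:10])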

  18. Choices Matter, but How Do We Model Them?

    Science.gov (United States)

    Brelsford, C.; Dumas, M.

    2017-12-01

    Quantifying interactions between social systems and the physical environment we live within has long been a major scientific challenge. Humans have had such a large influence on our environment that it is no longer reasonable to consider the behavior of an ecological or hydrological system from a purely 'physical' perspective: imagining a system that excludes the influence of human choices and behavior. Understanding the role that human social choices play in the energy-water nexus is crucial for developing accurate models in that space. The relatively new field of socio-hydrology is making progress towards understanding the role humans play in hydrological systems. While this fact is now widely recognized across the many academic fields that study water systems, we have yet to develop a coherent set of theories for how to model the behavior of these complex and highly interdependent socio-hydrological systems. How should we conceptualize hydrological systems as socio-ecological systems (i.e., systems with variables, states, parameters, actors who can control certain variables, and a sense of the desirability of states) within which the rigorous study of feedbacks becomes possible? This talk reviews the state of knowledge of how social decisions around water consumption, allocation, and transport influence and are influenced by the physical hydrology that water also moves within. We cover recent papers in socio-hydrology, engineering, water law, and institutional analysis. There have been several calls within socio-hydrology to model human social behavior endogenously along with the hydrology. These improvements are needed across a range of spatial and temporal scales. We suggest two potential strategies for coupled models that allow endogenous water consumption behavior: a social-first model, which looks for empirical relationships between water consumption and allocation choices and the hydrological state, and a hydrology-first model, in which we look for regularities

  19. Modeling Spanish Mood Choice in Belief Statements

    Science.gov (United States)

    Robinson, Jason R.

    2013-01-01

    This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…

  20. Study on Parameter Choice Methods for the RFMP with Respect to Downward Continuation

    Directory of Open Access Journals (Sweden)

    Martin Gutting

    2017-06-01

    Full Text Available Recently, the regularized functional matching pursuit (RFMP) was introduced as a greedy algorithm for linear ill-posed inverse problems. This algorithm incorporates Tikhonov-Phillips regularization, which implies the necessity of choosing a regularization parameter. In this paper, some known parameter choice methods are evaluated with respect to their performance in the RFMP and its enhancement, the regularized orthogonal functional matching pursuit (ROFMP). As an example of a linear inverse problem, the downward continuation of gravitational field data from the satellite orbit to the Earth's surface is chosen, because it is exponentially ill-posed. For the test scenarios, different satellite heights with several noise-to-signal ratios and kinds of noise are combined. The performances of the parameter choice strategies in these scenarios are analyzed. For example, it is shown that a strongly scattered set of data points is a considerably harder challenge for the regularization than a regular grid. The results indicate, as a first orientation, that generalized cross-validation, the L-curve method and the residual method could be the most appropriate for the RFMP and the ROFMP.

  1. A study on regularization parameter choice in near-field acoustical holography

    DEFF Research Database (Denmark)

    Gomes, Jesper; Hansen, Per Christian

    2008-01-01

    a regularization parameter. These parameter choice methods (PCMs) are attractive, since they require no a priori knowledge about the noise. However, there seems to be no clear understanding of when one PCM is better than the other. This paper presents comparisons of three PCMs: GCV, L-curve and Normalized......), and the Equivalent Source Method (ESM). All combinations of the PCMs and the NAH methods are investigated using simulated measurements with different types of noise added to the input. Finally, the comparisons are carried out for a practical experiment. The aim of this work is to create a better understanding...... of which mechanisms affect the performance of the different PCMs....

  2. Implications of the choice and configuration of hydrologic models on the portrayal of climate change impact

    Science.gov (United States)

    Mendoza, P. A.; Clark, M. P.; Rajagopalan, B.; Mizukami, N.; Gutmann, E. D.

    2013-12-01

    Climate change studies involve several methodological choices that impact the hydrological sensitivities obtained. Among these, hydrologic model structure selection and parameter identification are particularly relevant and usually have a strong subjective component. This subjectivity is not limited to engineering applications, but also extends to many of our research studies, resulting in problems such as missing processes in our models, inappropriate parameterizations and compensatory effects of model parameters. The goal of this research is to identify the role of model structures and parameter values on the assessment of hydrologic sensitivity to climate change. We conduct our study in three basins located in the Colorado Headwaters Region, using four different hydrologic models (PRMS, VIC, Noah and Noah-MP). We first compare both model performance and climate sensitivities using default parameterizations and parameter values calibrated with the Shuffled Complex Evolution algorithm. Our results demonstrate that calibration does not necessarily improve the representation of hydrological processes or decrease inter-model differences in the change of signature measures of hydrologic behavior with respect to a future climate scenario. We found that inter-model differences in hydrologic sensitivities to climate change may be larger than the climate change signal even after models have been calibrated. Results demonstrate that both model choice (after calibration) and parameter selection have important effects on the portrayal of climate change impacts, and work is ongoing to identify more robust modeling strategies that explicitly account for the subjectivity in these choices. [Figures: location of the basins of interest; hydrological models used in this study.]

  3. Complexity effects in choice experiments-based models

    NARCIS (Netherlands)

    Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.

    2012-01-01

    Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual

  4. How the twain can meet: Prospect theory and models of heuristics in risky choice.

    Science.gov (United States)

    Pachur, Thorsten; Suter, Renata S; Hertwig, Ralph

    2017-03-01

    Two influential approaches to modeling choice between risky options are algebraic models (which focus on predicting the overt decisions) and models of heuristics (which are also concerned with capturing the underlying cognitive process). Because they rest on fundamentally different assumptions and algorithms, the two approaches are usually treated as antithetical, or even incommensurable. Drawing on cumulative prospect theory (CPT; Tversky & Kahneman, 1992) as the currently most influential instance of a descriptive algebraic model, we demonstrate how the two modeling traditions can be linked. CPT's algebraic functions characterize choices in terms of psychophysical (diminishing sensitivity to probabilities and outcomes) as well as psychological (risk aversion and loss aversion) constructs. Models of heuristics characterize choices as rooted in simple information-processing principles such as lexicographic and limited search. In computer simulations, we estimated CPT's parameters for choices produced by various heuristics. The resulting CPT parameter profiles portray each of the choice-generating heuristics in psychologically meaningful ways, capturing, for instance, differences in how the heuristics process probability information. Furthermore, CPT parameters can reflect a key property of many heuristics, lexicographic search, and track the environment-dependent behavior of heuristics. Finally, we show, both in an empirical and a model recovery study, how CPT parameter profiles can be used to detect the operation of heuristics. We also address the limits of CPT's ability to capture choices produced by heuristics. Our results highlight an untapped potential of CPT as a measurement tool to characterize the information processing underlying risky choice. Copyright © 2017 Elsevier Inc. All rights reserved.
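
    For readers unfamiliar with the algebra, the Python sketch below writes out the standard CPT value and probability-weighting functions (Tversky & Kahneman, 1992) and evaluates a simple two-outcome gain gamble; the parameter values are the oft-cited textbook estimates, used purely for illustration rather than the profiles estimated in this article.

        # Standard CPT building blocks; parameter values are illustrative defaults.
        import numpy as np

        def value(x, alpha=0.88, beta=0.88, lam=2.25):
            """Power value function with loss aversion (lam)."""
            x = np.asarray(x, dtype=float)
            gains = np.clip(x, 0.0, None) ** alpha
            losses = -lam * np.clip(-x, 0.0, None) ** beta
            return np.where(x >= 0.0, gains, losses)

        def weight(p, gamma=0.61):
            """Inverse-S probability weighting function."""
            return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

        def cpt_two_outcome_gain(x_high, p_high, x_low):
            """CPT value of a gamble paying x_high with prob. p_high, else x_low (both >= 0)."""
            w = weight(p_high)
            return w * value(x_high) + (1.0 - w) * value(x_low)

        print(cpt_two_outcome_gain(100.0, 0.5, 0.0))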

  5. Modeling Toothpaste Brand Choice: An Empirical Comparison of Artificial Neural Networks and Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Tolga Kaya

    2010-11-01

    Full Text Available The purpose of this study is to compare the performances of Artificial Neural Networks (ANN) and Multinomial Probit (MNP) approaches in modeling the choice decision within the fast-moving consumer goods sector. To do this, based on 2597 toothpaste purchases of a panel sample of 404 households, choice models are built and their performances are compared on the 861 purchases of a test sample of 135 households. Results show that ANN's predictions are better while MNP is useful in providing marketing insight.

  6. NEUROBIOLOGY OF ECONOMIC CHOICE: A GOOD-BASED MODEL

    Science.gov (United States)

    Padoa-Schioppa, Camillo

    2012-01-01

    Traditionally the object of economic theory and experimental psychology, economic choice recently became a lively research focus in systems neuroscience. Here I summarize the emerging results and I propose a unifying model of how economic choice might function at the neural level. Economic choice entails comparing options that vary on multiple dimensions. Hence, while choosing, individuals integrate different determinants into a subjective value; decisions are then made by comparing values. According to the good-based model, the values of different goods are computed independently of one another, which implies transitivity. Values are not learned as such, but rather computed at the time of choice. Most importantly, values are compared within the space of goods, independent of the sensori-motor contingencies of choice. Evidence from neurophysiology, imaging and lesion studies indicates that abstract representations of value exist in the orbitofrontal and ventromedial prefrontal cortices. The computation and comparison of values may thus take place within these regions. PMID:21456961

  7. Discrete choice models for commuting interactions

    DEFF Research Database (Denmark)

    Rouwendal, Jan; Mulalic, Ismir; Levkovich, Or

    An emerging quantitative spatial economics literature models commuting interactions by a gravity equation that is mathematically equivalent to a multinomial logit model. This model is widely viewed as restrictive because of the independence of irrelevant alternatives (IIA) property that links...

  8. Profile construction in experimental choice designs for mixed logit models

    NARCIS (Netherlands)

    Sandor, Z; Wedel, M

    2002-01-01

    A computationally attractive model for the analysis of conjoint choice experiments is the mixed multinomial logit model, a multinomial logit model in which it is assumed that the coefficients follow a (normal) distribution across subjects. This model offers the advantage over the standard

  9. Models of Affective Decision Making: How Do Feelings Predict Choice?

    Science.gov (United States)

    Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali

    2016-06-01

    Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. © The Author(s) 2016.

  10. Building aggregate timber supply models from individual harvest choice

    Science.gov (United States)

    Maksym Polyakov; David N. Wear; Robert Huggett

    2009-01-01

    Timber supply has traditionally been modelled using aggregate data. In this paper, we build aggregate supply models for four roundwood products for the US state of North Carolina from a stand-level harvest choice model applied to detailed forest inventory. The simulated elasticities of pulpwood supply are much lower than reported by previous studies. Cross price...

  11. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    by the Armstrong–Frederick model, contained as a special case of the present model for a particular choice of the shape parameter. In contrast to previous work, where shaping the stress-strain loops is derived from multiple internal stress states, this effect is here represented by a single parameter......The paper presents a compact model for cyclic plasticity based on energy in terms of external and internal variables, and plastic yielding described by kinematic hardening and a flow potential with an additive term controlling the nonlinear cyclic hardening. The model is basically described by five...... parameters: external and internal stiffness, a yield stress and a limiting ultimate stress, and finally a parameter controlling the gradual development of plastic deformation. Calibration against numerous experimental results indicates that typically larger plastic strains develop than predicted...

  12. Cognitive models of choice: comparing decision field theory to the proportional difference model.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; González-Vallejo, Claudia

    2009-07-01

    People often face preferential decisions under risk. To further our understanding of the cognitive processes underlying these preferential choices, two prominent cognitive models, decision field theory (DFT; Busemeyer & Townsend, 1993) and the proportional difference model (PD; González-Vallejo, 2002), were rigorously tested against each other. In two consecutive experiments, the participants repeatedly had to choose between monetary gambles. The first experiment provided the reference to estimate the models' free parameters. From these estimations, new gamble pairs were generated for the second experiment such that the two models made maximally divergent predictions. In the first experiment, both models explained the data equally well. However, in the second generalization experiment, the participants' choices were much closer to the predictions of DFT. The results indicate that the stochastic process assumed by DFT, in which evidence in favor of or against each option accumulates over time, described people's choice behavior better than the trade-offs between proportional differences assumed by PD. Copyright © 2009 Cognitive Science Society, Inc.

  13. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...... many issues that deserve attention. This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause...... a sample to be choice-based or governed by a self-selection mechanism. In both cases, there is a possibility that sampling affects the estimation of a population model. It was established in the seventies how choice-based sampling affects the estimation of multinomial logit models. The thesis examines...

  14. Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.

    Science.gov (United States)

    Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan

    2018-02-17

    Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques to model these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.

  15. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments.

    Science.gov (United States)

    Zeilinger, Adam R; Olson, Dawn M; Andow, David A

    2014-08-01

    Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.

  16. Loss Aversion and Inhibition in Dynamical Models of Multialternative Choice

    Science.gov (United States)

    Usher, Marius; McClelland, James L.

    2004-01-01

    The roles of loss aversion and inhibition among alternatives are examined in models of the similarity, compromise, and attraction effects that arise in choices among 3 alternatives differing on 2 attributes. R. M. Roe, J. R. Busemeyer, and J. T. Townsend (2001) have proposed a linear model in which effects previously attributed to loss aversion…

  17. Climate change decision-making: Model & parameter uncertainties explored

    Energy Technology Data Exchange (ETDEWEB)

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision-makers about the likely outcome of policy initiatives, and helps set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, and impacts from these changes, as well as policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by model structure choice, rather than parameter uncertainties.

  18. Determination and evaluation of the parameters affecting the choice of veal meat of the "Ternera de Aliste" quality appellation.

    Science.gov (United States)

    Severiano-Pérez, P; Vivar-Quintana, A M; Revilla, I

    2006-07-01

    The aim of the present work was to determine and assess the parameters affecting the choice of veal under the "Ternera de Aliste" quality appellation. The parameters affecting the choice proved to be colour, taste, odour, hardness and juiciness. Using these parameters, sensory evaluation, both analytical (with trained judges, QDA) and affective (with consumers, the home-use test) was carried out on four veal types, and the relative preferences for the samples assessed. Colour, hardness and losses due to cooking were also analysed instrumentally. The results revealed that the methodology is important for discriminating small differences between samples. The same trend was observed for the results of the panel of judges, consumers, and instrumental analyses regarding both hardness and juiciness. Regarding the determinant parameters in the choice of veal, in raw meat consumers prefer light colours but when expressing their general relative preferences for samples, juiciness, taste and hardness of the cooked meat had the greatest weight.

  19. Choice as a Global Language in Local Practice: A Mixed Model of School Choice in Taiwan

    Science.gov (United States)

    Mao, Chin-Ju

    2015-01-01

    This paper uses school choice policy as an example to demonstrate how local actors adopt, mediate, translate, and reformulate "choice" as neo-liberal rhetoric informing education reform. Complex processes exist between global policy about school choice and the local practice of school choice. Based on the theoretical sensibility of…

  20. Day-to-day route choice modeling incorporating inertial behavior

    NARCIS (Netherlands)

    van Essen, Mariska Alice; Rakha, H.; Vreeswijk, Jacob Dirk; Wismans, Luc Johannes Josephus; van Berkum, Eric C.

    2015-01-01

    Accurate route choice modeling is one of the most important aspects when predicting the effects of transport policy and dynamic traffic management. Moreover, the effectiveness of intervention measures to a large extent depends on travelers’ response to the changes these measures cause. As a

  1. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

    The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. Recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved

  2. Incorporating Responsiveness to Marketing Efforts When Modeling Brand Choice

    NARCIS (Netherlands)

    D. Fok (Dennis); Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    2001-01-01

    textabstractIn this paper we put forward a brand choice model which incorporates responsiveness to marketing efforts as a form of structural heterogeneity. We introduce two latent segments of households. The households in the first segment are assumed to respond to marketing efforts while households

  3. Costly innovators versus cheap imitators: a discrete choice model

    NARCIS (Netherlands)

    Hommes, C.; Zeppini, P.

    2010-01-01

    Two alternative ways to an innovative product or process are R&D investment or imitation of others’ innovation. In this article we propose a discrete choice model with costly innovators and free imitators and study the endogenous dynamics of price and demand in a market with many firms producing a

  4. Random regret-based discrete-choice modelling: an application to healthcare.

    Science.gov (United States)

    de Bekker-Grob, Esther W; Chorus, Caspar G

    2013-07-01

    A new modelling approach for analysing data from discrete-choice experiments (DCEs) has been recently developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential to offer new and policy-relevant insights for health researchers and policy makers.
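
    To make the regret-minimization idea concrete, the Python sketch below computes attribute-level-regret choice probabilities of the Chorus type for three hypothetical treatment alternatives; the attributes and coefficients are invented and are not estimates from the DCE data discussed in the abstract.

        # Sketch of RRM choice probabilities with attribute-level regret.
        import numpy as np

        def rrm_probabilities(X, beta):
            """X: (alternatives x attributes) matrix; beta: attribute coefficients."""
            n_alt = X.shape[0]
            regret = np.zeros(n_alt)
            for i in range(n_alt):
                for j in range(n_alt):
                    if j != i:
                        # regret grows when a competitor beats alternative i on an attribute
                        regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
            expneg = np.exp(-regret)
            return expneg / expneg.sum()

        X = np.array([[0.8, 2.0],     # effectiveness, cost of option A (hypothetical)
                      [0.6, 1.0],     # option B
                      [0.9, 3.0]])    # option C
        beta = np.array([2.0, -1.0])  # hypothetical taste parameters
        print(rrm_probabilities(X, beta))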

  5. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of the algorithms gives a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
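
    As a small aside on the performance measure used above, the Python sketch below implements the Nash-Sutcliffe efficiency and shows how synthetic measurement errors can be injected into an "observed" discharge series; the series itself is random and purely illustrative.

        # Nash-Sutcliffe efficiency plus a synthetic observation-error perturbation.
        import numpy as np

        def nash_sutcliffe(simulated, observed):
            simulated = np.asarray(simulated, dtype=float)
            observed = np.asarray(observed, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        rng = np.random.default_rng(0)
        q_obs = rng.gamma(2.0, 5.0, size=365)                        # synthetic "observed" discharge
        q_sim = q_obs * (1.0 + rng.normal(0.0, 0.10, size=365))      # a model run close to it
        q_obs_err = q_obs * (1.0 + rng.normal(0.0, 0.05, size=365))  # observations with added error
        print(nash_sutcliffe(q_sim, q_obs), nash_sutcliffe(q_sim, q_obs_err))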

  6. Modeling Inertia and Variety Seeking Tendencies in Brand Choice Behavior

    OpenAIRE

    Kapil Bawa

    1990-01-01

    Theories of exploratory behavior suggest that inertia and variety-seeking tendencies may coexist within the individual, implying that the same individual may exhibit inertia and variety-seeking at different times depending on his/her choice history. Past research has not allowed for such within-consumer variability in these tendencies. The purpose of this study is to present a choice model that allows us to identify such "hybrid" behavior (i.e., a mixture of inertia and variety-seeking), and to dis...

  7. Incorporating Mental Representations in Discrete Choice Models of Travel Behaviour : Modelling Approach and Empirical Application

    NARCIS (Netherlands)

    T.A. Arentze (Theo); B.G.C. Dellaert (Benedict); C.G. Chorus (Casper)

    2013-01-01

    textabstractWe introduce an extension of the discrete choice model to take into account individuals’ mental representation of a choice problem. We argue that, especially in daily activity and travel choices, the activated needs of an individual have an influence on the benefits he or she pursues in

  8. Model parameter updating using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Treml, C. A. (Christine A.); Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  9. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    Full Text Available The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from a workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine in which we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having at hand one package that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  10. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  11. Model choice and sample size in item response theory analysis of aphasia tests.

    Science.gov (United States)

    Hula, William D; Fergadiotis, Gerasimos; Martin, Nadine

    2012-05-01

    The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from individuals with aphasia were analyzed, and the resulting item and person estimates were used to develop simulated test data for 3 sample size conditions. The simulated data were analyzed using a standard 1-parameter logistic (1-PL) model and 3 models that accounted for the influence of guessing: augmented 1-PL and 2-PL models and a 3-PL model. The model estimates obtained from the simulated data were compared to their known true values. With small and medium sample sizes, an augmented 1-PL model was the most accurate at recovering the known item and person parameters; however, no model performed well at any sample size. Follow-up simulations confirmed that the large influence of guessing and the extreme easiness of the items contributed substantially to the poor estimation of item difficulty and person ability. Incorporating the assumption of guessing into IRT models improves parameter estimation accuracy, even for small samples. However, caution should be exercised in interpreting scores obtained from easy 2-choice tests, regardless of whether IRT modeling or percentage correct scoring is used.
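
    The three response functions compared in the study can be written as special cases of one formula; the Python sketch below gives the textbook 3PL item response function, with the guessing floor c = 0.5 reflecting a 2-choice item (the exact parameterizations and constraints used in the paper are not reproduced, and the item values are assumed).

        # Textbook 3PL item response function; c = 0 gives the 2PL, fixing a gives a 1PL-type model.
        import numpy as np

        def p_correct(theta, a=1.0, b=0.0, c=0.0):
            """Probability of a correct response at ability theta."""
            return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

        theta = np.linspace(-3.0, 3.0, 7)
        # an easy two-choice item: guessing floor 0.5, low difficulty (values assumed)
        print(np.round(p_correct(theta, a=1.0, b=-1.0, c=0.5), 3))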

  12. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that...
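
    For context, the logistic transition function whose speed-of-transition parameter is at issue can be written in a couple of lines; the Python sketch below only illustrates why large values of that parameter are hard to tell apart (the curve becomes a near step function) and does not implement the reparametrization proposed in the paper.

        # Logistic transition function of the LSTAR model; gamma is the speed of transition.
        import numpy as np
        from scipy.special import expit

        def logistic_transition(s, gamma, c):
            """G(s; gamma, c) in [0, 1]: near 0 far below threshold c, near 1 far above it."""
            return expit(gamma * (s - c))

        s = np.linspace(-2.0, 2.0, 9)
        for gamma in (1.0, 10.0, 100.0):   # larger gamma -> sharper, nearly identical switches
            print(gamma, np.round(logistic_transition(s, gamma, 0.0), 3))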

  13. Application of rrm as behavior mode choice on modelling transportation

    Science.gov (United States)

    Surbakti, M. S.; Sadullah, A. F.

    2018-03-01

    Transportation mode selection, the first step in the transportation planning process, is probably one of its most important elements. The development of models that can explain passengers' preferences regarding their chosen mode of public transport will contribute to the improvement and development of existing public transport. Logit models have been widely used to build mode choice models in which the alternatives are different transport modes. Random Regret Minimization (RRM) is a theory of choice behavior developed for decisions under uncertainty. Since its development, the theory has been used in various disciplines, such as marketing, microeconomics, psychology, management, and transportation. This article aims to show the use of RRM in various mode choice settings, based on the results of studies conducted in both North Sumatera and western Java.

  14. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
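
    As a rough sketch of the computational burden described above (and not of the parameter cascading or Bayesian methods the article proposes), the toy example below estimates a diffusion coefficient in a one-dimensional heat equation by repeatedly solving the PDE numerically for candidate parameter values; all function names and numbers are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Toy version of the conventional approach discussed above: estimate the
        # diffusion coefficient D in u_t = D u_xx by repeatedly solving the PDE
        # numerically for candidate D values and minimizing the misfit to noisy data.

        def solve_heat(D, nx=50, nt=200, dt=1e-4):
            """Explicit finite-difference solution of u_t = D u_xx on [0, 1]."""
            x = np.linspace(0.0, 1.0, nx)
            dx = x[1] - x[0]
            u = np.sin(np.pi * x)                     # initial condition, zero at both ends
            for _ in range(nt):
                u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return u

        rng = np.random.default_rng(0)
        D_true = 0.8
        data = solve_heat(D_true) + rng.normal(scale=0.01, size=50)   # synthetic measurements

        def misfit(D):
            # Every evaluation requires a full numerical PDE solve -- the computational
            # load that motivates the parameter cascading and Bayesian methods above.
            return np.sum((solve_heat(D) - data) ** 2)

        result = minimize_scalar(misfit, bounds=(0.1, 2.0), method="bounded")
        print(f"estimated D = {result.x:.3f}, true D = {D_true}")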

  15. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady-state response when using lumped-parameter models is given. (au)

  16. Metro passengers’ route choice model and its application considering perceived transfer threshold

    Science.gov (United States)

    Jin, Fanglei; Zhang, Yongsheng; Liu, Shasha

    2017-01-01

    With the rapid development of the Metro network in China, the greatly increased route alternatives make passengers’ route choice behavior and passenger flow assignment more complicated, which presents challenges to the operation management. In this paper, a path sized logit model is adopted to analyze passengers’ route choice preferences considering such parameters as in-vehicle time, number of transfers, and transfer time. Moreover, the “perceived transfer threshold” is defined and included in the utility function to reflect the penalty difference caused by transfer time on passengers’ perceived utility under various numbers of transfers. Next, based on the revealed preference data collected in the Guangzhou Metro, the proposed model is calibrated. The appropriate perceived transfer threshold value and the route choice preferences are analyzed. Finally, the model is applied to a personalized route planning case to demonstrate the engineering practicability of route choice behavior analysis. The results show that the introduction of the perceived transfer threshold is helpful to improve the model’s explanatory abilities. In addition, personalized route planning based on route choice preferences can meet passengers’ diversified travel demands. PMID:28957376
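
    For reference, one standard form of the path size logit model adopted above is (the exact utility specification with the perceived transfer threshold is given in the paper and only sketched schematically here):

        P_n(i) = \frac{\exp(V_{in} + \beta_{PS} \ln PS_{in})}{\sum_{j \in C_n} \exp(V_{jn} + \beta_{PS} \ln PS_{jn})},
        \qquad PS_{in} = \sum_{a \in \Gamma_i} \frac{l_a}{L_i} \cdot \frac{1}{\sum_{j \in C_n} \delta_{aj}}

    with, schematically, V_{in} = \beta_t \,(\text{in-vehicle time}) + \beta_n \,(\text{number of transfers}) + \beta_{tr}\, g(\text{transfer time}; \tau), where g imposes a heavier penalty once the transfer time exceeds the perceived transfer threshold \tau, and the penalty is allowed to differ by number of transfers.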

  17. Understanding Predisposition in College Choice: Toward an Integrated Model of College Choice and Theory of Reasoned Action

    Science.gov (United States)

    Pitre, Paul E.; Johnson, Todd E.; Pitre, Charisse Cowan

    2006-01-01

    This article seeks to improve traditional models of college choice that draw from recruitment and enrollment management paradigms. In adopting a consumer approach to college choice, this article seeks to build upon consumer-related research, which centers on behavior and reasoning. More specifically, this article seeks to move inquiry beyond the…

  18. Predictors of science, technology, engineering, and mathematics choice options: A meta-analytic path analysis of the social-cognitive choice model by gender and race/ethnicity.

    Science.gov (United States)

    Lent, Robert W; Sheu, Hung-Bin; Miller, Matthew J; Cusick, Megan E; Penn, Lee T; Truong, Nancy N

    2018-01-01

    We tested the interest and choice portion of social-cognitive career theory (SCCT; Lent, Brown, & Hackett, 1994) in the context of science, technology, engineering, and mathematics (STEM) domains. Data from 143 studies (including 196 independent samples) conducted over a 30-year period (1983 through 2013) were subjected to meta-analytic path analyses. The interest/choice model was found to fit the data well over all samples as well as within samples composed primarily of women and men and racial/ethnic minority and majority persons. The model also accounted for large portions of the variance in interests and choice goals within each path analysis. Despite the general predictive utility of SCCT across gender and racial/ethnic groups, we did find that several parameter estimates differed by group. We present both the group similarities and differences and consider their implications for future research, intervention, and theory refinement. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Robustness of public choice models of voting behavior

    Directory of Open Access Journals (Sweden)

    Mihai UNGUREANU

    2013-05-01

    Full Text Available Modern economic modeling practice involves highly unrealistic assumptions. Since testing such models is not always an easy enterprise, researchers face the problem of determining whether a result depends (or not) on the unrealistic details of the model. A solution to this problem is conducting robustness analysis. In its classical form, robustness analysis is a non-empirical method of confirmation: it raises our trust in a given result by showing that it is implied by several different models. In this paper I argue that robustness analysis can also be thought of as a method applied after empirical failure. This form of robustness analysis involves assigning blame for the empirical failure to a certain part of the model. Starting from this notion of robustness, I analyze a case of empirical failure from public choice theory, the economic approach to politics. Using the fundamental methodological principles of neoclassical economics, the first model of voting behavior implied that almost no one would vote. This was clearly an empirical failure. Public choice scholars faced the problem of either restricting the domain of their discipline or giving up some of their neoclassical methodological features. The second solution was chosen and several different models of voting behavior were built. I will treat these models as a case for performing robustness analysis and I will determine which assumption of the original model is responsible for the empirical failure.

  20. CHAMP: Changepoint Detection Using Approximate Model Parameters

    Science.gov (United States)

    2014-06-01

    form (with independent emissions or otherwise), in which parameter estimates are available via means such as maximum likelihood fit, MCMC, or sample ... counterparts, including the ability to generate a full posterior distribution over changepoint locations and offering a natural way to incorporate prior ... sample consensus method. Our modifications also remove a significant restriction on model definition when detecting parameter changes within a single

  1. A formal model of theory choice in science

    OpenAIRE

    William A. Brock; Steven N. Durlauf

    1999-01-01

    Since the work of Thomas Kuhn, the role of social factors in the scientific enterprise has been a major concern in the philosophy and history of science. In particular, conformity effects among scientists have been used to question whether science naturally progresses over time. Using neoclassical economic reasoning, this paper develops a formal model of scientific theory choice which incorporates social factors. Our results demonstrate that the influence of social factors on scientific progr...

  2. Exploiting intrinsic fluctuations to identify model parameters.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.
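
    A toy illustration of the underlying idea (not the authors' objective function): in the immigration-death example mentioned above, the deterministic steady state fixes only the ratio of the two rates, whereas the time scale of the stochastic fluctuations depends on the death rate alone. The simulation below, with hypothetical rate values, makes that visible.

        import numpy as np

        # Immigration-death process: immigration rate k_in, per-molecule death rate k_out.
        # The deterministic steady state depends only on k_in / k_out, so the two rates are
        # not separately identifiable from averaged data; the relaxation time of the
        # stochastic fluctuations (about 1 / k_out) carries the missing information.

        def gillespie_immigration_death(k_in, k_out, t_end, x0, seed=0):
            rng = np.random.default_rng(seed)
            t, x = 0.0, x0
            times, states = [t], [x]
            while t < t_end:
                total_rate = k_in + k_out * x
                t += rng.exponential(1.0 / total_rate)
                x += 1 if rng.random() < k_in / total_rate else -1
                times.append(t)
                states.append(x)
            return np.array(times), np.array(states)

        # Two hypothetical parameter sets with the same steady-state mean (20) but
        # fluctuation time scales that differ by a factor of ten.
        for k_in, k_out in [(20.0, 1.0), (200.0, 10.0)]:
            t, x = gillespie_immigration_death(k_in, k_out, t_end=200.0, x0=20)
            grid = np.linspace(0.0, 200.0, 2000)
            xg = x[np.searchsorted(t, grid, side="right") - 1]       # sample on a regular grid
            xc = xg - xg.mean()
            acf = np.correlate(xc, xc, mode="full")[xc.size - 1:] / (xc.var() * xc.size)
            tau = grid[np.argmax(acf < np.exp(-1))]                  # crude autocorrelation time
            print(f"k_in={k_in:6.1f}  k_out={k_out:5.1f}  mean={xg.mean():6.1f}  tau~{tau:5.2f}")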

  3. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software for modeling biological networks, such as e.g. signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  4. Testing the role of the Barbero-Immirzi parameter and the choice of connection in loop quantum gravity

    Science.gov (United States)

    Achour, Jibril Ben; Geiller, Marc; Noui, Karim; Yu, Chao

    2015-05-01

    We study the role of the Barbero-Immirzi parameter γ and the choice of connection in the construction of (a symmetry-reduced version of) loop quantum gravity. We start with the four-dimensional Lorentzian Holst action that we reduce to three dimensions in a way that preserves the presence of γ. In the time gauge, the phase space of the resulting three-dimensional theory mimics exactly that of the four-dimensional one. Its quantization can be performed, and on the kinematical Hilbert space spanned by SU(2) spin network states the spectra of geometric operators are discrete and γ-dependent. However, because of the three-dimensional nature of the theory, its SU(2) Ashtekar-Barbero Hamiltonian constraint can be traded for the flatness constraint of an sl(2, C) connection, and we show that this latter has to satisfy a linear simplicity-like condition analogous to the one used in the construction of spin foam models. The physically relevant solution to this constraint singles out the noncompact subgroup SU(1, 1), which in turn leads to the disappearance of the Barbero-Immirzi parameter and to a continuous length spectrum, in agreement with what is expected from Lorentzian three-dimensional gravity.

  5. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
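
    A hedged sketch of the kind of likelihood that leads to such expressions (the paper's derivation may differ in detail): with experimental values E_i, theoretical values T_i(p), experimental uncertainties \sigma_i and a model error \sigma_{th} treated as an additional parameter,

        L(p, \sigma_{th}) = \prod_i \left[ 2\pi (\sigma_i^2 + \sigma_{th}^2) \right]^{-1/2} \exp\!\left[ -\frac{(E_i - T_i(p))^2}{2(\sigma_i^2 + \sigma_{th}^2)} \right]

    is maximized jointly over p and \sigma_{th}; when the \sigma_i are negligible, the maximum-likelihood model error reduces to \sigma_{th}^2 = \frac{1}{n} \sum_i (E_i - T_i(\hat p))^2, i.e. the mean squared deviation of theory from experiment at the best-fit parameters.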

  6. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subsequently...

  7. Setting Parameters for Biological Models With ANIMO

    Directory of Open Access Journals (Sweden)

    Stefano Schivo

    2014-03-01

    Full Text Available ANIMO (Analysis of Networks with Interactive MOdeling) is software for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions between biological entities in the form of a graph, while the parameters determine the speed at which such interactions occur. When a mismatch is observed between the behavior of an ANIMO model and experimental data, we want to update the model so that it explains the new data. In general, the topology of a model can be expanded with new (known or hypothetical) nodes, enabling it to match experimental data. However, the unrestrained addition of new parts to a model causes two problems: models can become too complex too fast, to the point of being intractable, and too many parts marked as "hypothetical" or "not known" make a model unrealistic. Even though changing the topology is normally the easier task, these problems push us to try a better parameter fit as a first step, and to resort to modifying the model topology only as a last resort. In this paper we show the support added in ANIMO to ease the task of expanding the knowledge on biological networks, concentrating in particular on the parameter settings.

  8. Simple model for multiple-choice collective decision making.

    Science.gov (United States)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  9. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like the filter error method, H-infinity and Artificial Neural Networks are finding more and more...

  10. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.]

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  11. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  12. Models and parameters for environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base

  13. Choice certainty in Discrete Choice Experiments

    DEFF Research Database (Denmark)

    Uggeldahl, Kennet Christian; Jacobsen, Catrine; Lundhede, Thomas

    2016-01-01

    In this study, we conduct a Discrete Choice Experiment (DCE) using eye tracking technology to investigate if eye movements during the completion of choice sets reveal information about respondents’ choice certainty. We hypothesise that the number of times that respondents shift their visual attention between the alternatives in a choice set reflects their stated choice certainty. Based on one of the largest samples of eye tracking data in a DCE to date, we find evidence in favor of our hypothesis. We also link eye tracking observations to model-based choice certainty through parameterization of the scale function in a random parameters logit model. We find that choices characterized by more frequent gaze shifting do indeed exhibit a higher degree of error variance; however, this effect is insignificant once response time is controlled for. Overall, findings suggest that eye tracking can provide...

  14. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions, the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
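
    For orientation, the standard definitions behind these concepts are as follows (the paper's formal definitions may differ in detail): for an objective function F and a modeling parameter p, the elasticity and the first-order sensitivity prediction are

        e_p = \frac{p}{F(p)} \frac{\partial F}{\partial p}, \qquad F(p + \Delta p) \approx F(p) + \frac{\partial F}{\partial p}\, \Delta p

    so the elasticity measures the relative change of the optimization result per relative change in p, while the sensitivity expansion predicts the result for a modified parameter value without re-optimization, within the interval where the linearization remains valid.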

  15. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  16. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  17. Parameter choices for a muon recirculating linear accelerator from 5 to 63 GeV

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J. S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2014-06-19

    A recirculating linear accelerator (RLA) has been proposed to accelerate muons from 5 to 63 GeV for a muon collider. It should be usable both for a Higgs factory and as a stage for a higher energy collider. First, the constraints due to beam loading are computed. Next, an expression for the longitudinal emittance growth, to lowest order in the longitudinal emittance, is worked out. A simplified model of the arcs is then found, together with an approximate expression for the dependence of time of flight on energy in those arcs. Finally, these results are used to estimate the parameters required for the RLA arcs and the linac phase.

  18. Closing the gap between behavior and models in route choice: The role of spatiotemporal constraints and latent traits in choice set formation

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    A considerable gap exists between the behavioral paradigm of choice set formation in route choice and its representation in route choice modeling. While travelers form their viable choice set by retaining routes that satisfy spatiotemporal constraints, existing route generation techniques do not ...... spatiotemporal constraints and latent traits in route choice models, and (iii) the linkage between spatiotemporal constraints and time saving, spatial and mnemonic abilities....

  19. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
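
    The snippet below sketches the generic workflow such a toolbox automates, using plain NumPy rather than the SPOT API itself; the model, priors and parameter names are made up for illustration. The idea is simply: sample parameter sets from prior distributions, run the model for each set, and rank the sets with a likelihood function such as the Nash-Sutcliffe efficiency.

        import numpy as np

        # Generic parameter sampling workflow (not the SPOT API): draw parameter sets from
        # prior distributions, run the model for each set, and rank them by an objective
        # function such as the Nash-Sutcliffe efficiency. Model and priors are hypothetical.

        def toy_model(params, x):
            a, b = params
            return a * np.exp(-b * x)                 # stand-in for an environmental model

        def nash_sutcliffe(sim, obs):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(42)
        x = np.linspace(0.0, 10.0, 50)
        obs = toy_model((2.0, 0.3), x) + rng.normal(scale=0.05, size=x.size)

        # Plain Monte Carlo sampling from uniform priors; a toolbox would offer LHS,
        # MCMC, SCE-UA and other samplers behind the same interface.
        samples = rng.uniform(low=[0.5, 0.01], high=[5.0, 1.0], size=(5000, 2))
        scores = np.array([nash_sutcliffe(toy_model(p, x), obs) for p in samples])

        best = samples[np.argmax(scores)]
        print(f"best NSE = {scores.max():.3f} at a = {best[0]:.2f}, b = {best[1]:.2f}")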

  20. Advances in Modelling, System Identification and Parameter ...

    Indian Academy of Sciences (India)

    models determined from flight test data by using parameter estimation methods find extensive use in design/modification of flight control systems, high fidelity flight simulators and evaluation of handling qualities of aircraft and rotorcraft. R K Mehra et al present new algorithms and results for flutter tests and adaptive notching ...

  1. A lumped parameter model of plasma focus

    International Nuclear Information System (INIS)

    Gonzalez, Jose H.; Florido, Pablo C.; Bruzzone, H.; Clausse, Alejandro

    1999-01-01

    A lumped-parameter model to estimate the neutron emission of a plasma focus (PF) device is developed. The dynamics of the current sheet are calculated using a snowplow model, and the neutron production is calculated with the thermal fusion cross section for a deuterium filling gas. The results are compared, as a function of the filling pressure, with experimental measurements of a 3.68 kJ Mather-type PF. (author)

  2. One parameter model potential for noble metals

    International Nuclear Information System (INIS)

    Idrees, M.; Khwaja, F.A.; Razmi, M.S.K.

    1981-08-01

    A phenomenological one parameter model potential which includes s-d hybridization and core-core exchange contributions is proposed for noble metals. A number of interesting properties like liquid metal resistivities, band gaps, thermoelectric powers and ion-ion interaction potentials are calculated for Cu, Ag and Au. The results obtained are in better agreement with experiment than the ones predicted by the other model potentials in the literature. (author)

  3. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  4. Human Nonindependent Mate Choice: Is Model Female Attractiveness Everything?

    Directory of Open Access Journals (Sweden)

    Antonios Vakirtzis

    2012-04-01

    Full Text Available Following two decades of research on non-human animals, there has recently been increased interest in human nonindependent mate choice, namely the ways in which choosing women incorporate information about a man's past or present romantic partners (‘model females’ into their own assessment of the male. Experimental studies using static facial images have generally found that men receive higher desirability ratings from female raters when presented with attractive (compared to unattractive model females. This phenomenon has a straightforward evolutionary explanation: the fact that female mate value is more dependent on physical attractiveness compared to male mate value. Furthermore, due to assortative mating for attractiveness, men who are paired with attractive women are more likely to be of high mate value themselves. Here, we also examine the possible relevance of model female cues other than attractiveness (personality and behavioral traits by presenting video recordings of model females to a set of female raters. The results confirm that the model female's attractiveness is the primary cue. Contrary to some earlier findings in the human and nonhuman literature, we found no evidence that female raters prefer partners of slightly older model females. We conclude by suggesting some promising variations on the present experimental design.

  5. Mode choice model for vulnerable motorcyclists in Malaysia.

    Science.gov (United States)

    Ibrahim Sheikh, A K; Radin Umar, R S; Habshah, M; Kassim, H; Stevenson, Mark; Ahmed, Hariza

    2006-06-01

    In developing countries, motorcycle use has grown in popularity in recent decades. Commensurate with this growth is the increase in deaths and casualties among motorcyclists in these countries. One of the strategic programs to minimize this problem is to reduce motorcyclists' exposure by shifting them into safer modes of transport. This study aims to explore the differences in the characteristics of bus and motorcycle users. It identifies the factors contributing to their choice of transport mode and estimates the probability that motorcyclists might change their travel mode to a safer alternative; namely, bus travel. In this article, a survey of 535 motorcycle and bus users was conducted in seven districts of Selangor state, Malaysia. A binary logit model was developed for the two alternative modes, bus and motorcycle. It was found that travel time, travel cost, gender, age, and income level are significant in influencing motorcyclists' mode choice behavior. The probability of motorcycle riders shifting to public transport was also examined based on a scenario of a reduction in bus travel time and travel cost. Reduction of total travel time for the bus mode emerges as the most important element in a program aimed at attracting motorcyclists towards public transport and away from the motorcycle mode.
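
    The binary logit model referred to above has the standard form (the coefficient names below are generic placeholders, not the study's estimates):

        P(\text{bus}) = \frac{1}{1 + \exp[-(V_{bus} - V_{mc})]}, \qquad V_m = \beta_{0m} + \beta_1\, t_m + \beta_2\, c_m + \beta_3\, \text{gender} + \beta_4\, \text{age} + \beta_5\, \text{income}

    so the probability of choosing the bus rises with the utility difference between bus and motorcycle, which is why reductions in bus travel time t and travel cost c translate directly into a higher predicted switching probability.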

  6. Market Assessment For Traveler Services, A Choice Modeling Study Phase Iii, Fast-Trac Deliverable, #16B: Final Choice Modeling Report

    Science.gov (United States)

    1999-02-12

    FAST-TRAC: This report describes the choice model study of the FAST-TRAC (Faster and Safer Travel through Traffic Routing and Advanced Controls) operational test in southeast Michigan. Choice modeling is a stated-preference approach in which resp...

  7. Analisis Perbandingan Parameter Transformasi Antar Itrf Hasil Hitungan Kuadrat Terkecil Model Helmert 14-parameter Dengan Parameter Standar Iers

    OpenAIRE

    Fadly, Romi; Dewi, Citra

    2014-01-01

    This research aims to compare the 14 transformation parameters between ITRF realizations, computed by least squares using the Helmert 14-parameter model, with the IERS standard parameters. The transformation parameters are calculated from the coordinates and velocities of ITRF05 to ITRF00 at epoch 2000.00, and from ITRF08 to ITRF05 at epoch 2005.00, for the respective transformation models. The transformation parameters are compared to the IERS standard parameters, and then the significance of the d... is tested

  8. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  9. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

    This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is being used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  10. Suitable parameter choice on quantitative morphology of A549 cell in epithelial–mesenchymal transition

    Science.gov (United States)

    Ren, Zhou-Xin; Yu, Hai-Bin; Li, Jian-Sheng; Shen, Jun-Ling; Du, Wen-Sen

    2015-01-01

    Evaluation of morphological changes in cells is an integral part of studies on the epithelial to mesenchymal transition (EMT); however, only a few papers have reported changes in quantitative parameters, and no article has compared different parameters in search of better ones. The purpose of this study was to investigate suitable parameters for the quantitative evaluation of EMT morphological changes. The A549 human lung adenocarcinoma cell line was selected for the study. Some cells were stimulated with transforming growth factor-β1 (TGF-β1) to induce EMT, while other cells served as controls without TGF-β1 stimulation. Subsequently, cells were examined under a phase contrast microscope and three arbitrary fields were captured and saved on a personal computer. Using the tools of Photoshop software, some cells in an image were selected, segmented out and converted to a unique hue, and the remaining part of the image was shifted to another unique hue. The cells were then measured on 29 morphological parameters with Image Pro Plus software. Each parameter was compared statistically between cells with and without TGF-β1 stimulation, and nine parameters were significantly different between them. The receiver operating characteristic (ROC) curve of each parameter was computed with SPSS software, and an F-test was used in Excel to compare two areas under the curves (AUCs). Among them, roundness and radius ratio had the largest AUCs, which were significantly higher than those of the other parameters. The results provide a new method for the quantitative assessment of cell morphology during EMT and identify two parameters, roundness and radius ratio, as suitable for quantification. PMID:26182364
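
    The following sketch reproduces the type of AUC comparison described above on synthetic numbers (the real study used measurements from Image Pro Plus; the values and group separations below are invented):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Synthetic illustration of the AUC comparison (labels: 1 = TGF-beta1 stimulated,
        # 0 = control; the score distributions below are invented).
        rng = np.random.default_rng(1)
        n = 100
        labels = np.repeat([0, 1], n)

        # Hypothetical measurements: "roundness" separates the groups well (stimulated,
        # mesenchymal-like cells are less round), a weak parameter barely separates them.
        roundness  = np.concatenate([rng.normal(0.85, 0.05, n), rng.normal(0.60, 0.10, n)])
        weak_param = np.concatenate([rng.normal(1.00, 0.30, n), rng.normal(1.10, 0.30, n)])

        # Lower roundness indicates the positive (stimulated) class, so negate the score.
        print("AUC roundness :", round(roc_auc_score(labels, -roundness), 3))
        print("AUC weak param:", round(roc_auc_score(labels, weak_param), 3))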

  11. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across the attribute's range and with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed to sum petroleum use and greenhouse gas emissions. These include the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, limits on the rate at which new vehicle options are introduced, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches the data in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use, managing the inputs, simulation, and results.
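
    A minimal sketch of the logit-style share calculation that such a model builds on follows; the attributes, values and weights below are invented and are not ADOPT's calibrated, income-dependent importance functions.

        import numpy as np

        # Illustrative multinomial-logit share calculation (attribute values and weights
        # are hypothetical; ADOPT's calibrated importance functions differ).
        vehicles   = ["compact ICE", "hybrid", "battery electric"]
        price      = np.array([22_000.0, 27_000.0, 35_000.0])     # USD
        fuel_cost  = np.array([0.12, 0.07, 0.04])                 # USD per mile
        accel_0_60 = np.array([9.5, 8.5, 6.5])                    # seconds

        beta_price, beta_fuel, beta_accel = -1.0e-4, -20.0, -0.3  # hypothetical weights

        utility = beta_price * price + beta_fuel * fuel_cost + beta_accel * accel_0_60
        shares = np.exp(utility - utility.max())                  # softmax, numerically stable
        shares /= shares.sum()

        for name, share in zip(vehicles, shares):
            print(f"{name:18s} predicted share {share:5.1%}")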

  12. A comparison of methods for representing random taste heterogeneity in discrete choice models

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Hess, Stephane

    2009-01-01

    This paper reports the findings of a systematic study using Monte Carlo experiments and a real dataset aimed at comparing the performance of various ways of specifying random taste heterogeneity in a discrete choice model. Specifically, the analysis compares the performance of two recent advanced...... distributions. Both approaches allow the researcher to increase the number of parameters as desired. The paper provides a range of evidence on the ability of the various approaches to recover various distributions from data. The two advanced approaches are comparable in terms of the likelihoods achieved...

  13. Brain perfusion heterogeneity measurement based on Random Walk algorithm: choice and influence of inner parameters.

    Science.gov (United States)

    Modzelewski, Romain; Janvresse, Elise; de la Rue, Thierry; Vera, Pierre

    2010-06-01

    A Random Walk (RW) algorithm was designed to quantify the level of diffuse heterogeneous perfusion in brain SPECT images in patients suffering from systemic brain disease or from drug-induced therapy. The goal of the present paper is to understand the behavior of the RW method on different kinds of images (extrinsic parameters) and also to understand how to choose the right parameters of the RW (intrinsic parameters) depending on the image characteristics (i.e. SPECT images). "Extrinsic parameters" are related to the image characteristics (level/size of defect and diffuse heterogeneity), and "intrinsic" parameters are related to the parameters of the method (number (N(rw)) and length of walk (L(rw)), temperature (T) and slowing parameter (S)). Two successive studies were conducted to test the influence of these parameters on the RW result. In the first study, calibrated checkerboard images are used to test the influence of "extrinsic parameters" (i.e. image characteristics) on the RW result (R-value). The R-value was tested as a function of (i) the size of black & white (B&W) squares, simulating the size of a cortical defect, (ii) the intensity level gaps between the B&W squares, simulating the intensity of the cortical defect, and (iii) the intensity (=variance) of noise, simulating the diffuse heterogeneity. The second study was constructed with simulated representative brain SPECT images, to test the "intrinsic" parameters. The R-value was tested regarding the influence of four parameters: S, T, N(rw) and L(rw). The third study was constructed to see whether the classification of real brain SPECT images by diffuse heterogeneity is the same when made by senior clinicians as when made by the RW algorithm. Study 1: The RW was strongly influenced by all the characteristics of the images. Moreover, these characteristics interact with each other. The RW is influenced most by diffuse heterogeneity, then by intensity, and finally by the size of a defect. Study 2: N(rw) and L(rw) values of

  14. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for supporting economic development and poverty reduction in many countries has increased researchers’ attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique by modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA forecasts used for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
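
    A minimal sketch of a time-varying-parameter regression estimated with a Kalman filter follows, run on simulated data; the real application used KOR as the dependent variable and WON, INFKR and INFID as predictors, and the noise variances here are assumed rather than estimated.

        import numpy as np

        # Time-varying-parameter regression estimated with a Kalman filter:
        #   y_t = x_t' beta_t + e_t,   beta_t = beta_{t-1} + w_t  (random-walk coefficients).
        # Simulated data stand in for the monthly arrivals series and predictors.

        rng = np.random.default_rng(3)
        T, k = 72, 3                                    # 6 years of monthly data, 3 regressors
        X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
        true_beta = 1.0 + np.cumsum(rng.normal(scale=0.05, size=(T, k)), axis=0)
        y = np.einsum("tk,tk->t", X, true_beta) + rng.normal(scale=0.2, size=T)

        sigma_e2, sigma_w2 = 0.2 ** 2, 0.05 ** 2        # assumed noise variances
        beta = np.zeros(k)                              # state estimate
        P = 10.0 * np.eye(k)                            # diffuse initial state covariance
        filtered = np.empty((T, k))

        for t in range(T):
            P = P + sigma_w2 * np.eye(k)                # predict: random-walk transition
            x = X[t]
            f = x @ P @ x + sigma_e2                    # one-step forecast variance
            gain = P @ x / f                            # Kalman gain
            beta = beta + gain * (y[t] - x @ beta)      # update with observation y_t
            P = P - np.outer(gain, x @ P)
            filtered[t] = beta

        print("filtered final coefficients:", np.round(filtered[-1], 2))
        print("true final coefficients:    ", np.round(true_beta[-1], 2))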

  15. Modeling 2-alternative forced-choice tasks: Accounting for both magnitude and difference effects.

    Science.gov (United States)

    Ratcliff, Roger; Voskuilen, Chelsea; Teodorescu, Andrei

    2018-03-01

    We present a model-based analysis of two-alternative forced-choice tasks in which two stimuli are presented side by side and subjects must make a comparative judgment (e.g., which stimulus is brighter). Stimuli can vary on two dimensions, the difference in strength of the two stimuli and the magnitude of each stimulus. Differences between the two stimuli produce typical RT and accuracy effects (i.e., subjects respond more quickly and more accurately when there is a larger difference between the two). However, the overall magnitude of the pair of stimuli also affects RT and accuracy. In the more common two-choice task, a single stimulus is presented and the stimulus varies on only one dimension. In this two-stimulus task, if the standard diffusion decision model is fit to the data with only drift rate (evidence accumulation rate) differing among conditions, the model cannot fit the data. However, if either of one of two variability parameters is allowed to change with stimulus magnitude, the model can fit the data. This results in two models that are extremely constrained with about one tenth of the number of parameters than there are data points while at the same time the models account for accuracy and correct and error RT distributions. While both of these versions of the diffusion model can account for the observed data, the model that allows across-trial variability in drift to vary might be preferred for theoretical reasons. The diffusion model fits are compared to the leaky competing accumulator model which did not perform as well. Copyright © 2018 Elsevier Inc. All rights reserved.
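
    A minimal simulation of the standard two-boundary diffusion process underlying the analysis above is shown below (a sketch only; the fitted model additionally includes across-trial variability parameters and the magnitude-dependent terms discussed):

        import numpy as np

        # Euler-Maruyama simulation of a two-boundary drift-diffusion process: evidence
        # accumulates with rate `drift` until it reaches +boundary (correct) or -boundary
        # (error); larger stimulus differences map onto larger drift rates.

        def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, t_max=5.0,
                         n_trials=500, seed=0):
            rng = np.random.default_rng(seed)
            rts, correct = np.empty(n_trials), np.empty(n_trials)
            for i in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < boundary and t < t_max:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts[i] = t
                correct[i] = 1.0 if x >= boundary else 0.0
            return rts, correct

        for drift in (0.5, 1.5):
            rts, correct = simulate_ddm(drift)
            print(f"drift = {drift:.1f}   accuracy = {correct.mean():.2f}   mean RT = {rts.mean():.2f} s")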

  16. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    Science.gov (United States)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that the perception of attributes is characterized by benchmarks called “reference points”, and relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. It was also shown that respondents seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare price they perceive for the travel cost.

  17. Lumped Parameters Model of a Crescent Pump

    Directory of Open Access Journals (Sweden)

    Massimo Rundo

    2016-10-01

    Full Text Available This paper presents the lumped parameters model of an internal gear crescent pump with relief valve, able to estimate the steady-state flow-pressure characteristic and the pressure ripple. The approach is based on the identification of three variable control volumes regardless of the number of gear teeth. The model has been implemented in the commercial environment LMS Amesim with the development of customized components. Specific attention has been paid to the leakage passageways, some of them affected by the deformation of the cover plate under the action of the delivery pressure. The paper reports the finite element method analysis of the cover for the evaluation of the deflection and the validation through a contactless displacement transducer. Another aspect described in this study is represented by the computational fluid dynamics analysis of the relief valve, whose results have been used for tuning the lumped parameters model. Finally, the validation of the entire model of the pump is presented in terms of steady-state flow rate and of pressure oscillations.

  18. Choice of scans and optimization of instrument parameters in neutron diffraction

    International Nuclear Information System (INIS)

    Sequeira, A.

    1975-01-01

    With neutron intensities available at medium flux reactors, the study of crystal and molecular structures is now restricted to molecules having less than about 50 atoms per asymmetric unit. This limit could perhaps be extended to structures having up to about 100 atoms in the asymmetric unit if all the experimental parameters associated with the neutron diffractometer could be ideally optimized. In view of the fact that most of the structures of current biological interest fall in this category, such as the mono-, di-, and oligonucleotides, as well as small peptides, it is important that all the instrument parameters are chosen so as to stretch the power of a given neutron source to its limit. Some ways of optimizing the various instrument parameters in order to obtain the maximum neutron intensity at a given resolution are discussed. The small effects of vertical divergences on the resolution are ignored.

  19. Agent-based modelling of consumer energy choices

    Science.gov (United States)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers -- such as individual households -- using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.

  20. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    Science.gov (United States)

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  1. A Conditional Curie-Weiss Model for Stylized Multi-group Binary Choice with Social Interaction

    Science.gov (United States)

    Opoku, Alex Akwasi; Edusei, Kwame Owusu; Ansah, Richard Kwame

    2018-04-01

    This paper proposes a conditional Curie-Weiss model as a model for decision making in a stylized society made up of binary decision makers facing a particular dichotomous choice between two options. Following Brock and Durlauf (Discrete choice with social interaction I: theory, 1995), we set up both socio-economic and statistical mechanical models for the choice problem. We point out when the socio-economic and statistical mechanical models give rise to the same self-consistent equilibrium mean choice level(s). The phase diagram of the associated statistical mechanical model and its socio-economic implications are discussed.
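
    For orientation, the classical Curie-Weiss model that this work conditions and extends leads, for choices coded as ±1, to the familiar mean-field self-consistency equation (the conditional, multi-group version in the paper generalizes this with group-specific interaction and field terms):

        m = \tanh\!\big( \beta (J m + h) \big)

    where m is the equilibrium mean choice level, J the strength of the social interaction, h the private incentive, and \beta an inverse measure of the randomness in individual decisions; multiple solutions of this equation correspond to multiple self-consistent equilibria.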

  2. Exploiting residual information in the parameter choice for discrete ill-posed problems

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Kilmer, Misha E.; Kjeldsen, Rikke Høj

    2006-01-01

    Most algorithms for choosing the regularization parameter in a discrete ill-posed problem are based on the norm of the residual vector. In this work we propose a different approach, where we seek to use all the information available in the residual vector. We present important relations between...
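
    For contrast, the classical residual-norm rule that residual-based methods typically start from is Morozov's discrepancy principle: choose the regularization parameter so that the residual norm matches the noise level. The sketch below applies it to a generic Tikhonov-regularized smoothing problem; the test kernel and noise level are invented, and this is not the authors' proposed method.

        import numpy as np

        # Classical residual-norm rule (Morozov's discrepancy principle) for a Tikhonov-
        # regularized test problem: pick the regularization parameter lambda for which
        # ||A x_lambda - b|| is closest to the expected noise level.

        rng = np.random.default_rng(7)
        n = 64
        s = np.linspace(0.0, 1.0, n)
        A = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2) / n    # smoothing (ill-conditioned) kernel
        x_true = np.sin(2.0 * np.pi * s)
        noise_level = 1e-3
        b = A @ x_true + rng.normal(scale=noise_level, size=n)

        def tikhonov_solution(lam):
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        target = noise_level * np.sqrt(n)                         # expected residual norm
        lams = np.logspace(-10, 0, 11)
        residuals = np.array([np.linalg.norm(A @ tikhonov_solution(lam) - b) for lam in lams])
        best_idx = np.argmin(np.abs(residuals - target))
        print(f"discrepancy-principle choice: lambda = {lams[best_idx]:.1e} "
              f"(residual {residuals[best_idx]:.2e}, target {target:.2e})")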

  3. Empirical analyses of a choice model that captures ordering among attribute values

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  4. Joint Residence-Workplace Location Choice Model Based on Household Decision Behavior

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-01-01

    Full Text Available Residence location and workplace are the two most important urban land-use types, and there exist strong interdependences between them. Existing researches often assume that one choice dimension is correlated to the other. Using the mixed logit framework, three groups of choice models are developed to illustrate such choice dependencies. First, for all households, this paper presents a basic methodology of the residence location and workplace choice without decision sequence based on the assumption that the two choice behaviors are independent of each other. Second, the paper clusters all households into two groups, choosing residence or workplace first, and formulates the residence location and workplace choice models under the constraint of decision sequence. Third, this paper combines the residence location and workplace together as the choice alternative and puts forward the joint choice model. A questionnaire survey is implemented in Beijing city to collect the data of 1994 households. Estimation results indicate that the joint choice model fits the data significantly better, and the elasticity effects analyses show that the joint choice model reflects the influences of relevant factors to the choice probability well and leads to the job-housing balance.

  5. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
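
    As a concrete illustration of the "memory" discussed in this description, the short Python sketch below simulates fractional Brownian motion exactly on a small grid via a Cholesky factorisation of its covariance function cov(B_H(s), B_H(t)) = 0.5(s^{2H} + t^{2H} - |t - s|^{2H}) and checks that the increments are positively correlated when the Hurst index H exceeds 1/2. The Hurst value, grid size and seed are illustrative choices, not taken from the book.

    import numpy as np

    def fbm_path(n=500, H=0.7, T=1.0, seed=0):
        """Exact simulation of fractional Brownian motion on a regular grid
        via Cholesky factorisation of the covariance matrix (O(n^3), fine for small n)."""
        rng = np.random.default_rng(seed)
        t = np.linspace(T / n, T, n)                     # exclude t=0, where the variance is zero
        s, u = np.meshgrid(t, t)
        cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # small jitter for numerical stability
        path = L @ rng.standard_normal(n)
        return np.concatenate(([0.0], path))             # B_H(0) = 0

    x = fbm_path(H=0.7)
    incr = np.diff(x)
    # For H > 1/2 the increments are positively correlated ("long memory"); H = 1/2 recovers Brownian motion.
    print(np.corrcoef(incr[:-1], incr[1:])[0, 1])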

  6. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  7. Comparing species decisions in a dichotomous choice task: adjusting task parameters improves performance in monkeys.

    Science.gov (United States)

    Prétôt, Laurent; Bshary, Redouan; Brosnan, Sarah F

    2016-07-01

    In comparative psychology, both similarities and differences among species are studied to better understand the evolution of their behavior. To do so, we first test species in tasks using similar procedures, but if differences are found, it is important to determine their underlying cause(s) (e.g., are they due to ecology, cognitive ability, an artifact of the study, and/or some other factor?). In our previous work, primates performed unexpectedly poorly on an apparently simple two-choice discrimination task based on the natural behavior of cleaner fish, while the fish did quite well. In this task, if the subjects first chose one of the options (ephemeral) they received both food items, but if they chose the other (permanent) option first, the ephemeral option disappeared. Here, we test several proposed explanations for primates' relatively poorer performance. In Study 1, we used a computerized paradigm that differed from the previous test by removing interaction with human experimenters, which may be distracting, and providing a more standardized testing environment. In Study 2, we adapted the computerized paradigm from Study 1 to be more relevant to primate ecology. Monkeys' overall performance in these adapted tasks matched the performance of the fish in the original study, showing that with the appropriate modifications they can solve the task. We discuss these results in light of comparative research, which requires balancing procedural similarity with considerations of how the details of the task or the context may influence how different species perceive and solve tasks differently.

  8. Moose models with vanishing S parameter

    International Nuclear Information System (INIS)

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2)_L and U(1)_Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall-Sundrum metric.

  9. The Effects of Land Use Patterns on Tour Type Choice. The Application of a Hybrid Choice Model

    DEFF Research Database (Denmark)

    de Abreu e Silva, João; Sottile, Eleonora; Cherchi, Elisabetta

    2014-01-01

    The relations between travel behavior and land use patterns have been the object of intensive research in the last two decades. Due to their immediate policy implications, mode choice and vehicle miles of travel (VMT) have been the main focus of attention. Other relevant dimensions, like trip...... of the latent propensity to travel in the discrete choice among types of tours. This model is applied to a travel diary of workers collected in the Lisbon Metropolitan Area in 2009. Different model specifications were built, testing the inclusion of purportedly built land use factors, which have the advantage...... to travel. Workers who reside in more central, mixed and traditional urban spaces tend to have a higher propensity to travel. Workers who live in more diverse areas have a higher probability of engaging in more complex work related tours. Working in more suburban areas reduces the probability of engaging...

  10. A comprehensive dwelling unit choice model accommodating psychological constructs within a search strategy for consideration set formation.

    Science.gov (United States)

    2015-12-01

    This study adopts a dwelling unit level of analysis and considers a probabilistic choice set generation approach for residential choice modeling. In doing so, we accommodate the fact that housing choices involve both characteristics of the dwelling u...

  11. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

    (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called "Worst Case Traffic" that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User...... essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper presents approximate methods and discusses their applicability. We then discuss the problem of obtaining traffic characteristic values for a connection that has crossed a series of switching nodes. This problem is particularly relevant for the traffic contract components corresponding to ICIs...

  12. Ridge Regression in Prediction Problems: Automatic Choice of the Ridge Parameter

    OpenAIRE

    Cule, Erika; De Iorio, Maria

    2013-01-01

    To date, numerous genetic variants have been identified as associated with diverse phenotypic traits. However, identified associations generally explain only a small proportion of trait heritability and the predictive power of models incorporating only known-associated variants has been small. Multiple regression is a popular framework in which to consider the joint effect of many genetic variants simultaneously. Ordinary multiple regression is seldom appropriate in the context of genetic dat...
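
    The abstract is truncated before the authors' automatic rule for the ridge parameter is described, so the Python sketch below shows only the generic baseline such proposals are usually compared against: selecting the penalty by cross-validation over a grid with scikit-learn's RidgeCV. The simulated genotype matrix, effect sizes and penalty grid are hypothetical.

    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(1)
    n, p = 200, 1000                                        # more predictors than samples, as in genetic data
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # toy SNP genotypes coded 0/1/2
    beta = np.zeros(p)
    beta[:20] = rng.normal(0, 0.5, 20)                      # a few truly associated variants
    y = X @ beta + rng.normal(0, 1.0, n)

    # Cross-validated choice of the ridge parameter over a log-spaced grid.
    model = RidgeCV(alphas=np.logspace(-2, 4, 50), cv=5).fit(X, y)
    print("selected ridge parameter:", model.alpha_)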

  13. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

    Dengue is a global health problem and of concern to travelers and deploying military personnel with development and licensure of an effective tetravalent dengue vaccine a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed.

  14. Dimensionality reduction of RKHS model parameters.

    Science.gov (United States)

    Taouali, Okba; Elaissi, Ilyes; Messaoud, Hassani

    2015-07-01

    This paper proposes a new method to reduce the number of parameters of models developed in the Reproducing Kernel Hilbert Space (RKHS). In fact, this number is equal to the number of observations used in the learning phase, which is assumed to be high. The proposed method, entitled Reduced Kernel Partial Least Squares (RKPLS), consists in approximating the retained latent components determined using the Kernel Partial Least Squares (KPLS) method by their closest observation vectors. The paper presents the design of the proposed RKPLS method and a comparative study with the Support Vector Regression (SVR) technique. The proposed method is applied to identify a nonlinear Process Trainer PT326, a physical process available in our laboratory; moreover, as a thermal process with a large time response, it allows effective observations to be recorded easily, which contributes to model identification. Compared to the SVR technique, the results from the proposed RKPLS method are satisfactory.

  15. Assessing the value of museums with a combined discrete choice/ count data model

    NARCIS (Netherlands)

    Rouwendal, J.; Boter, J.

    2009-01-01

    This article assesses the value of Dutch museums using information about destination choice as well as about the number of trips undertaken by an actor. Destination choice is analysed by means of a mixed logit model, and a count data model is used to explain trip generation. We use a

  16. Emerging Australian Education Markets: A Discrete Choice Model of Taiwanese and Indonesian Student Intended Study Destination.

    Science.gov (United States)

    Kemp, Steven; Madden, Gary; Simpson, Michael

    1998-01-01

    Isolates factors influencing choice of Australia as a preferred destination for international students in emerging regional markets. Uses data obtained from a survey of students in Indonesia and Taiwan to estimate a U.S./Australia and rest-of-world/Australia discrete destination-choice model. This model identifies key factors determining country…

  17. MODELLING CONSUMER CHOICE IN THE MARKET SWITCHBOARD EQUIPMENT USING IBM SPSS STATISTICS

    Directory of Open Access Journals (Sweden)

    Sergey V. Mkhitaryan

    2014-01-01

    Modelling consumer choice in the market for switchboard equipment will allow manufacturing enterprises to improve the efficiency of design and marketing activities by reducing the financial and human losses associated with the pre-treatment of orders. Logistic regression can be used to develop such a model of consumer choice.

  18. Analysis of strength-of-preference measures in dichotomous choice models

    Science.gov (United States)

    Donald F. Dennis; Peter Newman; Robert Manning

    2008-01-01

    Choice models are becoming increasingly useful for soliciting and analyzing multiple objective decisions faced by recreation managers and others interested in decisions involving natural resources. Choice models are used to estimate relative values for multiple aspects of natural resource management, not individually but within the context of other relevant decision...

  19. Stated choice models for predicting the impact of user fees at public recreation sites

    Science.gov (United States)

    Herbert W. Schroeder; Jordan Louviere

    1999-01-01

    A crucial question in the implementation of fee programs is how the users of recreation sites will respond to various levels and types of fees. Stated choice models can help managers anticipate the impact of user fees on people's choices among the alternative recreation sites available to them. Models developed for both day and overnight trips to several areas and...

  20. The choice of equipment mix and parameters for HTGR-based nuclear cogeneration plants

    International Nuclear Information System (INIS)

    Malevski, A.L.; Stoliarevski, A.Ya.; Vladimirov, V.T.; Larin, E.A.; Lesnykh, V.V.; Naumov, Yu.V.; Fedotov, I.L.

    1990-01-01

    electricity and steam and hot water. If the helium temperature at the core outlet reaches 1120-1220 K, it will be possible to create a single-loop HTGR-based gas-turbine installation using waste heat for heat supply. The economic feasibility of creating industrial and heating plants with HTGR, rational fields of their application in cogeneration systems can be determined after complex optimization analysis of schemes and their main parameters considering the whole complex of really influencing factors in their operation

  1. Modeling issues & choices in the data mining optimization ontology

    CSIR Research Space (South Africa)

    Keet, CM

    2013-05-01

    We describe the Data Mining Optimization Ontology (DMOP), which was developed to support informed decision-making at various choice points of the knowledge discovery (KD) process. It can be used as a reference by data miners, but its primary purpose...

  2. Consumer choice models on the effect of promotions in retailing

    NARCIS (Netherlands)

    Guyt, Jonne

    2015-01-01

    This doctoral thesis contains three empirical essays regarding the effect of promotions on consumer choices in a retailing context. The first essay studies the scheduling of featured price cuts for national brands, across retail chains. It shows that coordinating promotions across chains influences

  3. Black Students, Black Colleges: An African American College Choice Model.

    Science.gov (United States)

    McDonough, Patricia M.; Antonio, Anthony Lising; Trent, James W.

    1997-01-01

    Explores African Americans' college choice decisions, based on a national sample of 220,757 freshmen. Independent of gender, family income, or educational aspiration, the most powerful predictors for choosing historically black colleges and universities are geography, religion, the college's academic reputation, and relatives' desires. The top…

  4. Fund choice behavior and estimation of switching models: an experiment*

    NARCIS (Netherlands)

    Anufriev, M.; Bao, T.; Tuinstra, J.

    2013-01-01

    We run a laboratory experiment that contributes to the finance literature on "return chasing behavior" studying how investors switch between mutual funds driven by past performance of the funds. The subjects in this experiment make discrete choices between several (2, 3 or 4) experimental funds in

  5. Clustering reveals limits of parameter identifiability in multi-parameter models of biochemical dynamics.

    Science.gov (United States)

    Nienałtowski, Karol; Włodarczyk, Michał; Lipniacki, Tomasz; Komorowski, Michał

    2015-09-29

    Compared to engineering or physics problems, dynamical models in quantitative biology typically depend on a relatively large number of parameters. Progress in developing mathematics to manipulate such multi-parameter models and so enable their efficient interplay with experiments has been slow. Existing solutions are significantly limited by model size. In order to simplify analysis of multi-parameter models a method for clustering of model parameters is proposed. It is based on a derived statistically meaningful measure of similarity between groups of parameters. The measure quantifies to what extent changes in values of some parameters can be compensated by changes in values of other parameters. The proposed methodology provides a natural mathematical language to precisely communicate and visualise effects resulting from compensatory changes in values of parameters. As a result, relevant insight into identifiability analysis and experimental planning can be obtained. Analysis of NF-κB and MAPK pathway models shows that highly compensative parameters constitute clusters consistent with the network topology. The method applied to examine an exceptionally rich set of published experiments on the NF-κB dynamics reveals that the experiments jointly ensure identifiability of only 60% of model parameters. The method indicates which further experiments should be performed in order to increase the number of identifiable parameters. We currently lack methods that simplify broadly understood analysis of multi-parameter models. The introduced tools depict mutually compensative effects between parameters to provide insight regarding the role of individual parameters, identifiability and experimental design. The method can also find applications in related methodological areas of model simplification and parameter estimation.
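
    The similarity measure proposed in the paper is statistical and model-based; as a loose, generic illustration of the underlying idea (not the authors' measure), the Python sketch below clusters the parameters of a toy model by the cosine similarity of their finite-difference sensitivity vectors, so that parameters whose effects on the output can compensate each other fall into the same cluster.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    t = np.linspace(0, 10, 50)

    def model(theta):
        a, b, c, d = theta                        # toy model: a and b (and c and d) enter only via their product
        return a * b * np.exp(-c * d * t)

    theta0 = np.array([1.0, 2.0, 0.3, 1.0])

    # Finite-difference sensitivities of the output with respect to each parameter.
    eps = 1e-6
    y0 = model(theta0)
    S = np.empty((len(theta0), len(t)))
    for i in range(len(theta0)):
        th = theta0.copy()
        th[i] += eps
        S[i] = (model(th) - y0) / eps

    # Parameters with parallel sensitivity vectors have compensative effects (a change in one can be
    # offset by a change in the other); cosine distance compares the shape of the effect, not its size.
    D = pdist(S, metric="cosine")
    clusters = fcluster(linkage(D, method="average"), t=0.1, criterion="distance")
    print(clusters)   # expected: {a, b} and {c, d} fall into two separate clusters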

  6. Considerations on the choice of experimental parameters in residual stress measurements by hole-drilling and ESPI

    Directory of Open Access Journals (Sweden)

    C. Barile

    2014-10-01

    Residual stresses occur in many manufactured structures and components, and a great number of investigations have been carried out to study this phenomenon. Over the years, different techniques have been developed to measure residual stresses; nowadays the combination of the hole-drilling method (HD) with Electronic Speckle Pattern Interferometry (ESPI) has encountered great interest. The use of a high-sensitivity optical technique instead of the strain gage rosette has the advantage of providing full-field information without any contact with the sample, consequently reducing the cost and the time required for the measurement. The accuracy of the measurement, however, is influenced by the proper choice of several parameters: geometrical, analysis and experimental. In this paper, in particular, the effects of some of those parameters are investigated: uncertainty in the illumination and detection angles, the influence of the relative angle between the sensitivity vector of the system and the principal stress directions, the extension of the area of analysis, and the adopted drilling rotation speed. In conclusion, indications are provided for optimizing the measurement process, together with the identification of the major sources of error that can arise during the measurement and analysis stages.

  7. Forced Response Prediction of Turbine Blades with Flexible Dampers: The Impact of Engineering Modelling Choices

    Directory of Open Access Journals (Sweden)

    Chiara Gastaldi

    2017-12-01

    This paper focuses on flexible friction dampers (or “strips”) mounted on the underside of adjacent turbine blade platforms for sealing and damping purposes. A key parameter to ensure a robust and trustworthy design is the correct prediction of the maximum frequency shift induced by the strip damper coupling adjacent blades. While this topic has been extensively addressed on rigid friction dampers, both experimentally and numerically, no such investigation is available as far as flexible dampers are concerned. This paper builds on the authors’ prior experience with rigid dampers to investigate the peculiarities and challenges of a robust dynamic model of blade-strips systems. The starting point is a numerical tool implementing state-of-the-art techniques for the efficient solution of the nonlinear equations, e.g., multi-harmonic balance method with coupled static solution and state-of-the-art contact elements. The full step-by-step modelling process is here retraced and upgraded to take into account the damper flexibility: for each step, key modelling choices (e.g., mesh size, master nodes selection, contact parameters) which may affect the predicted response are addressed. The outcome is a series of guidelines which will help the designer assign numerical predictions the proper level of trust and outline a much-needed experimental campaign.

  8. Fuzzy social choice models explaining the government formation process

    CERN Document Server

    C Casey, Peter; A Goodman, Carly; Pook, Kelly Nelson; N Mordeson, John; J Wierman, Mark; D Clark, Terry

    2014-01-01

    This book explores the extent to which fuzzy set logic can overcome some of the shortcomings of public choice theory, particularly its inability to provide adequate predictive power in empirical studies. Especially in the case of social preferences, public choice theory has failed to produce the set of alternatives from which collective choices are made.  The book presents empirical findings achieved by the authors in their efforts to predict the outcome of government formation processes in European parliamentary and semi-presidential systems.  Using data from the Comparative Manifesto Project (CMP), the authors propose a new approach that reinterprets error in the coding of CMP data as ambiguity in the actual political positions of parties on the policy dimensions being coded. The range of this error establishes parties’ fuzzy preferences. The set of possible outcomes in the process of government formation is then calculated on the basis of both the fuzzy Pareto set and the fuzzy maximal set, and the pre...

  9. Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.

    Science.gov (United States)

    Duffull, Stephen B; Hooker, Andrew C

    2017-12-01

    Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
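
    The expectation-type criteria referred to here average a function of the Fisher information over the prior on the parameter values. A minimal Python sketch of that computation is given below for a simple two-parameter exponential model with additive unit-variance error; the model, the prior and the candidate sampling schedules are illustrative and unrelated to the paper's examples.

    import numpy as np

    def fim(times, theta):
        """Fisher information for y = A*exp(-k*t) with unit-variance Gaussian error:
        sum over sampling times of the outer product of the model gradient."""
        A, k = theta
        J = np.zeros((2, 2))
        for t in times:
            g = np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])   # d f / d(A, k)
            J += np.outer(g, g)
        return J

    def expected_log_det(times, prior_draws):
        """Robust (expectation-based) criterion: mean log-determinant of the FIM over prior draws."""
        return np.mean([np.linalg.slogdet(fim(times, th))[1] for th in prior_draws])

    rng = np.random.default_rng(0)
    prior = np.column_stack([rng.normal(1.0, 0.1, 2000),               # prior on A
                             rng.lognormal(np.log(0.5), 0.3, 2000)])   # prior on k

    # Compare two candidate sampling schedules under parameter uncertainty.
    for design in ([0.5, 1.0, 2.0, 4.0], [0.1, 0.2, 0.3, 0.4]):
        print(design, expected_log_det(design, prior))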

  10. A comparative study of machine learning classifiers for modeling travel mode choice

    NARCIS (Netherlands)

    Hagenauer, J; Helbich, M

    2017-01-01

    The analysis of travel mode choice is an important task in transportation planning and policy making in order to understand and predict travel demands. While advances in machine learning have led to numerous powerful classifiers, their usefulness for modeling travel mode choice remains largely

  11. Joint modeling of constrained path enumeration and path choice behavior: a semi-compensatory approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2010-01-01

    A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers’ spatiotemporal constraints. Within the proposed framework, travellers’ master sets are constructed by path generation, consideration sets are delimited according to spatio...... constraints are related to travellers’ socio-economic characteristics and that path choice is related to minimizing time and avoiding congestion....

  12. Modeling the Bullying Prevention Program Preferences of Educators: A Discrete Choice Conjoint Experiment

    Science.gov (United States)

    Cunningham, Charles E.; Vaillancourt, Tracy; Rimas, Heather; Deal, Ken; Cunningham, Lesley; Short, Kathy; Chen, Yvonne

    2009-01-01

    We used discrete choice conjoint analysis to model the bullying prevention program preferences of educators. Using themes from computerized decision support lab focus groups (n = 45 educators), we composed 20 three-level bullying prevention program design attributes. Each of 1,176 educators completed 25 choice tasks presenting experimentally…

  13. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  14. Multinomial Logit Model of Choices of Internet Modes in Iraq

    OpenAIRE

    Ph.D. Almas Heshmati; Ph.D. Firas H. Al-Hammadany

    2014-01-01

    Iraq is a country that has the potential to explode onto the Internet market due to the fact that much of Iraq is still largely without access to the Internet. Iraq's market has much room for corporate and individual investments in Internet technology, mainly, Internet access. However, this requires a deep understanding of the user with regards to the Internet and the market characteristics involved. This study is concerned with the users' choice of Internet mode connections in Iraq. There ...
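
    As an illustration of the model class named in the title (not of the paper's actual survey data), the Python sketch below fits a multinomial logit model of a three-way connection-mode choice on synthetic household covariates using statsmodels; the variable names, modes and coefficients are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 1000
    income = rng.normal(0, 1, n)          # standardised household income (hypothetical)
    urban = rng.binomial(1, 0.6, n)       # 1 = urban household (hypothetical)

    # Latent utilities of three hypothetical connection modes: 0 = none, 1 = mobile, 2 = broadband.
    u = np.column_stack([np.zeros(n),
                         0.5 + 0.8 * income + 0.3 * urban,
                         -0.5 + 1.5 * income + 1.0 * urban])
    u += rng.gumbel(size=u.shape)
    y = u.argmax(axis=1)

    X = sm.add_constant(pd.DataFrame({"income": income, "urban": urban}))
    res = sm.MNLogit(y, X).fit(disp=False)
    print(res.summary())                  # coefficients are reported relative to the base category (mode 0)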

  15. Understanding the formation and influence of attitudes in patients' treatment choices for lower back pain: Testing the benefits of a hybrid choice model approach

    DEFF Research Database (Denmark)

    Kløjgaard, Mirja Elisabeth; Hess, S.

    2014-01-01

    A growing number of studies across different fields are making use of a new class of choice models, labelled variably as hybrid model structures or integrated choice and latent variable models, and incorporating the role of attitudes in decision making. To date, this technique has not been used...... in health economics. The present paper looks at the formation of such attitudes and their role in patients' treatment choices in the context of low back pain. We use stated choice data collected from a sample of 561 patients with 348 respondents referred to a regional spine centre in Middelfart, Denmark...... in spring/summer 2012. We show how the hybrid model structure is able to make a link between attitudinal questions and treatment choices, and also explains variation of these attitudes across key socio-demographic groups. However, we also show how, in this case, only a small share of the overall...

  16. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  17. Behavioural Models for Route Choice of Passengers in Multimodal Public Transport Networks

    DEFF Research Database (Denmark)

    Anderson, Marie Karen

    The subject of this thesis is behavioural models for route choice of passengers in multimodal public transport networks. While research in sustainable transport has dedicated much attention toward the determinants of choice between car and sustainable travel options, it has devoted less attention...... and processed in this study. The characteristics of the collected data are analysed and the actual choices of the public transport passengers are revealed in the thesis. The data were map-matched to the GIS network of the area and quality controlled in a multi-step procedure. From the choice set generation...... perspective, this thesis generates attractive routes for the origin-destination pair of each traveller. The problem is not trivial when considering the combinatorial nature of the problem. The dense network...

  18. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  19. A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.

    Science.gov (United States)

    Hardy, Lorna; Hogarth, Lee

    2017-12-01

    This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Inventory) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two-alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline but not by positive mood (p = .54, ηp2 = .008). The negative mood-induced increase in alcohol choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers.

  20. Study on Parameters Modeling of Wind Turbines Using SCADA Data

    Directory of Open Access Journals (Sweden)

    Yonglong YAN

    2014-08-01

    Taking advantage of the massive monitoring data currently available from the Supervisory Control and Data Acquisition (SCADA) systems of wind farms, building data models of the state parameters of wind turbines (WTs) is of great significance for anomaly detection, early warning and fault diagnosis. The operational conditions and the relationships between the state parameters of wind turbines are complex, and it is therefore difficult to establish accurate models of the state parameters; a modeling method for the state parameters of wind turbines that takes parameter selection into account is proposed. Firstly, by analyzing the characteristics of the SCADA data, a reasonable range of data and monitoring parameters are chosen. Secondly, a neural network algorithm is adopted, and the selection method for the input parameters of the model is presented. Generator bearing temperature and cooling air temperature are taken as target parameters, the two corresponding models are built, and the input parameters of each model are selected. Finally, the parameter selection method in this paper and the method using genetic algorithm-partial least squares (GA-PLS) are analyzed comparatively, and the results show that the proposed methods are correct and effective. Furthermore, the modeling of the two parameters illustrates that the method in this paper can be applied to other state parameters of wind turbines.
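
    A minimal Python sketch of the modelling step described above, with entirely hypothetical SCADA channel names and synthetic data standing in for real 10-minute records: a small neural network predicts generator bearing temperature from a selected set of input parameters, and the residuals of such a model are what an anomaly-detection scheme would monitor.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    n = 5000
    # Hypothetical SCADA channels (synthetic stand-ins for 10-minute averages).
    wind_speed   = rng.weibull(2.0, n) * 8.0
    active_power = np.clip(0.05 * wind_speed**3, 0, 20.0) + rng.normal(0, 0.3, n)
    ambient_temp = rng.normal(12, 6, n)
    rotor_speed  = 5 + 0.8 * wind_speed + rng.normal(0, 0.2, n)
    bearing_temp = 30 + 1.2 * active_power + 0.5 * ambient_temp + 0.3 * rotor_speed + rng.normal(0, 0.8, n)

    X = np.column_stack([wind_speed, active_power, ambient_temp, rotor_speed])
    X_tr, X_te, y_tr, y_te = train_test_split(X, bearing_temp, test_size=0.3, random_state=0)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
    model.fit(X_tr, y_tr)

    residuals = y_te - model.predict(X_te)
    print("residual std [degC]:", residuals.std())   # a drift in these residuals would flag an anomaly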

  1. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

    A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit, providing a flexible form for the correlation among alternatives and a tractable closed-form expression; in addition, heterogeneity in alternative variance can also be addressed. Paths are compared by pairs, where the superiority of the binary Probit can be fully used. The Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution to incorporate the normal distribution in route choice with an analytical expression.
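
    The pairwise building block mentioned here is the binary Probit: for two paths with jointly normal utilities, the probability that path i is preferred to path j has the closed form Φ((V_i − V_j)/σ_ij), with σ_ij the standard deviation of the utility difference. The Python sketch below evaluates these pairwise probabilities for a toy three-route example; it illustrates only the binary kernel, not the paper's Probit-based aggregation or nesting scheme, and all utilities and covariances are invented.

    import numpy as np
    from scipy.stats import norm

    # Toy example: three routes with systematic utilities (e.g., -time - 0.5*congestion)
    # and a covariance matrix of the normally distributed utility errors.
    V = np.array([-20.0, -21.0, -19.5])
    Sigma = np.array([[4.0, 2.0, 0.5],
                      [2.0, 4.0, 0.5],
                      [0.5, 0.5, 4.0]])   # overlapping routes 0 and 1 share error (covariance 2.0)

    def probit_pair(i, j):
        """P(route i preferred to route j) under jointly normal utilities: Phi((V_i - V_j) / sigma_ij)."""
        var_diff = Sigma[i, i] + Sigma[j, j] - 2 * Sigma[i, j]
        return norm.cdf((V[i] - V[j]) / np.sqrt(var_diff))

    for i in range(3):
        for j in range(i + 1, 3):
            print(f"P({i} beats {j}) = {probit_pair(i, j):.3f}")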

  2. A robotics-based approach to modeling of choice reaching experiments on visual attention

    Directory of Open Access Journals (Sweden)

    Soeren eStrauss

    2012-04-01

    The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements towards a target in an odd-colour search task, i.e. reaching for a green square among red squares and vice versa (e.g. Song & Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed towards a distractor and only later were adjusted towards the target. These curved trajectories occurred particularly frequently when the target in the directly preceding trial had a different colour (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke & Backhaus, 2011; Heinke & Humphreys, 2003). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen & Schoener, 2002). The robot arm is able to mimic the results of the odd-colour search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper.

  3. Departure time choice: Modelling individual preferences, intention and constraints

    DEFF Research Database (Denmark)

    Thorhauge, Mikkel

    to change their departure time rather than changing their transport mode to avoid congestion (Hendrickson and Planke, 1984; SACTRA, 1994; Kroes et al., 1996; Hess et al., 2007a). Hence, understanding the departure time choice from an individual perspective is important to develop policies aimed to address...... working hours) as the penalty of late arrival is very likely to be higher for individuals with constraints on arrival time. However, flexibility is not only a matter of fixed arrival time. Activities can be mandatory or discretionary (Yamamoto and Kitamura, 1999), performed alone or jointly with family...... departure time. Parallel with the micro-economic theory, the psychology literature has evidenced that individuals’ behaviours are driven by underlying latent constructs, such as attitude, norms and perceptions. In the past decades, more attention has been given to incorporate and understand underlying...

  4. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration... was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed...

  5. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)

    Asrat Worku

    The models give consistently larger stiffness for the Winkler springs as compared to previously proposed similar continuum-based models that ignore the lateral stresses. ...... (ν = 0.25 and E = 40 MPa); (b) a medium stiff clay (ν = 0.45 and E = 50 MPa). In contrast to this, ...

  6. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven G. McNulty; Ge Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  7. Choices and changes: Eccles' Expectancy-Value model and upper-secondary school students' longitudinal reflections about their choice of a STEM education

    Science.gov (United States)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-03-01

    During the past 30 years, Eccles' comprehensive social-psychological Expectancy-Value Model of Motivated Behavioural Choices (EV-MBC model) has been proven suitable for studying educational choices related to Science, Technology, Engineering and/or Mathematics (STEM). The reflections of 15 students in their last year in upper-secondary school concerning their choice of tertiary education were examined using quantitative EV-MBC surveys and repeated qualitative interviews. This article presents the analyses of three cases in detail. The analytical focus was whether the factors indicated in the EV-MBC model could be used to detect significant changes in the students' educational choice processes. An important finding was that the quantitative EV-MBC surveys and the qualitative interviews gave quite different results concerning the students' considerations about the choice of tertiary education, and that significant changes in the students' reflections were not captured by the factors of the EV-MBC model. This questions the validity of the EV-MBC surveys. Moreover, the quantitative factors from the EV-MBC model did not sufficiently explain students' dynamical educational choice processes where students in parallel considered several different potential educational trajectories. We therefore call for further studies of the EV-MBC model's use in describing longitudinal choice processes and especially in investigating significant changes.

  8. The selection of a mode of urban transportation: Integrating psychological variables to discrete choice models

    International Nuclear Information System (INIS)

    Cordoba Maquilon, Jorge E; Gonzalez Calderon, Carlos A; Posada Henao, John J

    2011-01-01

    A study using revealed preference surveys and psychological tests was conducted. Key psychological variables of behavior involved in the choice of transportation mode in a population sample of the Metropolitan Area of the Valle de Aburra were detected. The experiment used the random utility theory for discrete choice models and reasoned action in order to assess beliefs. This was used as a tool for analysis of the psychological variables using the sixteen personality factor questionnaire (16PF test). In addition to the revealed preference surveys, two other surveys were carried out: one with socio-economic characteristics and the other with latent indicators. This methodology allows for an integration of discrete choice models and latent variables. The integration makes the model operational and quantifies the unobservable psychological variables. The most relevant result obtained was that anxiety affects the choice of urban transportation mode and shows that physiological alterations, as well as problems in perception and beliefs, can affect the decision-making process.

  9. Institutional influences on business model choice by new ventures in the microgenerated energy industry

    International Nuclear Information System (INIS)

    Provance, Mike; Donnelly, Richard G.; Carayannis, Elias G.

    2011-01-01

    Business model choice is an important source of competitive advantage for new ventures in the microgeneration sector. Yet, existing literature focuses on strategic management of internal resources as the constraints in this choice process. In the energy sector, external factors may be at least as influential in shaping these business models. This paper examines the roles of politico-institutional and socio-institutional dynamics in the choice of business models for microgeneration ventures. Business models have traditionally been viewed as constructions of the internal values, strategies, and resources of organizations. But this perspective overlooks the role that external forces have on these models, particularly in more highly institutionalized contexts like microgeneration. When these factors are introduced into the existing framework for business model choice, the business model appears to be based less on firm decision-making and more on variables that exist within national innovation systems and political structure, local socio-technological conditions, and the cognitive abilities of the entrepreneur and corresponding stakeholders. - Highlights: → This work provides a theoretical foundation for variation in microgeneration business models. → Explores institutional influences on strategic view of business model choice. → Compares the nature of microgeneration across geo-political contexts.

  10. An integrated framework for modeling freight mode and route choice.

    Science.gov (United States)

    2013-10-01

    A number of statewide travel demand models have included freight as a separate component in analysis. Unlike passenger travel, freight has not gained equivalent attention because of a lack of data and difficulties in modeling. In the current state ...

  11. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
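
    A stripped-down Python sketch of the sampling-plus-regression-tree machinery described in this abstract, with a toy two-parameter recession model and only two performance criteria (NSE and the KGE bias component beta) standing in for a real hydrological model and the full criteria set; scipy's Latin hypercube sampler and scikit-learn's decision trees are used here as generic stand-ins for the authors' implementation.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(3)
    t = np.arange(100)
    obs = 5.0 * np.exp(-0.05 * t) + 1.0 + rng.normal(0, 0.1, t.size)   # synthetic "observed" discharge

    def simulate(recession, baseflow):
        return 5.0 * np.exp(-recession * t) + baseflow                  # toy 2-parameter model

    # Latin hypercube sample of the parameter space.
    sample = qmc.LatinHypercube(d=2, seed=1).random(2000)
    params = qmc.scale(sample, [0.01, 0.0], [0.2, 3.0])                 # recession in [0.01, 0.2], baseflow in [0, 3]

    def nse(sim):            # Nash-Sutcliffe efficiency
        return 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

    def beta(sim):           # bias component of KGE
        return sim.mean() / obs.mean()

    crit = np.array([[nse(simulate(*p)), beta(simulate(*p))] for p in params])

    # RT direction 1: which parameters explain a given performance criterion?
    for k, name in enumerate(["NSE", "beta"]):
        tree = DecisionTreeRegressor(max_depth=3).fit(params, crit[:, k])
        print(name, "importance of (recession, baseflow):", tree.feature_importances_)

    # RT direction 2: which criteria respond to a given parameter?
    for k, name in enumerate(["recession", "baseflow"]):
        tree = DecisionTreeRegressor(max_depth=3).fit(crit, params[:, k])
        print(name, "importance of (NSE, beta):", tree.feature_importances_)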

  12. The Drift Diffusion Model can account for the accuracy and reaction time of value-based choices under high and low time pressure

    Directory of Open Access Journals (Sweden)

    Milica Milosavljevic

    2010-10-01

    An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
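
    A minimal Python simulation of the basic mechanism (not the seven-parameter variant fitted in the paper): the drift is proportional to the value difference between the two items, evidence accumulates with Gaussian noise until it hits one of two barriers, and lowering the barrier mimics time pressure. All parameter values are illustrative.

    import numpy as np

    def ddm_trial(v_left, v_right, barrier=1.0, drift_scale=0.3,
                  noise=1.0, dt=0.002, non_decision=0.3, rng=None):
        """Simulate one binary value-based choice with a simple drift-diffusion process.
        Returns (choice, reaction_time); choice is 'left' or 'right'."""
        rng = rng or np.random.default_rng()
        drift = drift_scale * (v_left - v_right)
        x, t = 0.0, 0.0
        while abs(x) < barrier:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return ("left" if x > 0 else "right"), t + non_decision

    rng = np.random.default_rng(0)
    for barrier, label in [(1.0, "low time pressure"), (0.6, "high time pressure")]:
        trials = [ddm_trial(v_left=3, v_right=2, barrier=barrier, rng=rng) for _ in range(500)]
        acc = np.mean([c == "left" for c, _ in trials])           # "left" is the higher-valued item
        rt = np.mean([t for _, t in trials])
        print(f"{label}: accuracy={acc:.2f}, mean RT={rt:.2f}s")  # lower barrier -> faster but less accurate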

  13. Measurement of charge with an active integrator in the presence of noise and pileup effects. A choice of parameters in the charge division method

    International Nuclear Information System (INIS)

    Fanet, H.; Lugol, J.C.

    1991-01-01

    In the presence of electronics noise and pileup effects it is possible to measure charge with an active integrator. The subject of this paper is to deal with the choice of measurement parameters. An application of position sensing with the charge division method is studied and results are compared to those obtained with POMME polarimeter electronics. (orig.)

  14. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  15. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...

  16. Value-based choice: An integrative, neuroscience-informed model of health goals.

    Science.gov (United States)

    Berkman, Elliot T

    2018-01-01

    Traditional models of health behaviour focus on the roles of cognitive, personality and social-cognitive constructs (e.g. executive function, grit, self-efficacy), and give less attention to the process by which these constructs interact in the moment that a health-relevant choice is made. Health psychology needs a process-focused account of how various factors are integrated to produce the decisions that determine health behaviour. I present an integrative value-based choice model of health behaviour, which characterises the mechanism by which a variety of factors come together to determine behaviour. This model imports knowledge from research on behavioural economics and neuroscience about how choices are made to the study of health behaviour, and uses that knowledge to generate novel predictions about how to change health behaviour. I describe anomalies in value-based choice that can be exploited for health promotion, and review neuroimaging evidence about the involvement of midline dopamine structures in tracking and integrating value-related information during choice. I highlight how this knowledge can bring insights to health psychology using the illustrative case of healthy eating. Value-based choice is a viable model for health behaviour and opens new avenues for mechanism-focused intervention.

  17. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
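
    The two-blur-parameter edge model itself is not reproduced in the abstract, so the Python sketch below assumes a simple asymmetric form, an error-function ramp with a separate blur width on the dark side and on the light side, and estimates both widths with the same kind of brute-force grid search over a fitting error that the paper describes. The functional form, brightness values and search grid are assumptions for illustration only.

    import numpy as np
    from scipy.special import erf

    def edge_profile(x, dark, light, sigma_d, sigma_l):
        """Assumed asymmetric edge model: an erf ramp with a separate blur width on each side of the edge."""
        sigma = np.where(x < 0, sigma_d, sigma_l)
        return dark + (light - dark) * 0.5 * (1 + erf(x / (np.sqrt(2) * sigma)))

    # Synthetic observed profile (ground truth sigma_d=0.8, sigma_l=1.6) with additive noise.
    x = np.arange(-10, 11, 1.0)
    rng = np.random.default_rng(5)
    observed = edge_profile(x, dark=40, light=180, sigma_d=0.8, sigma_l=1.6) + rng.normal(0, 1.0, x.size)

    # Brute-force search for the two blur parameters minimising the RMSE to the observed profile.
    grid = np.arange(0.2, 3.01, 0.05)
    best = min(((np.sqrt(np.mean((edge_profile(x, 40, 180, sd, sl) - observed)**2)), sd, sl)
                for sd in grid for sl in grid))
    print("RMSE=%.3f  sigma_dark=%.2f  sigma_light=%.2f" % best)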

  18. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    Science.gov (United States)

    Paulson, James A.

    This paper reviews the application of the EM Algorithm to marginal maximum likelihood estimation of parameters in the latent class model and extends the algorithm to the case where there are monotone homogeneity constraints on the item parameters. It is shown that the EM algorithm can be used to obtain marginal maximum likelihood estimates of the…
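
    For reference, the basic (unconstrained) EM iteration for a latent class model with binary items is sketched below in Python on synthetic data; the monotone homogeneity constraints discussed in the paper would have to be imposed on the item parameters in the M-step and are not implemented here.

    import numpy as np

    rng = np.random.default_rng(11)
    n, J, C = 2000, 6, 2                      # respondents, binary items, latent classes
    true_pi = np.array([0.4, 0.6])            # class sizes
    true_p = np.array([[0.2] * J, [0.8] * J]) # item-endorsement probabilities per class
    z = rng.choice(C, size=n, p=true_pi)
    X = (rng.random((n, J)) < true_p[z]).astype(float)

    # EM for the latent class model.
    pi = np.full(C, 1.0 / C)
    p = rng.uniform(0.3, 0.7, size=(C, J))
    for _ in range(200):
        # E-step: posterior class memberships given the current parameters.
        logpost = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)   # n x C (unnormalised)
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: posterior-weighted proportions update the class sizes and item parameters.
        pi = post.mean(axis=0)
        p = (post.T @ X) / post.sum(axis=0)[:, None]
        p = np.clip(p, 1e-6, 1 - 1e-6)

    print("estimated class sizes:", np.round(pi, 2))
    print("estimated item probabilities:", np.round(p, 2))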

  19. Beyond Garbage Cans: An AI Model of Organizational Choice.

    Science.gov (United States)

    Masuch, Michael; LaPotin, Perry

    1989-01-01

    Building on a simulation methodology, this study presents a new organizational decision-making model that complements the original garbage can model and overcomes design-related limitations by using artificial intelligence tools. Decision-making in organized structures may become as disorderly as in organized anarchies, but for different reasons.…

  20. Models of Teaching: Indicators Influencing Teachers' Perception of Pedagogical Choice

    Science.gov (United States)

    Nordyke, Alison Michelle

    2011-01-01

    The models of teaching are systematic tools that allow teachers to vary their classroom pedagogical practices to meet the needs of all learners in their classroom. This study was designed to determine key factors that influence teachers' decisions when determining a model of teaching for classroom instruction and to identify how teacher training…

  1. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast

  2. Modelling travel time perception in transport mode choices

    NARCIS (Netherlands)

    Varotto, S.F.; Glerum, A.; Stathopoulos, A.; Bierlaire, M.; Longo, G.

    2015-01-01

    Travel behaviour models typically rely on data afflicted by errors, in perception (e.g., over/under-estimation by traveller) and measurement (e.g., software or researcher imputation error). Such errors are shown to have a relevant impact on model outputs. So far a comprehensive framework to deal

  3. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  4. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Background: An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODEs). Most of the existing estimation methods involve finding the global minimum of data-fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results: In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. In particular, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions: The proposed incremental estimation method is able to tackle the lack of complete parameter identifiability and to significantly reduce the computational effort in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
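
    As a rough illustration of the incremental idea described above, the Python sketch below smooths concentration time profiles, takes their time-slopes, solves the stoichiometric relation dX/dt = S v for the fluxes at each time point, and only then fits GMA-type rate laws to those fluxes. The two-reaction toy pathway, rate laws and parameter values are assumptions, not the networks used in the paper.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import least_squares

      # toy linear pathway X1 -> X2 ->, with dX1/dt = -v1 and dX2/dt = v1 - v2
      S = np.array([[-1,  0],
                    [ 1, -1]])

      t = np.linspace(0.5, 10, 30)
      X = np.vstack([np.exp(-0.4 * t),                                   # assumed "data"
                     0.4 / (0.4 - 0.1) * (np.exp(-0.1 * t) - np.exp(-0.4 * t))]).T

      # step 1: smooth each concentration profile and differentiate the smoother
      splines = [UnivariateSpline(t, X[:, i], s=1e-4) for i in range(X.shape[1])]
      slopes = np.vstack([sp.derivative()(t) for sp in splines]).T       # dX/dt

      # step 2: dynamic flux estimation, solving S v = dX/dt at every time point
      fluxes = np.vstack([np.linalg.lstsq(S, slopes[k], rcond=None)[0] for k in range(len(t))])

      # step 3: fit GMA-type rate laws v_i = gamma_i * X_i**f_i to the estimated fluxes
      def residual(p):
          g1, f1, g2, f2 = p
          return np.concatenate([fluxes[:, 0] - g1 * X[:, 0] ** f1,
                                 fluxes[:, 1] - g2 * X[:, 1] ** f2])

      fit = least_squares(residual, x0=[0.5, 1.0, 0.5, 1.0],
                          bounds=([0, 0, 0, 0], [5, 3, 5, 3]))
      print(fit.x)   # close to (0.4, 1, 0.1, 1) for this toy system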

  5. Analysis Test of Understanding of Vectors with the Three-Parameter Logistic Model of Item Response Theory and Item Response Curves Technique

    Science.gov (United States)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-01-01

    This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fitted with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
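
    For readers unfamiliar with the model named above, the following minimal sketch evaluates the three-parameter logistic (3PL) item response function, in which a is the discrimination, b the difficulty and c the guessing parameter; the item values are hypothetical and the PARSCALE estimation itself is not reproduced.

      import numpy as np

      def p_correct_3pl(theta, a, b, c, D=1.7):
          """Three-parameter logistic IRT model: probability that an examinee with
          ability theta answers correctly an item with discrimination a,
          difficulty b and (pseudo-)guessing c."""
          return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

      # illustrative item response curve for one hypothetical TUV item
      theta = np.linspace(-3, 3, 7)
      print(np.round(p_correct_3pl(theta, a=1.2, b=0.5, c=0.25), 3))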

  6. An approach to adjustment of relativistic mean field model parameters

    Directory of Open Access Journals (Sweden)

    Bayram Tuncay

    2017-01-01

    The Relativistic Mean Field (RMF) model with a small number of adjusted parameters is a powerful tool for correct predictions of various ground-state nuclear properties of nuclei. Its success in describing nuclear properties of nuclei is directly related to the adjustment of its parameters using experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed for improvement of the RMF model parameters. In particular, the ANN method was able to capture the relations between the RMF model parameters and their predictions for the binding energies (BEs) of 58Ni and 208Pb, which were found to be in agreement with the literature values.

  7. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.

  8. Do Methodological Choices in Environmental Modeling Bias Rebound Effects? A Case Study on Electric Cars.

    Science.gov (United States)

    Font Vivanco, David; Tukker, Arnold; Kemp, René

    2016-10-18

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes in demand; choices related to modeling the environmental burdens from such changes have received less attention. In this study, we analyze choices in the environmental assessment methods (life cycle assessment (LCA) and hybrid LCA) and environmental input-output databases (E3IOT, Exiobase and WIOD) used as a source of bias. The analysis is done for a case study on battery electric and hydrogen cars in Europe. The results describe moderate rebound effects for both technologies in the short term. Additionally, long-run scenarios are calculated by simulating the total cost of ownership, which describe notable rebound effect sizes (from 26 to 59% and from 18 to 28%, respectively, depending on the methodological choices) under favorable economic conditions. Relevant sources of bias are found to be related to incomplete background systems, technology assumptions and sectorial aggregation. These findings highlight the importance of the method setup and of sensitivity analyses of choices related to environmental modeling in rebound effect assessments.

  9. Lumped parameter models for the interpretation of environmental tracer data

    International Nuclear Information System (INIS)

    Maloszewski, P.; Zuber, A.

    1996-01-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs
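
    The lumped-parameter approach summarized above rests on a convolution of the tracer input with a transit-time distribution g(tau). The sketch below implements that convolution for the exponential model (EM), optionally with radioactive decay; the mean transit time, time step and pulse input are illustrative values only.

      import numpy as np

      def lumped_parameter_output(c_in, dt, g, lam=0.0):
          """Output tracer concentration of a steady-state lumped-parameter model:
          c_out(t) = sum_tau c_in(t - tau) * g(tau) * exp(-lam * tau) * dt."""
          tau = np.arange(len(c_in)) * dt
          weights = g(tau) * np.exp(-lam * tau) * dt
          return np.convolve(c_in, weights)[: len(c_in)]

      # exponential model (EM): g(tau) = exp(-tau / T) / T, mean transit time T years
      T = 20.0
      g_em = lambda tau: np.exp(-tau / T) / T

      dt = 1.0                                   # yearly steps
      c_in = np.zeros(100)
      c_in[10] = 100.0                           # idealized tracer pulse in year 10
      c_out = lumped_parameter_output(c_in, dt, g_em, lam=np.log(2) / 12.32)  # tritium-like decay
      print(np.round(c_out[10:20], 2))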

  10. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  11. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  12. Exploring the interdependencies between parameters in a material model.

    Energy Technology Data Exchange (ETDEWEB)

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  13. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed and therefore BMP representation. This is, however, only possible for gauged watersheds. There are many watersheds for which there are very little or no monitoring data available, which raises the question as to whether it would be possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
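
    A minimal sketch of the regression-based regionalization idea is given below: parameters calibrated on gauged watersheds are regressed against watershed attributes, and the fitted regressions are then applied to the attributes of an ungauged watershed (the global-averaging alternative is a one-liner). The parameter names, attributes and numbers are invented and are not the Arkansas data.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # calibrated SWAT-style parameters (e.g. CN2, ESCO) for five gauged watersheds
      calibrated_params = np.array([[72.0, 0.80],
                                    [68.0, 0.72],
                                    [81.0, 0.90],
                                    [76.0, 0.85],
                                    [70.0, 0.78]])

      # watershed attributes: [mean slope (%), forest cover (%), drainage area (km2)]
      attributes = np.array([[3.1, 55.0, 120.0],
                             [2.4, 63.0,  95.0],
                             [5.0, 30.0, 210.0],
                             [4.2, 41.0, 160.0],
                             [2.9, 58.0, 130.0]])

      # one linear regression per parameter: parameter = f(attributes)
      regional_model = LinearRegression().fit(attributes, calibrated_params)

      # transfer to an ungauged watershed described only by its attributes
      ungauged = np.array([[3.8, 47.0, 140.0]])
      print(regional_model.predict(ungauged))        # regionalized parameter set
      print(calibrated_params.mean(axis=0))          # global-averaging alternative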

  14. Model choice considerations and information integration using analytical hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R [Los Alamos National Laboratory]; Hemez, Francois M [Los Alamos National Laboratory]; Booker, Jane M [BOOKER SCIENTIFIC]; Ross, Timothy J. [UNM]

    2010-10-15

    Using the theory of information-gap for decision-making under severe uncertainty, it has been shown that model output compared to experimental data contains irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty and confidence-in-prediction. We illustrate a strategy for information integration by gathering and aggregating all available data, knowledge, theory, experience, and similar applications. Such integration of information becomes important when the physics is difficult to model, when observational data are sparse or difficult to measure, or both. To aggregate the available information, we take an inference perspective. Models are not rejected, nor wasted, but can be integrated into a final result. We show an example of information integration using Saaty's Analytic Hierarchy Process (AHP), integrating theory, simulation output and experimental data. We used expert elicitation to determine weights for two models and two experimental data sets, by forming pair-wise comparisons between model output and experimental data. In this way we transform epistemic and/or statistical strength from one field of study into another branch of physical application. The price to pay for utilizing all available knowledge is that the inferences drawn from the integrated information must be accounted for, and the costs can be considerable. Focusing on inferences and inference uncertainty (IU) is one way to understand complex information.
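
    The aggregation step mentioned above can be illustrated with a short sketch of Saaty's AHP: a reciprocal pairwise-comparison matrix is reduced to weights via its principal eigenvector. The 4x4 comparison values for two models and two experimental data sets are purely hypothetical.

      import numpy as np

      def ahp_weights(pairwise):
          """Principal-eigenvector weights of a reciprocal pairwise-comparison
          matrix (Saaty's AHP), normalized to sum to one."""
          vals, vecs = np.linalg.eig(pairwise)
          principal = np.real(vecs[:, np.argmax(np.real(vals))])
          w = np.abs(principal)
          return w / w.sum()

      # hypothetical comparison of {model A, model B, experiment 1, experiment 2}
      A = np.array([[1.0, 3.0, 1/2, 1.0],
                    [1/3, 1.0, 1/4, 1/2],
                    [2.0, 4.0, 1.0, 2.0],
                    [1.0, 2.0, 1/2, 1.0]])

      print(np.round(ahp_weights(A), 3))   # relative weights for information integration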

  15. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  16. Brownian motion model with stochastic parameters for asset prices

    Science.gov (United States)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.

  17. Extended cox regression model: The choice of timefunction

    Science.gov (United States)

    Isik, Hatice; Tutkun, Nihal Ata; Karasoy, Durdu

    2017-07-01

    The Cox regression model (CRM), which takes into account the effect of censored observations, is one of the most applicable and widely used models in survival analysis to evaluate the effects of covariates. Proportional hazards (PH), requiring a constant hazard ratio over time, is the assumption of the CRM. Using the extended CRM provides a test of the PH assumption by including a time-dependent covariate, or an alternative model in case of non-proportional hazards. In this study, different types of real data sets are used to choose the time function, and the differences between time functions are analyzed and discussed.

  18. AGRICULTURAL COOPERATION IN RUSSIA: THE PROBLEM OF ORGANIZATION MODEL CHOICE

    Directory of Open Access Journals (Sweden)

    J. Nilsson

    2008-09-01

    In today's Russia many agricultural co-operatives are established from the top down. The national project "Development of Agroindustrial Complex" and other governmental programs initiate the formation of cooperative societies. These cooperatives are organized in accordance with the traditional cooperative model. Many of them do not, however, have any real business activities. The aim of this paper is to investigate whether traditional cooperatives (following principles such as collective ownership, one member one vote, equal treatment, and solidarity) constitute the best organizational model for cooperative societies under the present conditions in Russian agriculture.

  19. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation…

  20. The Choice of a Progressive Bilingual Education Model

    Science.gov (United States)

    Zelin, Li

    2017-01-01

    Bilingual education has unique and complex features. In the course of language study, with the mother tongue as a foundation, acquiring a second language depends on the features of student's learning and age. Based on the construction of J. Cummins's (1984) dual iceberg theory dual-language model, students' bilingual education is founded on the…

  1. Modelling the energy budget and prey choice of eider ducks

    NARCIS (Netherlands)

    Brinkman, A.G.; Ens, B.J.; Kats, R.K.H.

    2003-01-01

    We developed an energy and heat budget model for eider ducks. All relevant processes have been quantified. Food processing, diving costs, prey heating, the costs of crushing mussel shells, heat losses during diving as well as during resting, and heat production as a result of muscle activity are

  2. Determination of the Corona model parameters with artificial neural networks

    International Nuclear Information System (INIS)

    Ahmet, Nayir; Bekir, Karlik; Arif, Hashimov

    2005-01-01

    The aim of this study is to calculate new model parameters taking into account the corona of electrical transmission line wires. For this purpose, a neural network model is proposed for modeling the corona frequency characteristics. This model was then compared with another model developed at the Polytechnic Institute of Saint Petersburg. The results of developing the specified corona model, of calculating its influence on the wave processes in multi-wire lines, and of determining its parameters are presented. Calculation equations are given for an electrical transmission line, with allowance for the skin effect in the ground and wires, with reference to the developed corona model.

  3. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
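
    One way to reason about a prior for the precision parameter α, consistent with the discussion above, is to look at the prior it induces on the number of clusters. The sketch below pushes a Gamma prior on α through the Chinese restaurant process for a fixed sample size; the Gamma hyperparameters and sample size are illustrative assumptions, not the values used in the stream-fish analysis.

      import numpy as np

      rng = np.random.default_rng(0)

      def crp_num_clusters(alpha, n):
          """Simulate the number of occupied clusters under a Chinese restaurant
          process with precision alpha for n observations."""
          counts = []
          for _ in range(n):
              probs = np.array(counts + [alpha], dtype=float)
              probs /= probs.sum()
              k = rng.choice(len(probs), p=probs)
              if k == len(counts):
                  counts.append(1)          # open a new cluster
              else:
                  counts[k] += 1            # join an existing cluster
          return len(counts)

      # induced prior on the number of clusters for n = 50 counts, under a
      # Gamma(shape=2, scale=1) prior on alpha (hyperparameters chosen for illustration)
      n, draws = 50, 2000
      alphas = rng.gamma(shape=2.0, scale=1.0, size=draws)
      clusters = np.array([crp_num_clusters(a, n) for a in alphas])
      print(np.percentile(clusters, [5, 50, 95]))     # implied clustering level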

  4. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

    The influence of habits, giving rise to inertia effect, in the choice process has been intensely debated in the literature. Typically inertia is accounted for by letting the indirect utility functions of the alternatives of the choice situation at time t depend on the outcome of the choice made...... gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. Tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...... effect of the previous trips is not constant but it depends on the individual propensity to undertake habitual trips which is captured by the individual specific latent variable. And the frequency of the trips in the previous week is used as an indicator of the habitual behavior. The model estimation...

  5. Effects of chronic administration of drugs of abuse on impulsive choice (delay discounting) in animal models.

    Science.gov (United States)

    Setlow, Barry; Mendez, Ian A; Mitchell, Marci R; Simon, Nicholas W

    2009-09-01

    Drug-addicted individuals show high levels of impulsive choice, characterized by preference for small immediate over larger but delayed rewards. Although the causal relationship between chronic drug use and elevated impulsive choice in humans has been unclear, a small but growing body of literature over the past decade has shown that chronic drug administration in animal models can cause increases in impulsive choice, suggesting that a similar causal relationship may exist in human drug users. This article reviews this literature, with a particular focus on the effects of chronic cocaine administration, which have been most thoroughly characterized. The potential mechanisms of these effects are described in terms of drug-induced neural alterations in ventral striatal and prefrontal cortical brain systems. Some implications of this research for pharmacological treatment of drug-induced increases in impulsive choice are discussed, along with suggestions for future research in this area.

  6. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s

  7. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...

  8. The role of intention as mediator between latent effects and behavior: application of a hybrid choice model to study departure time choices

    DEFF Research Database (Denmark)

    Thorhauge, Mikkel; Cherchi, Elisabetta; Walker, Joan L.

    2017-01-01

    of them consider the effect of intention and its role as mediator between those psychological effects and the choice, as implied in the Theory of Planned Behavior. In this paper we contribute to the literature in this field by specifically studying the direct effect of the intention on the actual behavior......, while attitude, social norms, and perceived behavioral control affect the intention to behave in a given way. We apply a hybrid choice model to study the departure time choice. For this, we use data from Danish commuters in the morning rush hours in the Greater Copenhagen area. We find a significant...

  9. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter...

  11. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model is used to calculate odds. When the dependent variable has ordered levels, the appropriate logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population based on a sample. The purpose of this research is to estimate the parameters of a GWOLR model using R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  12. Towards more variation in text generation : Developing and evaluating variation models for choice of referential form

    NARCIS (Netherlands)

    Castro Ferreira, Thiago; Krahmer, Emiel; Wubben, Sander

    In this study, we introduce a nondeterministic method for referring expression generation. We describe two models that account for individual variation in the choice of referential form in automatically generated text: a Naive Bayes model and a Recurrent Neural Network. Both are evaluated using the

  13. A discrete-choice model with social interactions : With an application to high school teen behavior

    NARCIS (Netherlands)

    Soetevent, Adriaan R.; Kooreman, Peter

    2007-01-01

    We develop an empirical discrete-choice interaction model with a finite number of agents. We characterize its equilibrium properties, in particular the correspondence between interaction strength, number of agents, and the set of equilibria, and propose to estimate the model by means of simulation

  14. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  15. Perceived and Implicit Ranking of Academic Journals: An Optimization Choice Model

    Science.gov (United States)

    Xie, Frank Tian; Cai, Jane Z.; Pan, Yue

    2012-01-01

    A new system of ranking academic journals is proposed in this study and optimization choice model used to analyze data collected from 346 faculty members in a business discipline. The ranking model uses the aggregation of perceived, implicit sequencing of academic journals by academicians, therefore eliminating several key shortcomings of previous…

  16. A discrete choice model with social interactions; with an application to high school teen behavior

    NARCIS (Netherlands)

    Soetevent, Adriaan R.; Kooreman, Peter

    2004-01-01

    We develop an empirical discrete choice interaction model with a finite number of agents. We characterize its equilibrium properties - in particular the correspondence between the interaction strength, the number of agents, and the set of equilibria - and propose to estimate the model by means of

  17. A discrete choice model with social interactions; with an application to high school teen behavior

    NARCIS (Netherlands)

    Soetevent, A.R.; Kooreman, P.

    2007-01-01

    We develop an empirical discrete-choice interaction model with a finite number of agents. We characterize its equilibrium properties - in particular the correspondence between interaction strength, number of agents, and the set of equilibria - and propose to estimate the model by means of simulation

  18. A discrete choice model with social interactions : an analysis of high school teen behavior

    NARCIS (Netherlands)

    Kooreman, Peter; Soetevent, Adriaan

    2002-01-01

    We develop an empirical discrete choice model that explicitly allows for endogenous social interactions. We analyze the issues of multiple equilibria, statistical coherency, and estimation of the model by means of simulation methods. In an empirical application, we analyze a data set containing

  19. A Decision Model for Steady-State Choice in Concurrent Chains

    Science.gov (United States)

    Christensen, Darren R.; Grace, Randolph C.

    2010-01-01

    Grace and McLean (2006) proposed a decision model for acquisition of choice in concurrent chains which assumes that after reinforcement in a terminal link, subjects make a discrimination whether the preceding reinforcer delay was short or long relative to a criterion. Their model was subsequently extended by Christensen and Grace (2008, 2009a,…

  20. A Stochastic Route Choice Model for Car Travellers in the Copenhagen Region

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr; Daly, A.

    2002-01-01

    The paper presents a large-scale stochastic road traffic assignment model for the Copenhagen Region. The model considers several classes of passenger cars (different trip purposes), vans and trucks, each with its own utility function on which route choices are based. The utility functions include...

  1. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order

    Directory of Open Access Journals (Sweden)

    Adele Diederich

    2014-09-01

    A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered - the attention time - influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time, including deterministic, Poisson, binomial, geometric, and uniform with different variances, are investigated. Depending on the time and order schedule the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability $p_0 > 0$ of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process.
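
    The sketch below simulates the core MAAS mechanism described above: a single Ornstein-Uhlenbeck accumulator (discretized with a simple Euler step rather than the birth-death approximation) whose drift is set by the currently attended attribute, with geometric attention times and a finite decision horizon. All parameter values and the two-attribute setup are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def maas_trial(drifts, theta=1.0, gamma=0.1, sigma=0.3, dt=0.01,
                     p_switch=0.02, horizon=2000):
          """One simulated MAAS-style trial: an Ornstein-Uhlenbeck accumulator
          x <- x + (drift - gamma*x)*dt + sigma*sqrt(dt)*noise, whose drift is the
          value of the currently attended attribute. Attention dwells follow a
          geometric distribution (switch probability p_switch per step).
          Returns (+1/-1, time) on a decision, or (0, horizon*dt) if none is made."""
          x, attended = 0.0, 0
          for k in range(horizon):
              if rng.random() < p_switch:                 # attention switch
                  attended = (attended + 1) % len(drifts)
              x += (drifts[attended] - gamma * x) * dt + sigma * np.sqrt(dt) * rng.normal()
              if abs(x) >= theta:
                  return np.sign(x), (k + 1) * dt
          return 0, horizon * dt                          # finite horizon: no decision

      # two attributes favoring different options (positive -> option A, negative -> B)
      results = [maas_trial(drifts=[0.8, -0.5]) for _ in range(500)]
      choices = np.array([r[0] for r in results])
      print("P(A) =", np.mean(choices == 1), " P(no decision) =", np.mean(choices == 0))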

  2. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data are registered at a weather station located on the Fraternidad Campus at ITM. Wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.

  3. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies with various models, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracy of flood routing across models, and the optimal estimated outflows of the NVPNLMM were closer to the observed outflows than those of the other models.
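
    The code below routes a flood wave with the standard nonlinear Muskingum storage relation S = K[xI + (1-x)O]^m, which is the base on which variable-parameter formulations such as the NVPNLMM build; the variable-parameter extension itself is not reproduced, and the inflow hydrograph and parameter values are illustrative rather than the paper's case-study data.

      import numpy as np

      def nonlinear_muskingum_route(inflow, K, x, m, dt=1.0):
          """Route a flood wave with the nonlinear Muskingum model
          S = K * (x*I + (1-x)*O)**m, advanced with an explicit Euler step of the
          continuity equation dS/dt = I - O. Parameters are held fixed here."""
          outflow = np.zeros_like(inflow, dtype=float)
          # assume an initial steady state, O(0) = I(0), to initialize storage
          S = K * (x * inflow[0] + (1 - x) * inflow[0]) ** m
          for t in range(len(inflow)):
              outflow[t] = max(((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x), 0.0)
              if t < len(inflow) - 1:
                  S += dt * (inflow[t] - outflow[t])      # storage continuity
          return outflow

      # illustrative inflow hydrograph (m^3/s) and plausible parameter values
      inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                         59, 47, 39, 32, 28, 24], dtype=float)
      print(np.round(nonlinear_muskingum_route(inflow, K=1.0, x=0.2, m=1.2), 1))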

  4. An experimental study on cumulative prospect theory learning model of travelers’ dynamic mode choice under uncertainty

    Directory of Open Access Journals (Sweden)

    Chao Yang

    2017-06-01

    In this paper, we examined travelers’ dynamic mode choice behavior under travel time variability. We found travelers’ inconsistent risk attitudes through a binary mode choice experiment. Although the results deviated from the traditional utility maximization theory and could not be explained by the payoff variability effect, they could be well captured in a cumulative prospect theory (CPT) framework. After considering the imperfect memory effect, we found that the prediction ability of the cumulative prospect theory learning (CPTL) model could be significantly improved. The experimental results were also compared with the CPTL model and the reinforcement learning (REL) model. This study empirically showed the potential of alternative theories to better capture travelers’ day-to-day mode choice behavior under uncertainty. A new definition of willingness to pay (WTP) in a CPT framework was provided to explicitly consider travelers’ perceived value increases in travel time.
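
    For concreteness, the sketch below evaluates the standard CPT building blocks referred to above: an S-shaped value function with loss aversion and an inverse-S probability weighting function. The Tversky-Kahneman (1992) parameter values are used as placeholders, the same weighting curve is applied to gains and losses for brevity, and the two-outcome travel-time prospect is an assumed illustration, not the paper's estimated model.

      import numpy as np

      def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
          """CPT value function: concave for gains, convex and loss-averse for losses."""
          return np.where(x >= 0, x ** alpha, -lam * (-x) ** beta)

      def cpt_weight(p, gamma=0.61):
          """Inverse-S-shaped probability weighting function."""
          return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

      def cpt_prospect_value(outcomes, probs):
          """Value of a simple mixed prospect with one gain and one loss branch,
          enough to compare, e.g., a risky travel mode against a reliable one."""
          outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
          return np.sum(cpt_weight(probs) * cpt_value(outcomes))

      # travel-time savings (+) versus delays (-) in minutes, relative to a reference mode
      print(cpt_prospect_value([10.0, -15.0], [0.7, 0.3]))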

  5. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines......-parameter models with respect to the prediction of the maximum response during excitation and the geometrical damping related to free vibrations of a footing....

  6. Estimation of Item Parameters and Latent Classes with the DINA Model for Diagnosing Learning Difficulties

    Directory of Open Access Journals (Sweden)

    - Kusaeri

    2013-07-01

    Abstract: Estimation of Item Parameter and Latent Class with DINA Model to Diagnose Learning Difficulties. This study aims to estimate the item parameters of a diagnostic test developed with the DINA model and to identify the attribute profile of each test participant. The instrument of this study was a diagnostic test using a multiple-choice format with 4 options. The data were analyzed using Mplus software, the R program, and ITEMAN. The results show that out of 8 items measuring social arithmetic and comparison, there was only one item that had low guessing and slip parameters. The study also found that basic operations and concepts in arithmetic, as well as verbal questions, were problematic for most students.
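
    A minimal sketch of the DINA item response rule used above: a respondent answers correctly with probability 1 - slip when all skills required by the item's Q-matrix row are mastered, and with the guessing probability otherwise. The Q-matrix row, skill profiles and slip/guess values below are hypothetical.

      import numpy as np

      def dina_p_correct(alpha, q, slip, guess):
          """DINA item response probability: an examinee with skill profile `alpha`
          answers an item with Q-matrix row `q` correctly with probability
          (1 - slip) if all required skills are mastered, and `guess` otherwise."""
          eta = np.all(alpha >= q, axis=-1).astype(float)   # 1 if all required skills present
          return (1.0 - slip) ** eta * guess ** (1.0 - eta)

      # hypothetical item requiring skills 1 and 3 (of 4), with estimated slip/guess
      q_row = np.array([1, 0, 1, 0])
      profiles = np.array([[1, 0, 1, 0],    # masters both required skills
                           [1, 1, 0, 0],    # lacks skill 3
                           [1, 1, 1, 1]])   # masters everything
      print(dina_p_correct(profiles, q_row, slip=0.10, guess=0.25))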

  7. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based frameset developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density function and are included in the dose optimization process. We found that the final solution strongly depends on distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has potential for us to maximally utilize the available radiobiology knowledge for better IMRT treatment

  8. Discrete choice modeling of environmental security. Research report

    Energy Technology Data Exchange (ETDEWEB)

    Carson, K.S.

    1998-10-01

    The presence of overpopulation or unsustainable population growth may place pressure on the food and water supplies of countries in sensitive areas of the world. Severe air or water pollution may place additional pressure on these resources. These pressures may generate both internal and international conflict in these areas as nations struggle to provide for their citizens. Such conflicts may result in United States intervention, either unilaterally or through the United Nations. Therefore, it is in the interests of the United States to identify potential areas of conflict in order to properly train and allocate forces. The purpose of this research is to forecast the probability of conflict in a nation as a function of its environmental conditions. Probit, logit and ordered probit models are employed to forecast the probability of a given level of conflict. Data from 95 countries are used to estimate the models. Probability forecasts are generated for these 95 nations. Out-of-sample forecasts are generated for an additional 22 nations. These probabilities are then used to rank nations from the highest probability of conflict to the lowest. The results indicate that the dependence of a nation's economy on agriculture, the rate of deforestation, and the population density are important variables in forecasting the probability and level of conflict. These results indicate that environmental variables do play a role in generating or exacerbating conflict. It is unclear that the United States military has any direct role in mitigating the environmental conditions that may generate conflict. A more important role for the military is to aid in data gathering to generate better forecasts so that the troops are adequately prepared when conflict arises.

  9. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  10. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden)]; Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)]

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature. The resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  11. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are yet unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  12. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of struct… response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...

  13. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p …) bias in model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  14. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    Unknown

    parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  15. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    ... of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  16. The endogenous grid method for discrete-continuous dynamic choice models with (or without) taste shocks

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jørgensen, Thomas H.; Rust, John

    2017-01-01

    We present a fast and accurate computational method for solving and estimating a class of dynamic programming models with discrete and continuous choice variables. The solution method we develop for structural estimation extends the endogenous grid-point method (EGM) to discrete-continuous (DC) problems. Discrete choices can lead to kinks in the value functions and discontinuities in the optimal policy rules, greatly complicating the solution of the model. We show how these problems are ameliorated in the presence of additive choice-specific independent and identically distributed extreme value taste shocks that are typically interpreted as “unobserved state variables” in structural econometric applications, or serve as “random noise” to smooth out kinks in the value functions in numerical applications. We present Monte Carlo experiments that demonstrate the reliability and efficiency…

  17. Study on Identification of Material Model Parameters from Compact Tension Test on Concrete Specimens

    Science.gov (United States)

    Hokes, Filip; Kral, Petr; Husek, Martin; Kala, Jiri

    2017-10-01

    Identification of concrete material model parameters using optimization is based on calculating the difference between experimentally measured and numerically obtained data. The measure of the difference can be formulated via the root mean squared error, which is often used to determine the accuracy of a mathematical model in fields such as meteorology or demography. The quality of the identified parameters is, however, determined not only by the right choice of an objective function but also by the source experimental data. One possible option is to use load-displacement curves from three-point bending tests performed on concrete specimens. This option shows the significance of the modulus of elasticity, tensile strength and specific fracture energy. Another possible option is to use experimental data from compact tension tests. It is clear that the response in the second type of test also depends on the above-mentioned material parameters. The question is whether the parameters identified within the three-point bending test and within the compact tension test will reach the same values. The presented article brings a numerical study of inverse identification of material model parameters from experimental data measured during compact tension tests. The article also presents utilization of a modified sensitivity analysis that calculates the sensitivity of the material model parameters for different parts of the loading curve. The main goal of the article is to describe the process of inverse identification of parameters for a plasticity-based material model of concrete and to prepare data for future comparison with the values of the material model parameters identified from a different type of fracture test.

  18. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event similar in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
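
    The sketch below evaluates the ETAS conditional intensity that such forecasts are built on, lambda(t) = mu + sum_i K*exp(alpha*(M_i - m0))/(t - t_i + c)^p; daily updating would amount to re-estimating (mu, K, alpha, c, p) by maximum likelihood on the catalog available each day. The toy catalog and parameter values are only plausible orders of magnitude, not estimates for Tohoku, L'Aquila or Reggio Emilia.

      import numpy as np

      def etas_intensity(t, history_t, history_m, mu, K, alpha, c, p, m0):
          """ETAS conditional intensity at time t (events/day):
          lambda(t) = mu + sum_i K * exp(alpha*(M_i - m0)) / (t - t_i + c)**p,
          summed over earthquakes that occurred before t."""
          past = history_t < t
          trig = K * np.exp(alpha * (history_m[past] - m0)) / (t - history_t[past] + c) ** p
          return mu + trig.sum()

      # toy catalog: occurrence times (days) and magnitudes
      times = np.array([0.0, 0.3, 1.2, 2.5, 6.0])
      mags = np.array([5.9, 4.1, 4.4, 3.8, 4.0])
      for t in [1.0, 3.0, 10.0]:
          lam = etas_intensity(t, times, mags, mu=0.2, K=0.05, alpha=1.6, c=0.01, p=1.1, m0=3.0)
          print(f"lambda({t:>4}) = {lam:.2f} events/day")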

  19. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  20. Passenger route choice model and algorithm in the urban rail transit network

    Directory of Open Access Journals (Sweden)

    Ke Qiao

    2013-03-01

    Purpose: There are several routes between some OD pairs in the urban rail transit network. In order to carry out fare allocation, operators use models to estimate which routes passengers choose, but there are errors between the estimated results and the actual choices. The aim of this study is to analyze passenger route choice behavior in detail, based on passenger classification, and to improve the models so that the results are more in line with the actual situation. Design/methodology/approach: In this paper, passengers were divided into a familiar type and a strange type. First, integrated travel impedance functions were established for the two passenger types; then a multi-route distribution model was used to obtain the initial route assignment results, and a ratio correction method was used to correct the results taking into account transfer times, crowding and demand for seats. Finally, a case study for the Beijing local rail transit network is shown. Findings: The numerical example showed that the passenger classification is logical and the model and algorithm are effective; the final route choice results are more comprehensive and realistic. Originality/value: The paper offers an improved model and algorithm, based on passenger classification, for passenger route choice in the urban rail transit network.
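
    A minimal Python sketch of the kind of logit-type multi-route split on integrated impedance used for the initial assignment step, assuming a standard multinomial-logit form; the dispersion parameter and impedance values are illustrative, and the paper's ratio corrections for transfers, crowding and seats would be applied afterwards.

      import numpy as np

      def multi_route_split(impedances, theta=0.5):
          """Initial multi-route assignment: share of passengers on each route from a
          logit model on the integrated travel impedance (lower impedance -> higher share).
          theta is the dispersion parameter."""
          z = -theta * np.asarray(impedances, dtype=float)
          z -= z.max()                       # numerical stabilisation
          w = np.exp(z)
          return w / w.sum()

      # three candidate routes between one OD pair, impedance in generalized minutes
      print(multi_route_split([42.0, 47.5, 55.0]))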

  1. Stochastic hyperelastic modeling considering dependency of material parameters

    Science.gov (United States)

    Caylak, Ismail; Penner, Eduard; Dridger, Alex; Mahnken, Rolf

    2018-03-01

    This paper investigates the uncertainty of a hyperelastic model by treating random material parameters as stochastic variables. For its stochastic discretization a polynomial chaos expansion (PCE) is used. An important aspect in our work is the consideration of stochastic dependencies in the stochastic modeling of Ogden's material model. To this end, artificial experiments are generated using the auto-regressive moving average process based on real experiments. The parameter identification for all data provides statistics of Ogden's material parameters, which are subsequently used for stochastic modeling. Stochastic dependencies are incorporated into the PCE using a Nataf transformation from dependent distributed random variables to independent standard normal distributed ones. The representative numerical example shows that our proposed method adequately takes into account the stochastic dependencies of Ogden's material parameters.

  2. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  3. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to

  4. A discrete-continuous choice model of climate change impacts on energy

    International Nuclear Information System (INIS)

    Morrison, W.N.; Mendelsohn, R.

    1998-01-01

    This paper estimates a discrete-continuous fuel choice model in order to explore climate impacts on the energy sector. The model is estimated on a national data set of firms and households. The results reveal that actors switch from oil in cold climates to electricity and natural gas in warm climates and that fuel-specific expenditures follow a U-shaped relationship with respect to temperature. The model implies that warming will increase American energy expenditures, reflecting a sizable welfare damage

  5. Input parameters for LEAP and analysis of the Model 22C data base

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, L.; Goldstein, M.

    1981-05-01

    The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.

  6. Partial sum approaches to mathematical parameters of some growth models

    Science.gov (United States)

    Korkmaz, Mehmet

    2016-04-01

    A growth model is fitted by evaluating its mathematical parameters a, b and c. In this study, the method of partial sums was used. To find the mathematical parameters, first three partial sums were used, then four, then five, and finally N partial sums. The purpose of increasing the partial decomposition is to produce a better phase model, one that gives a better expected value by minimizing the error sum of squares over the interval used.

  7. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all....... For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation error is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  8. Luminescence model with quantum impact parameter for low energy ions

    CERN Document Server

    Cruz-Galindo, H S; Martínez-Davalos, A; Belmont-Moreno, E; Galindo, S

    2002-01-01

    We have modified an analytical model of induced light production by energetic ions interacting in scintillating materials. The original model is based on the distribution of energy deposited by secondary electrons produced along the ion's track. The range of scattered electrons, and thus the energy distribution, depends on a classical impact parameter between the electron and the ion's track. The only adjustable parameter of the model is the quenching density ρ_q. The modification presented here consists in proposing a quantum impact parameter that leads to a better fit of the model to the experimental data at low incident ion energies. The light output response of CsI(Tl) detectors to low-energy ions (<3 MeV/A) is fitted with the modified model and compared to the original model.

  9. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  10. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
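
    A minimal Python sketch of the model-averaging step, assuming posterior model probabilities are formed from model likelihoods and prior model probabilities and then used to weight the predictions; function names and numbers are illustrative, not taken from the report.

      import numpy as np

      def bma_weights(log_likelihoods, prior_probs):
          """Posterior model probabilities for model averaging:
          p(M_k | data) is proportional to p(data | M_k) * p(M_k)."""
          lp = np.asarray(log_likelihoods, dtype=float) + np.log(prior_probs)
          lp -= lp.max()                     # avoid underflow when exponentiating
          w = np.exp(lp)
          return w / w.sum()

      def bma_prediction(predictions, weights):
          """Model-averaged prediction and the between-model spread (within-model
          variances would be added to obtain the full model-averaged variance)."""
          preds = np.asarray(predictions, dtype=float)
          mu = np.sum(weights * preds)
          between = np.sum(weights * (preds - mu) ** 2)
          return mu, between

      # three alternative variogram models with illustrative log-likelihoods and equal priors
      w = bma_weights([-120.3, -121.0, -125.8], [1/3, 1/3, 1/3])
      print(w, bma_prediction([2.1, 2.4, 3.0], w))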

  11. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
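
    A small Python sketch of the rank-correlation comparison mentioned above, using synthetic per-subwatershed values for two models; a high Spearman correlation indicates consistent spatial patterns of high and low values even when absolute magnitudes differ. All numbers are illustrative.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      model_a = rng.gamma(2.0, 50.0, size=200)                    # e.g., one ES model's estimates
      model_b = 0.6 * model_a + rng.normal(0, 20.0, size=200)     # a second model, different scale
      rho, pval = spearmanr(model_a, model_b)
      print(f"Spearman rho = {rho:.2f}, p = {pval:.1e}")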

  12. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    Science.gov (United States)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    The transit route choice model is a key technology for public transit system planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertain transit supply and the risk preferences of different traveler types. The method used to calibrate the reference point, a key parameter of the CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. A comparison of the proposed method with the traditional method, based on a transit trip survey from Nanjing City, China, shows that the new method improves the quality of the CPT-based model by simulating travelers' route choice behavior more accurately. The proposed method is of great significance for sound transit planning and management, and to some extent remedies the defect of deriving the reference point solely from qualitative analysis.
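
    For illustration, the Tversky-Kahneman value and probability-weighting functions that a CPT-based route choice model evaluates around the reference point. The simplified (non-cumulative) route "utility" below and all parameter values are our assumptions, not the paper's calibration.

      import numpy as np

      def cpt_value(x, ref, alpha=0.88, beta=0.88, lam=2.25):
          """Prospect-theory value of outcome x relative to the reference point `ref`
          (concave for gains, convex and steeper for losses)."""
          g = np.asarray(x, dtype=float) - ref
          gains = np.clip(g, 0, None) ** alpha
          losses = -lam * np.clip(-g, 0, None) ** beta
          return np.where(g >= 0, gains, losses)

      def cpt_weight(p, gamma=0.61):
          """Inverse-S probability weighting function."""
          return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

      # route travel times of 30 or 45 minutes with probabilities 0.7 / 0.3; negative times
      # so that less travel time is better, with a 40-minute reference point
      outcomes = np.array([-30.0, -45.0])
      probs = np.array([0.7, 0.3])
      value = np.sum(cpt_weight(probs) * cpt_value(outcomes, ref=-40.0))
      print(value)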

  13. Stochastic user equilibrium with equilibrated choice sets: Part I - Model formulations under alternative distributions and restrictions

    DEFF Research Database (Denmark)

    Watling, David Paul; Rasmussen, Thomas Kjær; Prato, Carlo Giacomo

    2015-01-01

    the advantages of the two principles, namely the definition of unused routes in DUE and of mis-perception in SUE, such that the resulting choice sets of used routes are equilibrated. Two model families are formulated to address this issue: the first is a general version of SUE permitting bounded and discrete...... error distributions; the second is a Restricted SUE model with an additional constraint that must be satisfied for unused paths. The overall advantage of these model families consists in their ability to combine the unused routes with the use of random utility models for used routes, without the need...... to pre-specify the choice set. We present model specifications within these families, show illustrative examples, evaluate their relative merits, and identify key directions for further research....

  14. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  15. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
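
    A minimal sketch of Bayesian updating of an expert-based estimate with new microbiological counts, using a conjugate Beta-Binomial pair as a simple stand-in for the updating of a transfer probability in the processing-line model; the prior and the counts are invented for illustration.

      import numpy as np
      from scipy import stats

      def update_beta_prior(alpha0, beta0, successes, trials):
          """Conjugate Bayesian update of a Beta(alpha0, beta0) prior for a transfer
          probability, given `successes` out of `trials` observed in new data."""
          return alpha0 + successes, beta0 + trials - successes

      # expert judgment encoded as Beta(2, 8) (mean 0.2); hypothetical plant data: 14 of 40 carcasses
      a, b = update_beta_prior(2.0, 8.0, successes=14, trials=40)
      post = stats.beta(a, b)
      print(f"posterior mean = {post.mean():.3f}, 95% interval = {post.interval(0.95)}")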

  16. Lumped-Parameter Models for Windturbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars

    The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computational model significantly. This may be obtained by the fitting...... of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  17. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups

    NARCIS (Netherlands)

    Harst, van der E.J.M.; Potting, J.; Kroeze, C.

    2014-01-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as

  18. Generalized behavioral framework for choice models of social influence: Behavioral and data concerns in travel behavior

    NARCIS (Netherlands)

    M. Maness; C. Cirillo; E.R. Dugundji (Elenna)

    2015-01-01

    Over the past two decades, transportation has begun a shift from an individual focus to a social focus. Accordingly, discrete choice models have begun to integrate social context into their framework. Social influence, the process of having one’s behavior be affected by others, has been

  19. Rational inattention to discrete choices: a new foundation for the multinomial logit model

    Czech Academy of Sciences Publication Activity Database

    Matějka, Filip; McKay, A.

    2015-01-01

    Roč. 105, č. 1 (2015), s. 272-298 ISSN 0002-8282 R&D Projects: GA ČR(CZ) GPP402/11/P236 Institutional support: RVO:67985998 Keywords : discrete choice behavior * rational inattention * multinomial logit model Subject RIV: AH - Economics Impact factor: 3.833, year: 2015

  20. Rational inattention to discrete choices: a new foundation for the multinomial logit model

    Czech Academy of Sciences Publication Activity Database

    Matějka, Filip; McKay, A.

    2015-01-01

    Roč. 105, č. 1 (2015), s. 272-298 ISSN 0002-8282 Institutional support: PRVOUK-P23 Keywords : discrete choice behavior * rational inattention * multinomial logit model Subject RIV: AH - Economics Impact factor: 3.833, year: 2015

  1. Worldwide Diversity in Funded Pension Plans : Four Role Models on Choice and Participation

    NARCIS (Netherlands)

    Garcia Huitron, Manuel; Ponds, Eduard

    2015-01-01

    This paper provides an in-depth comparison of funded pension savings plans around the world. The large variety in plan designs is a reflection of historical, cultural and institutional diversity. We postulate a new classification of four role models of funded pension plans, primarily based on choice

  2. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
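
    A minimal, generic Python sketch of joint state/parameter estimation with an extended Kalman filter, obtained by augmenting the state with the unknown parameter. The toy exponential-decay model, noise levels and initial guesses below are our assumptions and are far simpler than the heat-shock and gene-regulation models used in the paper.

      import numpy as np

      def ekf_parameter_estimation(y, dt=0.1, q_x=1e-4, q_k=1e-5, r=0.05**2):
          """Joint state/parameter estimation for dx/dt = -k*x by augmenting the
          state to z = [x, k] and running an extended Kalman filter."""
          z = np.array([y[0], 0.5])           # initial guess: state from data, k = 0.5
          P = np.diag([0.1, 1.0])             # initial covariance
          Q = np.diag([q_x, q_k])             # process noise (small random walk on k)
          H = np.array([[1.0, 0.0]])          # only x is observed
          for obs in y[1:]:
              x, k = z
              # prediction: Euler step of the nonlinear dynamics and its Jacobian F
              z = np.array([x - k * x * dt, k])
              F = np.array([[1.0 - k * dt, -x * dt],
                            [0.0,           1.0]])
              P = F @ P @ F.T + Q
              # update with the scalar measurement
              S = H @ P @ H.T + r
              K = P @ H.T / S
              z = z + (K * (obs - z[0])).ravel()
              P = (np.eye(2) - K @ H) @ P
          return z, P

      # synthetic data: true decay rate 1.3, noisy observations of x(t)
      rng = np.random.default_rng(0)
      t = np.arange(0, 5, 0.1)
      y = 2.0 * np.exp(-1.3 * t) + rng.normal(0, 0.05, t.size)
      z_hat, P_hat = ekf_parameter_estimation(y)
      print("estimated k:", z_hat[1])   # should move toward the true value 1.3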

  3. Assessing the relative importance of parameter and forcing uncertainty and their interactions in conceptual hydrological model simulations

    Science.gov (United States)

    Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.

    2016-11-01

    Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make including model and criteria selection can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
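
    A minimal Python sketch of the variance decomposition idea, assuming the simulation ensemble is arranged as a (forcing member) x (parameter set) matrix and using a fixed-effects two-way ANOVA without replication; the synthetic numbers are illustrative only.

      import numpy as np

      def two_way_anova_decomposition(sims):
          """Split the variability of an ensemble of simulations indexed by
          (forcing realization, parameter set) into forcing, parameter and
          interaction fractions of the total sum of squares."""
          grand = sims.mean()
          forcing_eff = sims.mean(axis=1) - grand          # one value per forcing member
          param_eff = sims.mean(axis=0) - grand            # one value per parameter set
          interaction = sims - grand - forcing_eff[:, None] - param_eff[None, :]
          ss_f = sims.shape[1] * np.sum(forcing_eff ** 2)
          ss_p = sims.shape[0] * np.sum(param_eff ** 2)
          ss_i = np.sum(interaction ** 2)
          total = ss_f + ss_p + ss_i
          return ss_f / total, ss_p / total, ss_i / total

      # 50 stochastic rainfall ensembles x 30 behavioural parameter sets (synthetic numbers)
      rng = np.random.default_rng(0)
      sims = rng.normal(10, 2, (50, 1)) + rng.normal(0, 1, (1, 30)) + rng.normal(0, 0.3, (50, 30))
      print(two_way_anova_decomposition(sims))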

  4. Multimodal route choice models of public transport passengers in the Greater Copenhagen Area

    DEFF Research Database (Denmark)

    Anderson, Marie Karen; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    Understanding route choice behavior is crucial to explain travelers’ preferences and to predict traffic flows under different scenarios. A growing body of literature has concentrated on public transport users without, however, concentrating on multimodal public transport networks because......,641 public transport users in the Greater Copenhagen Area.A two-stage approach consisting of choice set generation and route choice model estimation allowed uncovering the preferences of the users of this multimodal large-scale public transport network. The results illustrate the rates of substitution...... not only of the in-vehicle times for different public transport modes, but also of the other time components (e.g., access, walking, waiting, transfer) composing the door-to-door experience of using a multimodal public transport network, differentiating by trip length and purpose, and accounting...

  5. Specialty choice preference of medical students according to personality traits by Five-Factor Model

    Directory of Open Access Journals (Sweden)

    Oh Young Kwon

    2016-03-01

    Purpose: The purpose of this study was to determine the relationship between personality traits, using the Five-Factor Model, and the characteristics and motivational factors affecting specialty choice in Korean medical students. Methods: A questionnaire survey of Year 4 medical students (n=110) was administered in July 2015. We evaluated the personality traits of Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness by using the Korean version of the Big Five Inventory. Questions about general characteristics, the medical specialties most preferred as a career, and the motivational factors determining specialty choice were included. Data on the five personality traits and the general characteristics and motivational factors affecting specialty choice were analyzed using Student's t-test, the Mann-Whitney test and analysis of variance. Results: Of the 110 eligible medical students, 105 (95.4% response rate) completed the questionnaire. More Agreeableness students preferred clinical medicine to basic medicine (p=0.010) and more Openness students preferred medical departments to others (p=0.031). Personal interest was a significant motivational factor for more Openness students (p=0.003) and Conscientiousness students (p=0.003). Conclusion: Medical students with more Agreeableness were more likely to prefer clinical medicine and those with more Openness preferred medical departments. Personal interest was a significant factor influencing specialty choice in more Openness and Conscientiousness students. These findings may be helpful to medical educators or career counselors in the specialty choice process.

  6. Specialty choice preference of medical students according to personality traits by Five-Factor Model.

    Science.gov (United States)

    Kwon, Oh Young; Park, So Youn

    2016-03-01

    The purpose of this study was to determine the relationship between personality traits, using the Five-Factor Model, and the characteristics and motivational factors affecting specialty choice in Korean medical students. A questionnaire survey of Year 4 medical students (n=110) was administered in July 2015. We evaluated the personality traits of Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness by using the Korean version of the Big Five Inventory. Questions about general characteristics, the medical specialties most preferred as a career, and the motivational factors determining specialty choice were included. Data on the five personality traits and the general characteristics and motivational factors affecting specialty choice were analyzed using Student's t-test, the Mann-Whitney test and analysis of variance. Of the 110 eligible medical students, 105 (95.4% response rate) completed the questionnaire. More Agreeableness students preferred clinical medicine to basic medicine (p=0.010) and more Openness students preferred medical departments to others (p=0.031). Personal interest was a significant motivational factor for more Openness students (p=0.003) and Conscientiousness students (p=0.003). Medical students with more Agreeableness were more likely to prefer clinical medicine and those with more Openness preferred medical departments. Personal interest was a significant factor influencing specialty choice in more Openness and Conscientiousness students. These findings may be helpful to medical educators or career counselors in the specialty choice process.

  7. Development of new model for high explosives detonation parameters calculation

    Directory of Open Access Journals (Sweden)

    Jeremić Radun

    2012-01-01

    A simple semi-empirical model for calculating the detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of the detonation parameters. The model uses Avakyan's method to determine the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical detonation model based on the BKW equation of state, the values calculated by the proposed model are significantly more accurate.

  8. Belief in the "free choice" model of homosexuality: a correlate of homophobia in registered nurses.

    Science.gov (United States)

    Blackwell, Christopher W

    2007-01-01

    A great amount of social science research has supported the positive correlation between heterosexuals' belief in the free choice model of homosexuality and homophobia. Heterosexuals who believe gay, lesbian, bisexual, and transgender (GLBT) persons consciously choose their sexual orientation and practice a lifestyle conducive to that choice are much more likely to possess discriminatory, homophobic, homonegative, and heterosexist beliefs. In addition, these individuals are less likely to support gay rights initiatives such as nondiscrimination policies or same-sex partner benefits in the workplace or hate crime enhancement legislation inclusive of GLBT persons. Although researchers have demonstrated this phenomenon in the general population, none have specifically assessed it in the nursing workforce. The purpose of this study was to examine registered nurses' overall levels of homophobia and attitudes toward a workplace policy protective of gays and lesbians. These variables were then correlated with belief in the free choice model of homosexuality. Results indicated that belief in the free choice model of homosexuality was the strongest predictor of homophobia in nurses. Implications for nursing leadership and management, nursing education, and future research are discussed.

  9. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  10. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  11. Passenger survey on public transport in Zhitomir and evaluation of the main technical and operational parameters for the choice of city buses

    Directory of Open Access Journals (Sweden)

    Rudzynskyi V.V.

    2016-08-01

    The parameters of passenger movements on public transport routes in Zhitomir are determined, and the conformity of the technical and operational parameters of the urban shuttle buses is assessed. Firstly, the volume of passenger traffic affects the optimal choice of passenger vehicles; secondly, so does the intensity of road traffic on the streets of the areas through which the passenger routes pass. It should also be kept in mind that passenger traffic can fluctuate significantly depending on the time of day and the day of the week, so rolling stock can be replaced within the day by vehicles of larger passenger capacity, and vice versa. Therefore, when a single type of rolling stock is chosen, its capacity is set taking into account data on the hourly passenger volume on the most heavily loaded section of the route during the "peak" hour, or on its capacity per day on the route as a whole. A survey of route-based passenger transport and public electric transport in Zhitomir was carried out, and the primary data were evaluated to select the main criteria for urban passenger buses. It was found that during "peak" hours the buses operate overloaded with passengers. Preliminary conclusions and recommendations on the criteria for the optimal choice of rolling stock for the city bus route network are provided.

  12. Parameter uncertainty analysis of a biokinetic model of caesium

    International Nuclear Information System (INIS)

    Li, W.B.; Oeh, U.; Klein, W.; Blanchardon, E.; Puncher, M.; Leggett, R.W.; Breustedt, B.; Nosske, D.; Lopez, M.A.

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of the model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs, defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. At late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential parameters for the blood clearance and whole-body retention of Cs. For urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty factor of 43 in whole-body retention at later times, say after Day 500, on the estimated equivalent and effective doses will be explored in subsequent work within the framework of EURADOS. (authors)
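
    For reference, the uncertainty factor defined in this record computed from a Monte Carlo sample of model predictions; the lognormal sample below is purely illustrative.

      import numpy as np

      def uncertainty_factor(samples):
          """Uncertainty factor as defined above: the square root of the ratio
          between the 97.5th and 2.5th percentiles of the predicted quantity."""
          hi, lo = np.percentile(samples, [97.5, 2.5])
          return np.sqrt(hi / lo)

      # e.g., Monte Carlo predictions of whole-body retention at a given time after intake
      rng = np.random.default_rng(0)
      retention = rng.lognormal(mean=-1.0, sigma=0.3, size=10_000)
      print(f"UF = {uncertainty_factor(retention):.2f}")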

  13. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  14. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  15. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  16. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  17. Sensor placement for calibration of spatially varying model parameters

    Science.gov (United States)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of the spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly, which requires repeated calibration of the spatially varying parameter and significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
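
    A compact Python sketch of the combinatorial search, using simulated annealing to pick sensor locations that maximize a simple log-determinant (D-optimality) proxy for information gain rather than the paper's expected K-L divergence; the kernel, length scale and all numbers are illustrative assumptions.

      import numpy as np

      def rbf_kernel(X, Y, length=0.2, var=1.0):
          """Squared-exponential covariance between two sets of locations."""
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return var * np.exp(-0.5 * d2 / length**2)

      def design_score(idx, K):
          """Log-determinant of the prior covariance of the selected sensors
          (a D-optimality proxy for expected information gain)."""
          sub = K[np.ix_(idx, idx)]
          sign, logdet = np.linalg.slogdet(sub + 1e-6 * np.eye(len(idx)))
          return logdet

      def anneal_sensors(candidates, k, n_iter=2000, t0=1.0, seed=0):
          rng = np.random.default_rng(seed)
          K = rbf_kernel(candidates, candidates)
          idx = list(rng.choice(len(candidates), size=k, replace=False))
          best = cur = design_score(idx, K)
          best_idx = idx[:]
          for it in range(n_iter):
              temp = t0 * (1.0 - it / n_iter) + 1e-3
              # propose swapping one selected sensor for an unselected candidate
              new_idx = idx[:]
              out = rng.integers(k)
              pool = [j for j in range(len(candidates)) if j not in idx]
              new_idx[out] = int(rng.choice(pool))
              new = design_score(new_idx, K)
              if new > cur or rng.random() < np.exp((new - cur) / temp):
                  idx, cur = new_idx, new
                  if cur > best:
                      best, best_idx = cur, idx[:]
          return best_idx, best

      # 100 candidate locations on a unit square, choose 5 sensor positions
      cands = np.random.default_rng(1).random((100, 2))
      sel, score = anneal_sensors(cands, k=5)
      print(sel, score)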

  18. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  19. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of

  20. Simulations of a epidemic model with parameters variation analysis for the dengue fever

    Science.gov (United States)

    Jardim, C. L. T. F.; Prates, D. B.; Silva, J. M.; Ferreira, L. A. F.; Kritz, M. V.

    2015-09-01

    Mathematical models for describing and analyzing epidemics can be widely found in the literature. Models that use differential equations to represent such descriptions mathematically are especially sensitive to the parameters involved in the modelling. In this work, an already developed model, called SIR, is analyzed when applied to the scenario of a dengue fever epidemic. This choice is motivated by the useful tools offered by a variation of the original model, which allow the inclusion of different aspects of dengue fever, such as its seasonal characteristics, the presence of more than one strain of the vector and the biological factor of cross-immunity. The analysis and interpretation of results are performed through numerical solutions of the model in question, and special attention is given to the different solutions generated by using different values for the parameters present in the model. Slight variations are performed, either dynamically or statically, in those parameters, mimicking hypothesized changes in the biological scenario of the simulation and providing a way to evaluate how those changes would affect the outcomes of the epidemic in a population.
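
    A minimal sketch of the kind of parameter-variation experiment described above, using the classic SIR equations with SciPy; the parameter values, the static sweep over the transmission rate and the seasonal forcing function are illustrative assumptions, not the variants studied in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sir_rhs(t, y, beta, gamma):
          """Classic SIR right-hand side with transmission rate beta and recovery rate gamma."""
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      def run_sir(beta, gamma, y0=(0.999, 0.001, 0.0), t_end=120.0):
          return solve_ivp(sir_rhs, (0.0, t_end), y0, args=(beta, gamma),
                           dense_output=True, rtol=1e-8)

      # static parameter variation: sweep the transmission rate and report the epidemic peak
      for beta in (0.25, 0.35, 0.50):
          sol = run_sir(beta, gamma=0.1)
          t = np.linspace(0, 120, 600)
          i = sol.sol(t)[1]
          print(f"beta={beta:.2f}  peak infected fraction={i.max():.3f}  at day {t[i.argmax()]:.0f}")

      # "dynamic" variation: a seasonally forced transmission rate
      def sir_seasonal(t, y, beta0, gamma):
          beta = beta0 * (1 + 0.3 * np.cos(2 * np.pi * t / 365.0))
          return sir_rhs(t, y, beta, gamma)

      sol = solve_ivp(sir_seasonal, (0.0, 365.0), (0.999, 0.001, 0.0), args=(0.35, 0.1), rtol=1e-8)
      print(sol.y[:, -1])   # final (S, I, R) fractions after one year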

  1. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others do not provide ease of use. A user-interactive parameter estimation tool was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively little computing time and few iterations.
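
    A minimal sketch of the integration-based least-squares idea, written in Python rather than MATLAB (PARES itself is not reproduced here): the ODE model is integrated inside the residual function and the kinetic parameters are adjusted by a least-squares solver. The two-step A -> B -> C scheme and all numbers are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def model_rhs(t, y, k1, k2):
          """Simple two-step kinetic scheme A -> B -> C used as a stand-in dynamic model."""
          a, b = y
          return [-k1 * a, k1 * a - k2 * b]

      def residuals(params, t_obs, b_obs):
          """Integrate the model for the current parameters and return the data misfit."""
          k1, k2 = params
          sol = solve_ivp(model_rhs, (0, t_obs[-1]), [1.0, 0.0], args=(k1, k2),
                          t_eval=t_obs, rtol=1e-8)
          return sol.y[1] - b_obs

      # synthetic measurements of the intermediate B with noise
      rng = np.random.default_rng(0)
      t_obs = np.linspace(0.5, 10.0, 20)
      true = solve_ivp(model_rhs, (0, 10.0), [1.0, 0.0], args=(0.8, 0.3), t_eval=t_obs, rtol=1e-10)
      b_obs = true.y[1] + rng.normal(0, 0.01, t_obs.size)

      fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_obs, b_obs), bounds=(0, 5))
      print("estimated k1, k2:", fit.x)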

  2. Improving the realism of hydrologic model through multivariate parameter estimation

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the inclusion or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10

  3. Ground level enhancement (GLE) energy spectrum parameters model

    Science.gov (United States)

    Qin, G.; Wu, S.

    2017-12-01

    We study the ground level enhancement (GLE) events of solar cycle 23 using the four energy spectrum parameters, the normalization parameter C, the low-energy power-law slope γ1, the high-energy power-law slope γ2, and the break energy E0, obtained by Mewaldt et al. (2012), who fitted the observations to a double power-law equation. We divide the GLEs into two groups according to the conditions of the solar eruptions: one with strong acceleration by interplanetary (IP) shocks and one without. We then fit the four parameters against the solar event conditions to obtain models of the parameters for the two groups of GLEs separately. In this way we aim to establish a model of the GLE energy spectrum for future space weather prediction.
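
    The double power-law form referred to here is commonly written as a low-energy power law with an exponential rollover joined continuously to a high-energy power law. The snippet below evaluates one such (Band-type) form; the exact conventions of Mewaldt et al. (2012) may differ, and the parameter values are placeholders.

      # Band-type double power law: C * E**(-g1) * exp(-E/E0) below the break,
      # matched continuously to the high-energy branch at E_b = (g2 - g1) * E0.
      import numpy as np

      def double_power_law(E, C, g1, g2, E0):
          E = np.asarray(E, dtype=float)
          Eb = (g2 - g1) * E0
          low = C * E ** (-g1) * np.exp(-E / E0)
          high = C * E ** (-g2) * Eb ** (g2 - g1) * np.exp(-(g2 - g1))
          return np.where(E < Eb, low, high)

      E = np.logspace(0.0, 3.0, 50)                         # MeV, illustrative grid
      spectrum = double_power_law(E, C=1e6, g1=1.2, g2=3.5, E0=30.0)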

  4. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al2O3) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  5. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  6. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
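
    For reference, one common single diode formulation makes the module current implicit, so each point of the I-V curve requires a one-dimensional root solve. The sketch below shows that mechanic in Python; the parameter values are illustrative and not fitted to any real module, and this is not the estimation method of the report itself.

      # Single diode model: I = IL - I0*(exp((V + I*Rs)/(n*Ns*Vth)) - 1) - (V + I*Rs)/Rsh.
      import numpy as np
      from scipy.optimize import brentq

      k_B, q = 1.380649e-23, 1.602176634e-19

      def iv_current(V, IL, I0, Rs, Rsh, n, Ns, T=298.15):
          a = n * Ns * k_B * T / q                      # modified ideality factor (volts)
          f = lambda I: IL - I0 * (np.exp((V + I * Rs) / a) - 1.0) - (V + I * Rs) / Rsh - I
          return brentq(f, -100.0, IL + 10.0)           # f is monotone in I, so a single root

      V_grid = np.linspace(0.0, 38.0, 50)
      I_grid = [iv_current(V, IL=5.1, I0=1e-9, Rs=0.35, Rsh=300.0, n=1.3, Ns=60) for V in V_grid]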

  7. Modeling Chinese ionospheric layer parameters based on EOF analysis

    Science.gov (United States)

    Yu, You; Wan, Weixing

    2016-04-01

    Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors individually reflect the spatial distributions (e.g., the latitudinal dependence such as the equatorial ionization anomaly structure and the longitude structure with east-west difference) and temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term observations of ionosondes are assimilated to obtain the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07 model) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.
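
    The core EOF step described here can be reproduced with a singular value decomposition: the left singular vectors give the spatial patterns and the scaled right singular vectors give their time-varying amplitudes. The snippet below uses a random placeholder field in place of the IRI-07 background maps and ionosonde data.

      # EOF decomposition of a space-time field via SVD.
      import numpy as np

      rng = np.random.default_rng(1)
      n_space, n_time = 200, 365
      field = rng.normal(size=(n_space, n_time))            # stand-in for gridded foF2 maps

      anom = field - field.mean(axis=1, keepdims=True)      # remove temporal mean per grid point
      U, s, Vt = np.linalg.svd(anom, full_matrices=False)
      patterns = U[:, :4]                                   # leading EOF spatial patterns
      amplitudes = np.diag(s[:4]) @ Vt[:4]                  # corresponding EOF amplitudes in time
      explained = s[:4] ** 2 / np.sum(s ** 2)               # variance fraction per mode

    In an assimilative scheme of the kind described, observed amplitudes would then be obtained by projecting the station observations onto the fixed patterns.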

  8. How urban environment affects travel behavior? Integrated Choice and Latent Variable Model for Travel Schedules

    DEFF Research Database (Denmark)

    La Paix, Lissy; Bierlaire, Michel; Cherchi, Elisabetta

    2013-01-01

    The relationship between urban environment and travel behaviour is not a new problem. Neighbourhood characteristics may affect the mobility of dwellers in different ways, such as frequency of trips, mode used, structure of the tours, and so on. At the same time, qualitative issues related to the individual attitude towards specific behaviour have recently become important in transport modelling, contributing to a better understanding of travel demand. Following this research line, in this paper we study the effect of neighbourhood characteristics on the choice of the type of tours performed, but we assume that neighbourhood characteristics can also affect the individual propensity to travel and hence the choice of tours through the propensity to travel. Since the propensity to travel is not observed, we employ hybrid choice models to estimate jointly the discrete choice of tours...

  9. Parameters and variables appearing in repository design models

    International Nuclear Information System (INIS)

    Curtis, R.H.; Wart, R.J.

    1983-12-01

    This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables

  10. A lumped parameter, low dimension model of heat exchanger

    International Nuclear Information System (INIS)

    Kanoh, Hideaki; Furushoo, Junji; Masubuchi, Masami

    1980-01-01

    This paper reports on the results of an investigation of the distributed parameter model, the difference model, and the method-of-weighted-residuals model for heat exchangers. By the method of weighted residuals (MWR), the counter-flow heat exchanger system is approximated by a low-dimension, lumped parameter model. By assuming constant specific heat, constant density, the same form of tube cross-section, the same form of heat exchange surface, uniform flow velocity, a linear relation of heat transfer to flow velocity, a liquid heat carrier, and thermal insulation of the liquid from the outside, the fundamental equations are obtained. The experimental apparatus was made of acrylic resin. The response of the temperature at the exit of the first liquid to a variation in the flow rate of the second liquid was measured and compared with the models. The MWR model shows good approximation in the low frequency region, and as the number of divisions increases, the good approximation extends to higher frequencies. (Kato, T.)

  11. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
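
    A compact way to reproduce the spirit of this example is a brute-force grid posterior over the two parameters. The sketch below assumes a linear drag law (dv/dt = g - k v), a flat prior and a Gaussian noise level; these choices are assumptions for illustration and need not match the paper.

      # Grid-based Bayesian inversion for gravity g and a linear friction coefficient k.
      import numpy as np

      def distance(t, g, k):
          # fallen distance under dv/dt = g - k*v, starting from rest
          return (g / k) * t - (g / k**2) * (1.0 - np.exp(-k * t))

      t = np.linspace(0.1, 1.2, 12)                          # frame times (s), illustrative
      rng = np.random.default_rng(2)
      d_obs = distance(t, 9.81, 0.3) + rng.normal(0.0, 0.02, size=t.size)  # synthetic "video" data

      g_grid = np.linspace(8.5, 11.0, 251)
      k_grid = np.linspace(0.01, 1.0, 200)
      G, K = np.meshgrid(g_grid, k_grid, indexing="ij")
      sigma = 0.02                                           # assumed measurement noise (m)

      # flat prior; log-likelihood from independent Gaussian errors at each frame
      loglik = np.zeros_like(G)
      for ti, di in zip(t, d_obs):
          loglik += -0.5 * ((distance(ti, G, K) - di) / sigma) ** 2
      post = np.exp(loglik - loglik.max())
      post /= post.sum()

      g_mean = np.sum(post * G)                              # posterior mean of g
      k_mean = np.sum(post * K)                              # posterior mean of k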

  12. Control of the SCOLE configuration using distributed parameter models

    Science.gov (United States)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained and no model reduction is required before the controller is realized. One specific LQG controller for continuum models had been derived, but other optimal controllers for more general performances need to be studied.

  13. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    Directory of Open Access Journals (Sweden)

    Christian Klaes

    Full Text Available According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action

  14. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    Science.gov (United States)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
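
    The Fisher-information reasoning sketched in this record can be expressed generically: given a sensitivity matrix of model outputs with respect to parameters and a measurement error covariance, the information matrix yields an approximate parameter covariance, and candidate extra measurements can be scored by how much they increase its determinant (D-optimality). The matrices below are random placeholders, not the biogeochemical model's actual sensitivities.

      # Parameter uncertainty from the Fisher information matrix, plus a D-optimality
      # score for a candidate set of additional measurements.
      import numpy as np

      rng = np.random.default_rng(3)
      n_obs, n_par = 40, 3
      S = rng.normal(size=(n_obs, n_par))            # d(model output)/d(parameter) at the optimum
      C = np.diag(rng.uniform(0.5, 2.0, n_obs))      # measurement error covariance (diagonal here)

      F = S.T @ np.linalg.inv(C) @ S                 # Fisher information matrix
      cov_theta = np.linalg.inv(F)                   # approximate parameter covariance
      std_theta = np.sqrt(np.diag(cov_theta))

      # candidate additional measurements: keep the set that most increases det(F)
      S_new = rng.normal(size=(5, n_par))
      C_new = np.eye(5)
      F_aug = F + S_new.T @ np.linalg.inv(C_new) @ S_new
      gain = np.linalg.slogdet(F_aug)[1] - np.linalg.slogdet(F)[1]   # log-determinant improvement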

  15. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter

    Science.gov (United States)

    Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.

    2014-07-01

    The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase of the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw improves clearly the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
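
    The parameter calibration described here can be sketched as a state-augmentation ensemble Kalman filter update: each member carries its own parameter vector, the model maps parameters to predicted observations, and the Kalman gain built from ensemble covariances pulls the parameters towards the data. The toy linear "model" and dimensions below are placeholders, not NEMO-LIM3.

      # Schematic EnKF analysis step for parameter estimation by state augmentation.
      import numpy as np

      rng = np.random.default_rng(4)
      n_ens, n_par, n_obs = 50, 3, 10
      H_toy = rng.normal(size=(n_obs, n_par))            # placeholder "model": params -> drift obs

      theta = rng.normal(loc=1.0, scale=0.3, size=(n_par, n_ens))   # prior parameter ensemble
      y_pred = H_toy @ theta                                        # predicted observations per member

      R = 0.1 * np.eye(n_obs)                                       # observation error covariance
      y_obs = H_toy @ np.array([1.2, 0.8, 1.1]) + rng.multivariate_normal(np.zeros(n_obs), R)

      # ensemble anomalies and Kalman gain K = C_ty (C_yy + R)^-1
      A_t = theta - theta.mean(axis=1, keepdims=True)
      A_y = y_pred - y_pred.mean(axis=1, keepdims=True)
      C_ty = A_t @ A_y.T / (n_ens - 1)
      C_yy = A_y @ A_y.T / (n_ens - 1)
      K = C_ty @ np.linalg.inv(C_yy + R)

      # perturbed-observation analysis step
      y_perturbed = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
      theta_post = theta + K @ (y_perturbed - y_pred)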

  16. Variation in LCA results for disposable polystyrene beverage cups due to multiple data sets and modelling choices

    NARCIS (Netherlands)

    Harst, van der E.J.M.; Potting, J.

    2014-01-01

    Life Cycle Assessments (LCAs) of the same products often result in different, sometimes even contradictory outcomes. Reasons for these differences include using different data sets and deviating modelling choices. This paper purposely used different data sets and modelling choices to identify how

  17. Modeling Mode Choice Behavior Incorporating Household and Individual Sociodemographics and Travel Attributes Based on Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Long Cheng

    2014-01-01

    Full Text Available Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem reflected from the explanatory variables of determining the choices between alternatives. The paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing county which contains information on both household and individual travel behaviors for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements or rules which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also proves that the rough sets model gives superior prediction accuracy and coverage on travel mode choice modeling.

  18. Test policy optimization for a complex system: an application for the differential model for equivalent parameters (DMEP)

    International Nuclear Information System (INIS)

    Vasseur, D.; Eid, M.

    1996-01-01

    One of EDF's current priorities is the optimisation of preventive maintenance in all French nuclear power stations. This optimisation involves a rationalization of the choice of the equipment to be maintained and the maintenance tasks to be carried out, as well as a judicious choice of intervals between these tasks. This work is being carried out in cooperation between EDF and the CEA (Atomic Energy Commission), and suggests a procedure to provide assistance in optimising intervals between maintenance tasks while respecting a global unavailability target. This work is based on the differential model for equivalent parameters (DMEP). (authors)

  19. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the process with respect to the microwave power level shows that the parameters ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity to a ±20% change in value, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.

  20. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the process with respect to the microwave power level shows that the parameters ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity to a ±20% change in value, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
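
    The one-at-a-time ±20% procedure used in these two records can be illustrated with a few lines of Python. The parameter list and the placeholder response function below are assumptions standing in for the full COMSOL drying model.

      # One-at-a-time sensitivity check: perturb each parameter by ±20%, rerun the model,
      # and compare a summary output against the reference run.
      import numpy as np

      base = {"ambient_temperature": 300.0, "gas_diffusivity": 2.6e-5,
              "evaporation_rate_const": 1e-3, "heat_transfer_coeff": 25.0}

      def drying_time(p):
          # placeholder response; a real study would run the full drying simulation here
          return (1e3 * p["evaporation_rate_const"] ** -0.5
                  / (p["ambient_temperature"] / 300.0)
                  / (1.0 + 1e4 * p["gas_diffusivity"])
                  / (1.0 + 0.01 * p["heat_transfer_coeff"]))

      reference = drying_time(base)
      for name in base:
          for factor in (0.8, 1.2):
              perturbed = dict(base, **{name: base[name] * factor})
              change = (drying_time(perturbed) - reference) / reference
              print(f"{name} x{factor:.1f}: {100 * change:+.1f}% change in drying time")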

  1. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date database of parameters for the models of generating units, including the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  2. End use technology choice in the National Energy Modeling System (NEMS): An analysis of the residential and commercial building sectors

    International Nuclear Information System (INIS)

    Wilkerson, Jordan T.; Cullenward, Danny; Davidian, Danielle; Weyant, John P.

    2013-01-01

    The National Energy Modeling System (NEMS) is arguably the most influential energy model in the United States. The U.S. Energy Information Administration uses NEMS to generate the federal government's annual long-term forecast of national energy consumption and to evaluate prospective federal energy policies. NEMS is considered such a standard tool that other models are calibrated to its forecasts, in both government and academic practice. As a result, NEMS has a significant influence over expert opinions of plausible energy futures. NEMS is a massively detailed model whose inner workings, despite its prominence, receive relatively scant critical attention. This paper analyzes how NEMS projects energy demand in the residential and commercial sectors. In particular, we focus on the role of consumers' preferences and financial constraints, investigating how consumers choose appliances and other end-use technologies. We identify conceptual issues in the approach the model takes to the same question across both sectors. Running the model with a range of consumer preferences, we estimate the extent to which this issue impacts projected consumption relative to the baseline model forecast for final energy demand in the year 2035. In the residential sector, the impact ranges from a decrease of 0.73 quads (− 6.0%) to an increase of 0.24 quads (+ 2.0%). In the commercial sector, the impact ranges from a decrease of 1.0 quads (− 9.0%) to an increase of 0.99 quads (+ 9.0%). - Highlights: • This paper examines the impact of consumer preferences on final energy in the Commercial and Residential sectors of the National Energy Modeling System (NEMS). • We describe the conceptual and empirical basis for modeling consumer technology choice in NEMS. • We offer a range of alternative parameters to show the energy demand sensitivity to technology choice. • We show there are significant potential savings available in both building sectors. • Because the model uses its own

  3. Assessment of Lumped-Parameter Models for Rigid Footings

    DEFF Research Database (Denmark)

    Andersen, Lars

    2010-01-01

    The quality of consistent lumped-parameter models of rigid footings is examined. Emphasis is put on the maximum response during excitation and the geometrical damping related to free vibrations. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translations as well as torsion and rocking, and the necessity of coupling between horizontal sliding and rocking is discussed. Most of the analyses are carried out for hexagonal footings; but in order to generalise the conclusions to a broader variety of footings, comparisons are made with the response of circular and square foundations.

  4. Consequences of gas flux model choice on the interpretation of metabolic balance across 15 lakes

    Science.gov (United States)

    Dugan, Hilary; Woolway, R. Iestyn; Santoso, Arianto; Corman, Jessica; Jaimes, Aline; Nodine, Emily; Patil, Vijay; Zwart, Jacob A.; Brentrup, Jennifer A.; Hetherington, Amy; Oliver, Samantha K.; Read, Jordan S.; Winters, Kirsten; Hanson, Paul; Read, Emily; Winslow, Luke; Weathers, Kathleen

    2016-01-01

    Ecosystem metabolism and the contribution of carbon dioxide from lakes to the atmosphere can be estimated from free-water gas measurements through the use of mass balance models, which rely on a gas transfer coefficient (k) to model gas exchange with the atmosphere. Theoretical and empirically based models of k range in complexity from wind-driven power functions to complex surface renewal models; however, model choice is rarely considered in most studies of lake metabolism. This study used high-frequency data from 15 lakes provided by the Global Lake Ecological Observatory Network (GLEON) to study how model choice of k influenced estimates of lake metabolism and gas exchange with the atmosphere. We tested 6 models of k on lakes chosen to span broad gradients in surface area and trophic states; a metabolism model was then fit to all 6 outputs of k data. We found that hourly values for k were substantially different between models and, at an annual scale, resulted in significantly different estimates of lake metabolism and gas exchange with the atmosphere.
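
    How the choice of a k model propagates into the flux estimate can be shown schematically with the bulk flux relation F = k (C_water - C_eq). The two wind-based k600 parameterisations below use made-up coefficients purely to illustrate the mechanics; they are not among the six models tested in the study.

      # Flux sensitivity to the gas transfer model: F = k * (C_water - C_eq).
      import numpy as np

      def k600_power_law(u10, a=2.0, b=0.2, c=1.7):
          return a + b * u10 ** c                       # cm/h, hypothetical coefficients

      def k600_linear(u10, a=1.0, b=1.5):
          return a + b * u10                            # cm/h, hypothetical coefficients

      u10 = 4.0                                         # wind speed at 10 m (m/s)
      c_water, c_eq = 30.0, 20.0                        # dissolved CO2 vs. equilibrium (mmol/m3)

      for model in (k600_power_law, k600_linear):
          k = model(u10) / 100.0 / 3600.0               # cm/h -> m/s
          flux = k * (c_water - c_eq)                   # mmol m-2 s-1
          print(model.__name__, flux)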

  5. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete database for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
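
    As a minimal illustration of nonlinear parameter estimation in this setting, the snippet below fits a first-order degradation curve to invented residue data; the data, the decay law and the resulting half-life are purely illustrative.

      # Nonlinear fit of a first-order pesticide degradation model C(t) = C0 * exp(-k*t).
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, c0, k):
          return c0 * np.exp(-k * t)

      t = np.array([0, 3, 7, 14, 28, 56], dtype=float)            # days after application
      c = np.array([10.2, 8.1, 6.3, 4.0, 1.7, 0.4])               # residue concentration (mg/kg)

      popt, pcov = curve_fit(decay, t, c, p0=[10.0, 0.05])
      c0_hat, k_hat = popt
      half_life = np.log(2.0) / k_hat                              # DT50 implied by the fit
      perr = np.sqrt(np.diag(pcov))                                # standard errors of c0 and k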

  6. Investigating Spatial Interdependence in E-Bike Choice Using Spatially Autoregressive Model

    Directory of Open Access Journals (Sweden)

    Chengcheng Xu

    2017-08-01

    Full Text Available Increased attention has been given to promoting e-bike usage in recent years. However, a research gap still exists in understanding the effects of spatial interdependence on e-bike choice. This study investigated how spatial interdependence affects e-bike choice. Moran's I statistic test showed that spatial interdependence exists in e-bike choice at the aggregate level. Bayesian spatial autoregressive logistic analyses were then used to investigate the spatial interdependence at the individual level. Separate models were developed for commuting and non-commuting trips. The factors affecting e-bike choice differ between commuting and non-commuting trips. Spatial interdependence exists at both the origin and destination sides of commuting and non-commuting trips. Travellers are more likely to choose e-bikes if their neighbours at the trip origin and destination also travel by e-bike, and the magnitude of this spatial interdependence differs across traffic analysis zones. The results suggest that, without considering spatial interdependence, traditional methods may produce biased estimation results and systematic forecasting errors.
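
    The global Moran's I statistic mentioned here is straightforward to compute for zone-level data given a spatial weight matrix; the snippet below uses a synthetic indicator and a random, row-standardised neighbour structure as placeholders.

      # Global Moran's I for a zone-level e-bike choice indicator.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 30
      x = rng.binomial(1, 0.4, size=n).astype(float)        # e-bike choice indicator per zone

      W = rng.binomial(1, 0.15, size=(n, n)).astype(float)  # random contiguity structure (illustrative)
      np.fill_diagonal(W, 0.0)
      W = np.maximum(W, W.T)                                # make the neighbour relation symmetric
      W = W / np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-standardise

      z = x - x.mean()
      moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)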

  7. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  8. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  9. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of the widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of more reliable and new experimental data are adopted, which include the average level spacing D, the radiative capture width Γγ0 at the neutron binding energy, and the cumulative level number N0 at low excitation energy. They were published between 1973 and 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fitting, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are better suited to fitting the new measurements.

  10. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups.

    Science.gov (United States)

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2014-10-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as waste processing options, and for the PLA and biopaper cup also composting and anaerobic digestion. Multiple data sets and modelling choices were systematically used to calculate average results and the spread in results for each disposable cup in eleven impact categories. The LCA results of all combinations of data sets and modelling choices consistently identify three processes that dominate the environmental impact: (1) production of the cup's basic material (PS, PLA, biopaper), (2) cup manufacturing, and (3) waste processing. The large spread in results for impact categories strongly overlaps among the cups, however, and therefore does not allow a preference for one type of cup material. Comparison of the individual waste treatment options suggests some cautious preferences. The average waste treatment results indicate that recycling is the preferred option for PLA cups, followed by anaerobic digestion and incineration. Recycling is slightly preferred over incineration for the biopaper cups. There is no preferred waste treatment option for the PS cups. Taking into account the spread in waste treatment results for all cups, however, none of these preferences for waste processing options can be justified. The only exception is composting, which is least preferred for both PLA and biopaper cups. Our study illustrates that using multiple data sets and modelling choices can lead to considerable spread in LCA results. This makes comparing products more complex, but the outcomes more robust. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Identifiability and error minimization of receptor model parameters with PET

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs

  12. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied at separate ports, and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path ... PA for simulations. The simulated error vector magnitude (EVM) and adjacent channel power ratio (ACPR) were compared with the measured data to validate the model. The maximum differences between the simulated and measured EVM and ACPR are less than 2% point and 3 dB, respectively.

  13. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for an in-flight manipulator control system, operating in almost zero gravity and a hostile space environment, are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for obtaining the initial conditions and a general nonlinear optimization method (MCS, the multilevel coordinate search algorithm) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.

  14. Parameter Identification of Ship Maneuvering Models Using Recursive Least Square Method Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Man Zhu

    2017-03-01

    Full Text Available Determination of ship maneuvering models is a tough task of ship maneuverability prediction. Among several prime approaches to estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as the recursive least square method (RLS), are exerted for on-line identification of ship maneuvering models. However, this method depends seriously on the objects of study and the initial values of the identified parameters. To overcome this, an intelligent technique, i.e., support vector machines (SVM), is first used to estimate initial values of the identified parameters with finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. Wavelet methods and empirical mode decomposition (EMD) are used to filter the data corrupted by noise, respectively. The choice of the sample number for SVM to decide initial values of the identified parameters is extensively discussed and analyzed. With de-noised motion data as input-output training samples, the parameters of the ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between the identification results and the true values of the parameters demonstrates that the identified ship maneuvering models from both RLS and SVM-RLS are in reasonable agreement with the simulated motions of the ship, and that increasing the sample size for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and best convergence.
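
    The recursive least squares core of this approach can be written in a few lines. A generic regression form y_t = phi_t·theta + noise is assumed below, with the regressors standing in for whatever terms enter the ship maneuvering model, and the zero initial estimate is where an SVM-based initial guess would be substituted.

      # Plain recursive least squares (RLS) with an optional forgetting factor.
      import numpy as np

      def rls_update(theta, P, phi, y, lam=1.0):
          # one RLS step: gain, parameter update, covariance update
          phi = phi.reshape(-1, 1)
          K = P @ phi / (lam + phi.T @ P @ phi)
          theta = theta + (K * (y - phi.T @ theta)).ravel()
          P = (P - K @ phi.T @ P) / lam
          return theta, P

      rng = np.random.default_rng(6)
      true_theta = np.array([0.8, -0.3, 0.1])
      theta = np.zeros(3)                                # an SVM estimate could set this start value
      P = 1e3 * np.eye(3)
      for _ in range(500):
          phi = rng.normal(size=3)
          y = phi @ true_theta + 0.01 * rng.normal()
          theta, P = rls_update(theta, P, phi, y)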

  15. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters of the CKLS model are regarded as stochastic. The parameter vector φ^(j) of the four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability to cover the observed future interest rates than those based on the model with fixed parameters.
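
    For context, the CKLS dynamics take the form dr = (alpha + beta r) dt + sigma r^gamma dW and can be simulated with a simple Euler-Maruyama scheme as below. The parameter values are illustrative; in the approach described here they would be re-estimated window by window rather than held fixed.

      # Euler-Maruyama simulation of the CKLS short-rate model.
      import numpy as np

      def simulate_ckls(r0, alpha, beta, sigma, gamma, dt=1.0 / 252.0, n_steps=252, seed=0):
          rng = np.random.default_rng(seed)
          r = np.empty(n_steps + 1)
          r[0] = r0
          for i in range(n_steps):
              drift = (alpha + beta * r[i]) * dt
              diffusion = sigma * max(r[i], 0.0) ** gamma * np.sqrt(dt) * rng.normal()
              r[i + 1] = max(r[i] + drift + diffusion, 1e-8)   # keep the rate positive
          return r

      path = simulate_ckls(r0=0.03, alpha=0.01, beta=-0.2, sigma=0.1, gamma=1.5)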

  16. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive over successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides efficiently estimating the parameter values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  17. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    Science.gov (United States)

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  18. An improved cognitive model of the Iowa and Soochow Gambling Tasks with regard to model fitting performance and tests of parameter consistency.

    Science.gov (United States)

    Dai, Junyi; Kerestes, Rebecca; Upton, Daniel J; Busemeyer, Jerome R; Stout, Julie C

    2015-01-01

    The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning (EVL) model and the prospect valence learning (PVL) model, have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.

  19. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi eDai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
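
    One plausible reading of the winning combination, prospect-type utility with gains and losses evaluated separately, decay-reinforcement updating, and a trial-independent (softmax) choice rule, is sketched below. The functional forms follow common conventions in this literature, but the specific parameterisation and the toy payoff schedules are assumptions, not the authors' exact model.

      # Sketch of a PVL-style simulation: separate gain/loss utility, decay reinforcement,
      # and a softmax choice rule with a constant (trial-independent) sensitivity.
      import numpy as np

      def utility(gain, loss, alpha, lam):
          # gains and losses enter separately rather than as a net payoff
          return gain ** alpha - lam * abs(loss) ** alpha

      def simulate_task(payoffs, alpha=0.5, lam=1.5, decay=0.8, theta=1.0, seed=0):
          rng = np.random.default_rng(seed)
          n_decks = len(payoffs)
          E = np.zeros(n_decks)                         # expectancy per deck
          choices = []
          for _ in range(100):
              p = np.exp(theta * E - np.max(theta * E))
              p /= p.sum()
              deck = rng.choice(n_decks, p=p)           # probabilistic (softmax) choice
              gain, loss = payoffs[deck](rng)
              E = decay * E                             # all expectancies decay every trial
              E[deck] += utility(gain, loss, alpha, lam)
              choices.append(deck)
          return choices

      # toy payoff generators: (gain, loss) per draw, loosely IGT-like and purely illustrative
      payoffs = [lambda r: (100.0, 250.0 if r.random() < 0.5 else 0.0),
                 lambda r: (100.0, 1250.0 if r.random() < 0.1 else 0.0),
                 lambda r: (50.0, 50.0 if r.random() < 0.5 else 0.0),
                 lambda r: (50.0, 250.0 if r.random() < 0.1 else 0.0)]
      choices = simulate_task(payoffs)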

  20. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  1. Investigation of land use effects on Nash model parameters

    Science.gov (United States)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    Flood forecasting is of great importance in hydrologic planning, hydraulic structure design, water resources management and sustainable designs like flood control and management. Nash's instantaneous unit hydrograph is frequently used for simulating hydrological response in natural watersheds. Urban hydrology is gaining more attention due to population increases and associated construction escalation. Rapid development of urban areas affects the hydrologic processes of watersheds by decreasing soil permeability, flood base flow and lag time, and increasing flood volume, peak runoff rates and flood frequency. In this study the influence of urbanization on the significant parameters of the Nash model has been investigated. These parameters were calculated using three popular methods (i.e. moment, root mean square error and random sampling data generation), in a small watershed consisting of one natural sub-watershed which drains into a residentially developed sub-watershed in the city of Sierra Vista, Arizona. The results indicated that for all three methods, the lag time, which is the product of the Nash parameters "K" and "n", is greater in the natural sub-watershed than in the developed one. This logically implies more storage and/or attenuation in the natural sub-watershed. The median K and n parameters derived from the three methods using calibration events were tested via a set of verification events. The results indicated that all three methods have acceptable accuracy in hydrograph simulation. The CDF curves and histograms of the parameters clearly show the difference in the Nash parameter values between the natural and developed sub-watersheds. Some specific upper and lower percentile values of the median of the generated parameters (i.e. 10, 20 and 30 %) were analyzed to further investigate the derived parameters. The model was sensitive to variations in the values of the uncertain K and n parameters. Changes in n are smaller than K in both sub-watersheds indicating
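
    For reference, Nash's instantaneous unit hydrograph for a cascade of n linear reservoirs with storage coefficient K has a gamma-density form, so the lag time discussed above is the product n·K. A minimal sketch (parameter values are illustrative, not the study's estimates):

        import numpy as np
        from math import gamma

        def nash_iuh(t, n, K):
            # Nash IUH: u(t) = (t/K)**(n-1) * exp(-t/K) / (K * gamma(n))
            t = np.asarray(t, dtype=float)
            return (t / K) ** (n - 1) * np.exp(-t / K) / (K * gamma(n))

        # Illustrative parameter values only; lag time = n * K.
        t = np.linspace(0, 20, 200)                # hours
        u_natural   = nash_iuh(t, n=3.0, K=2.0)    # longer lag, more attenuation
        u_developed = nash_iuh(t, n=2.0, K=1.0)    # shorter lag, sharper peak
        print("lag time (natural, developed):", 3.0 * 2.0, 2.0 * 1.0)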

  2. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  3. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    In the development of thermodynamic databases for multicomponent systems using the cluster expansion–cluster variation methods, we need to have a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of ...

  4. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  5. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss...

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Constraint on Parameters of Inverse Compton Scattering Model for ...

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2011) 32, 299–300. © Indian Academy of Sciences. Constraint on Parameters of Inverse Compton Scattering Model for PSR B2319+60. H. G. Wang & M. Lv, Center for Astrophysics, Guangzhou University, Guangzhou, China. E-mail: cosmic008@yahoo.com.cn. Abstract: Using the multifrequency radio ...

  8. Death Valley regional groundwater flow model calibration using optimal parameter estimation methods and geoscientific information systems

    Science.gov (United States)

    D'Agnese, F. A.; Faunt, C.C.; Hill, M.C.; Turner, A.K.

    1996-01-01

    A three-layer Death Valley regional groundwater flow model was constructed to evaluate potential regional groundwater flow paths in the vicinity of Yucca Mountain, Nevada. Geoscientific information systems were used to characterize the complex surface and subsurface hydrogeological conditions of the area, and this characterization was used to construct likely conceptual models of the flow system. The high contrasts and abrupt contacts of the different hydrogeological units in the subsurface make zonation the logical choice for representing the hydraulic conductivity distribution. Hydraulic head and spring flow data were used to test different conceptual models by using nonlinear regression to determine parameter values that currently provide the best match between the measured and simulated heads and flows.

  9. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  10. Integrating microbial diversity in soil carbon dynamic models parameters

    Science.gov (United States)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed during the last century. These models are mainly deterministic compartment models with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods during the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamics models. Some interesting attempts can be found, dominated by the incorporation of several compartments for different groups of microbial biomass, in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic models in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity in a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results coming from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue has been incorporated into soils with different pedological characteristics and land use history. Then, the soils have been incubated during 104 days and labelled and non-labelled CO2 fluxes have been measured at ten

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  12. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  13. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the experimental system and its corresponding receiver. Examples are presented by employing a new parametrically excited 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique can be favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation
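
    The general structure behind such synchronization-based estimators can be written, for a scalar state and a single unknown parameter multiplying a known function g, roughly as follows (a textbook-style sketch, not the authors' exact formulation):

        \dot{x} = f(x,t) + \theta\, g(x,t)                       (drive: experimental system)
        \dot{y} = f(y,t) + \hat{\theta}\, g(y,t) + k\,(x - y)    (receiver with linear feedback coupling)
        \dot{\hat{\theta}} = \gamma\,(x - y)\, g(y,t)            (adaptive update law)

    With the Lyapunov candidate V = \tfrac{1}{2}(x-y)^2 + \tfrac{1}{2\gamma}(\theta - \hat{\theta})^2 the parameter-error cross terms cancel and \dot{V} \le 0 for a sufficiently large coupling gain k, so y \to x and, under a persistence-of-excitation condition, \hat{\theta} \to \theta.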

  14. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  15. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  16. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth’s upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of the ionospheric parameters ever more crucial. In recent decades space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure for computation of a global model of electron density. In order to model the ionosphere in 3D, the electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions with degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are detected and used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure and, moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/Cosmic data.
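
    The expansion described above has the generic surface spherical harmonic form; for NmF2, for example (notation is illustrative, an analogous expansion applies to hmF2):

        \mathrm{NmF2}(\varphi, \lambda) = \sum_{n=0}^{15} \sum_{m=0}^{n} \tilde{P}_{nm}(\sin\varphi)\,\left[ a_{nm}\cos(m\lambda) + b_{nm}\sin(m\lambda) \right]

    where \tilde{P}_{nm} are normalized associated Legendre functions and the coefficients a_{nm}, b_{nm} are the unknowns estimated by least squares from the (simulated) observations.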

  17. Mass balance model parameter transferability on a tropical glacier

    Science.gov (United States)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers is of particular interest in the Peruvian Andes where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, short wave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08 while stake readings are available at intervals of between 14 and 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760m) and in the upper ablation zone (SH22, 4816m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
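
    A minimal sketch of the random-search calibration strategy described above, in which parameter sets are drawn uniformly from physically meaningful ranges and scored against the stake observations. The model call, parameter names, and ranges below are placeholders, not the actual mass-balance model or its parameter set:

        import numpy as np

        def calibrate_random_search(run_model, param_ranges, observed, n_samples=1000, seed=0):
            # Draw parameter sets uniformly within their ranges and keep the one with
            # the lowest RMSE against the observed surface-height change.
            rng = np.random.default_rng(seed)
            best_params, best_rmse = None, np.inf
            for _ in range(n_samples):
                params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
                simulated = run_model(params)                  # placeholder model call
                rmse = np.sqrt(np.mean((simulated - observed) ** 2))
                if rmse < best_rmse:
                    best_params, best_rmse = params, rmse
            return best_params, best_rmse

        # Hypothetical parameter ranges (names are placeholders, not the study's actual set).
        ranges = {"albedo_ice": (0.2, 0.45), "albedo_snow": (0.7, 0.9), "roughness_mm": (0.1, 5.0)}
        observed = np.array([-1.2, -0.8, -1.5])                # dummy stake readings (m)
        dummy_model = lambda p: np.array([-1.0, -0.9, -1.4]) * p["albedo_ice"] / 0.3
        print(calibrate_random_search(dummy_model, ranges, observed))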

  18. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  19. Honoring Choices Minnesota: preliminary data from a community-wide advance care planning model.

    Science.gov (United States)

    Wilson, Kent S; Kottke, Thomas E; Schettle, Sue

    2014-12-01

    Advance care planning (ACP) increases the likelihood that individuals who are dying receive the care that they prefer. It also reduces depression and anxiety in family members and increases family satisfaction with the process of care. Honoring Choices Minnesota is an ACP program based on the Respecting Choices model of La Crosse, Wisconsin. The objective of this report is to describe the process, which began in 2008, of implementing Honoring Choices Minnesota in a large, diverse metropolitan area. All eight large healthcare systems in the metropolitan area agreed to participate in the project, and as of April 30, 2013, the proportion of hospitalized individuals 65 and older with advance care directives in the electronic medical record ranged from 12.1% to 65.6% across systems. The proportion of outpatients aged 65 and older ranged from 11.6% to 31.7%. Organizations that had sponsored recruitment initiatives had the highest proportions of records containing healthcare directives. It was concluded that it is possible to reduce redundancy by recruiting all healthcare systems in a metropolitan area to endorse the same ACP model, although significantly increasing the proportion of individuals with a healthcare directive in their medical record requires a campaign with recruitment of organizations and individuals. © 2014 The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.

  20. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences of the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values for RADTRAN Stop Parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure sensitivity of the RADTRAN calculated Stop Dose to the uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops

  1. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr, and the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of electrons are obtained by solving the radial Schrödinger equation with the new form of potential function using Numerov's numerical method. The results show that the new form of potential function is suitable for high, medium and low Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and accuracy of the present potential function. (atomic and molecular physics)
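
    For reference, the Numerov recurrence commonly used to integrate the radial Schrödinger equation u''(r) + g(r)\,u(r) = 0 on a uniform grid of step h is (standard form, independent of the particular potential function):

        u_{i+1} = \frac{2\left(1 - \tfrac{5h^2}{12} g_i\right) u_i \;-\; \left(1 + \tfrac{h^2}{12} g_{i-1}\right) u_{i-1}}{1 + \tfrac{h^2}{12} g_{i+1}},
        \qquad g(r) = \frac{2m}{\hbar^2}\left[E - V(r)\right] - \frac{l(l+1)}{r^2},

    with the energy eigenvalues E then located, for example, by a shooting or bisection search on the boundary condition u(r \to \infty) \to 0.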

  2. Improving the transferability of hydrological model parameters under changing conditions

    Science.gov (United States)

    Huang, Yingchun; Bárdossy, András

    2014-05-01

    Hydrological models are widely utilized to describe catchment behavior with observed hydro-meteorological data. Hydrological processes may be considered non-stationary under changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on stationarity assumptions are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments in non-stationary conditions, we investigate the idea of simultaneous calibration on streamflow records from periods with dissimilar climate characteristics. In addition, a weather-based weighting function is implemented to adjust the calibration period to future trends. For regions with limited data and for ungauged basins, common calibration was applied by using information from similar catchments. Results show that model performance and parameter transferability could be improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.

  3. PRO-ECOLOGICAL ACTIONS AND CONSUMER CHOICES IN THE MODEL OF RESPONSIBLE BUSINESS

    Directory of Open Access Journals (Sweden)

    Katarzyna Olejniczak

    2015-09-01

    Full Text Available The current farming conditions mean that social and environmental aspects of management play an important role in the functioning of modern enterprises. This results from the fact that, on the one hand, the activities of modern enterprises are determined by the increasing complexity of their surroundings, while on the other hand the growing demands of various groups of stakeholders mean that a company’s success is built not only on a quest to maximize profit, but primarily on taking responsibility for the consequences of its actions. Additionally, the growing awareness of consumers makes more and more enterprises implement the concept of corporate social responsibility (CSR) in their actions. For this reason, it is important to discuss the actions and choices of consumers in the CSR model. The aim of this article is to present the results of research on customers’ environmentally conscious activities and choices.

  4. Application of a New Hybrid Fuzzy AHP Model to the Location Choice

    Directory of Open Access Journals (Sweden)

    Chien-Chang Chou

    2013-01-01

    Full Text Available The purpose of this paper is to propose a new hybrid fuzzy Analytic Hierarchy Process (AHP) algorithm to deal with decision-making problems in an uncertain, multiple-criteria environment. In this study, the proposed hybrid fuzzy AHP model is applied to the location choices of international distribution centers in international ports from the viewpoint of multinational corporations. The results show that the proposed new hybrid fuzzy AHP model is an appropriate tool for solving decision-making problems in an uncertain, multiple-criteria environment.
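
    For background, the crisp AHP core that the fuzzy variant generalizes derives priorities from the principal eigenvector of a pairwise comparison matrix, together with a consistency check. A small sketch with made-up judgments (not the paper's fuzzy formulation or its data):

        import numpy as np

        def ahp_priorities(pairwise):
            # Principal-eigenvector priorities and consistency index for a crisp AHP
            # pairwise comparison matrix; the fuzzy AHP generalizes this step.
            A = np.asarray(pairwise, dtype=float)
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)   # consistency index
            return w, ci

        # Illustrative judgments for three candidate port locations on one criterion.
        A = [[1,   3,   5],
             [1/3, 1,   2],
             [1/5, 1/2, 1]]
        weights, ci = ahp_priorities(A)
        print(weights, ci)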

  5. Models of care choices in today's nursing workplace: where does team nursing sit?

    Science.gov (United States)

    Fairbrother, Greg; Chiarella, Mary; Braithwaite, Jeffrey

    2015-11-01

    This paper provides an overview of the developmental history of models of care (MOC) in nursing since Florence Nightingale introduced nurse training programs in a drive to make nursing a discipline-based career option. The four principal choices of models of nursing care delivery (primary nursing, individual patient allocation, team nursing and functional nursing) are outlined and discussed, and recent MOC literature reviewed. The paper suggests that, given the ways work is being rapidly reconfigured in healthcare services and the pressures on the nursing workforce projected into the future, team nursing seems to offer the best solutions.

  6. Local structural properties and attribute characteristics in 2-mode networks: p* models to map choices of theater events

    NARCIS (Netherlands)

    Agneessens, F.; Roose, H.

    2008-01-01

    Choices of plays made by theatergoers can be considered as a 2-mode or affiliation network. In this article we illustrate how p* models (an exponential family of distributions for random graphs) can be used to uncover patterns of choices. Based on audience research in three theater institutions in

  7. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

    The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary, firstly, to define quality attributes and, secondly, to determine a customer's priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and incentives for reporting them, the regulator can save a lot of time by surveying them rather than their customers. This study applies a choice modelling methodology where utilities and industrial customers are asked to evaluate the same twelve choice situations in which price and four specific quality attributes are varied. The preferences expressed by the utilities, and estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous in relation to forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  8. Robust linear parameter varying induction motor control with polytopic models

    Directory of Open Access Journals (Sweden)

    Dalila Khamari

    2013-01-01

    Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To do so, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. The synthesis of the LPV feedback controller for the inner loop takes rotor resistance and mechanical speed into account as varying parameters. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above regulator. The induction motor is described as a polytopic model because of its affine dependence on speed and rotor resistance, whose values can be estimated online during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, with robust stability and high performance achieved over the entire operating range of the induction motor.
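
    The polytopic LPV description referred to above takes the generic form below, where the scheduling vector \rho collects rotor resistance and mechanical speed (a sketch of the standard formulation, not the paper's specific matrices):

        \dot{x} = A(\rho)\,x + B(\rho)\,u, \qquad A(\rho) = \sum_{i=1}^{N} \alpha_i(\rho)\, A_i, \qquad \sum_{i=1}^{N} \alpha_i(\rho) = 1, \ \ \alpha_i(\rho) \ge 0.

    Quadratic stability of a given state-feedback gain K can then be certified by a finite set of vertex LMIs, (A_i + B_i K)^{\mathsf T} P + P\,(A_i + B_i K) \prec 0 with a common P \succ 0, which is the kind of condition solved in the LMI-based synthesis.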

  9. Model parameter learning using Kullback-Leibler divergence

    Science.gov (United States)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
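
    For a fully observed Boltzmann distribution, minimizing the KL divergence from the empirical distribution to the model is equivalent to maximum likelihood, and the gradient reduces to the difference between data and model correlations. A small exact-enumeration sketch for a 1D Ising chain (synthetic data; not the paper's lattice sizes or estimators):

        import itertools
        import numpy as np

        def model_corr(J, n):
            # Exact nearest-neighbour correlations <s_i s_{i+1}> under p(s) ~ exp(sum_i J_i s_i s_{i+1}),
            # by brute-force enumeration (feasible only for small chains).
            states = np.array(list(itertools.product([-1, 1], repeat=n)))
            pairs = states[:, :-1] * states[:, 1:]
            logw = pairs @ J
            p = np.exp(logw - logw.max())
            p /= p.sum()
            return p @ pairs

        def fit_couplings(samples, lr=0.5, steps=500):
            # Gradient ascent on the log-likelihood; the gradient <s_i s_{i+1}>_data - <s_i s_{i+1}>_model
            # is also (minus) the gradient of the KL divergence minimized in the paper.
            n = samples.shape[1]
            data_corr = (samples[:, :-1] * samples[:, 1:]).mean(axis=0)
            J = np.zeros(n - 1)
            for _ in range(steps):
                J += lr * (data_corr - model_corr(J, n))
            return J

        # Synthetic data: sample a 5-spin chain with known couplings, then try to recover them.
        rng = np.random.default_rng(0)
        true_J = np.array([0.8, -0.4, 0.2, 0.6])
        states = np.array(list(itertools.product([-1, 1], repeat=5)))
        w = np.exp((states[:, :-1] * states[:, 1:]) @ true_J)
        w /= w.sum()
        samples = states[rng.choice(len(states), size=20000, p=w)]
        print(fit_couplings(samples))   # should land close to true_J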

  10. Biosphere modelling for a HLW repository - scenario and parameter variations

    International Nuclear Information System (INIS)

    Grogan, H.

    1985-03-01

    In Switzerland high-level radioactive wastes have been considered for disposal in deep-lying crystalline formations. The individual doses to man resulting from radionuclides entering the biosphere via groundwater transport are calculated. The main recipient area modelled, which constitutes the base case, is a broad gravel terrace sited along the south bank of the river Rhine. An alternative recipient region, a small valley with a well, is also modelled. A number of parameter variations are performed in order to ascertain their impact on the doses. Finally, two scenario changes are modelled somewhat simplistically; these consider different prevailing climates, namely tundra and a climate warmer than the present one. In the base case, negligibly low long-term doses to man resulting from the existence of a HLW repository have been calculated. Cs-135 results in the largest dose (8.4E-7 mrem/y at 6.1E+6 y) while Np-237 gives the largest dose from the actinides (3.6E-8 mrem/y). The response of the model to parameter variations cannot be easily predicted due to non-linear coupling of many of the parameters. However, the calculated doses were negligibly low in all cases as were those resulting from the two scenario variations. (author)

  11. Thermal Model Parameter Identification of a Lithium Battery

    Directory of Open Access Journals (Sweden)

    Dirk Nissing

    2017-01-01

    Full Text Available The temperature of a Lithium battery cell is important for its performance, efficiency, safety, and capacity and is influenced by the environmental temperature and by the charging and discharging process itself. Battery Management Systems (BMS) take this effect into account. As the temperature at the battery cell is difficult to measure, the temperature is often measured on or near the poles of the cell, although the accuracy of predicting the cell temperature from those quantities is limited. Therefore a thermal model of the battery is used in order to calculate and estimate the cell temperature. This paper uses a simple RC-network representation for the thermal model and shows how the thermal parameters are identified using input/output measurements only, where the load current of the battery represents the input while the temperatures at the poles represent the outputs of the measurement. With a single measurement the eight model parameters (thermal resistances, electric contact resistances, and heat capacities) can be determined using the method of least squares. Experimental results show that the simple model with the identified parameters fits the measurements very accurately.
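
    A minimal sketch of the identification idea: simulate a small RC thermal network driven by ohmic losses from the load current, then recover its parameters by nonlinear least squares from the pole temperature. The two-node structure and all values below are illustrative, not the paper's eight-parameter network or its measurements:

        import numpy as np
        from scipy.optimize import least_squares

        def simulate(params, current, dt=1.0, t_amb=25.0):
            # Two-node RC network: cell and pole temperatures driven by I^2*R losses.
            r_cell, r_cp, r_pa, c_cell, c_pole = params
            T_cell, T_pole = t_amb, t_amb
            out = []
            for i in current:
                q = i ** 2 * r_cell                                     # heat generated in the cell
                dT_cell = (q - (T_cell - T_pole) / r_cp) / c_cell
                dT_pole = ((T_cell - T_pole) / r_cp - (T_pole - t_amb) / r_pa) / c_pole
                T_cell += dt * dT_cell
                T_pole += dt * dT_pole
                out.append(T_pole)
            return np.array(out)

        # Synthetic "measurement" from known parameters, then least-squares identification.
        true = [0.02, 2.0, 4.0, 300.0, 50.0]
        current = 20.0 * (np.sin(np.linspace(0, 20, 2000)) > 0)         # pulsed load current (A)
        measured = simulate(true, current) + np.random.default_rng(0).normal(0, 0.05, 2000)
        fit = least_squares(lambda p: simulate(p, current) - measured,
                            x0=[0.01, 1.0, 2.0, 200.0, 30.0], bounds=(1e-4, 1e4))
        print(fit.x)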

  12. A Day-to-Day Route Choice Model Based on Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Fangfang Wei

    2014-01-01

    Full Text Available Day-to-day traffic dynamics are generated by individual travelers' route choice and route adjustment behaviors, which are appropriate to research using agent-based models and learning theory. In this paper, we propose a day-to-day route choice model based on reinforcement learning and multiagent simulation. Travelers' memory, learning rate, and experience cognition are taken into account. The model is then verified and analyzed. Results show that the network flow can converge to user equilibrium (UE) if travelers can remember all the travel times they have experienced, but this is not necessarily the case under limited memory; the learning rate can strengthen flow fluctuation, whereas memory has the opposite effect; moreover, a high learning rate results in cyclical oscillation during the process of flow evolution. Finally, the scenarios of link capacity degradation and random link capacity are both used to illustrate the model's applications. Analyses and applications of our model demonstrate that the model is reasonable and useful for studying day-to-day traffic dynamics.
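
    A minimal sketch of a day-to-day route choice loop of this kind: agents pick one of two routes by a logit rule on remembered travel times and update that memory with a learning rate. The network, cost function, and update rule below are simplified placeholders, not the paper's exact model:

        import numpy as np

        def day_to_day(n_travelers=1000, days=200, learning_rate=0.3, theta=0.1, seed=0):
            # Two parallel routes with BPR-style congestion; agents choose by a logit rule
            # on remembered travel times and update the memory with experienced times.
            rng = np.random.default_rng(seed)
            t0 = np.array([20.0, 25.0])          # free-flow travel times (min)
            cap = np.array([600.0, 800.0])       # route capacities (veh/day)
            remembered = t0.copy()               # initial perceived travel times
            flows = []
            for _ in range(days):
                p = np.exp(-theta * remembered)
                p /= p.sum()
                choice = rng.choice(2, size=n_travelers, p=p)
                flow = np.bincount(choice, minlength=2)
                t = t0 * (1 + 0.15 * (flow / cap) ** 4)          # experienced travel times
                # Learning-rate-weighted update of the remembered (expected) travel times.
                remembered = (1 - learning_rate) * remembered + learning_rate * t
                flows.append(flow)
            return np.array(flows)

        print(day_to_day()[-5:])   # daily flows settle near a stochastic user equilibrium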

  13. Sensitivity of simulated regional Arctic climate to the choice of coupled model domain

    Directory of Open Access Journals (Sweden)

    Dmitry V. Sein

    2014-07-01

    Full Text Available The climate over the Arctic has undergone changes in recent decades. In order to evaluate the coupled response of the Arctic system to external and internal forcing, our study focuses on the estimation of regional climate variability and its dependence on large-scale atmospheric and regional ocean circulations. A global ocean–sea ice model with regionally high horizontal resolution is coupled to an atmospheric regional model and global terrestrial hydrology model. This way of coupling divides the global ocean model setup into two different domains: one coupled, where the ocean and the atmosphere are interacting, and one uncoupled, where the ocean model is driven by prescribed atmospheric forcing and runs in a so-called stand-alone mode. Therefore, selecting a specific area for the regional atmosphere implies that the ocean–atmosphere system can develop ‘freely’ in that area, whereas for the rest of the global ocean, the circulation is driven by prescribed atmospheric forcing without any feedbacks. Five different coupled setups are chosen for ensemble simulations. The choice of the coupled domains was made to estimate the influences of the Subtropical Atlantic, Eurasian and North Pacific regions on northern North Atlantic and Arctic climate. Our simulations show that the regional coupled ocean–atmosphere model is sensitive to the choice of the modelled area. The different model configurations reproduce both the mean climate and its variability differently. Only two out of five model setups were able to reproduce the Arctic climate as observed under recent climate conditions (ERA-40 Reanalysis). Evidence is found that the main source of uncertainty for Arctic climate variability and its predictability is the North Pacific. The prescription of North Pacific conditions in the regional model leads to significant correlation with observations, even if the whole North Atlantic is within the coupled model domain. However, the inclusion of the

  14. Model of parameters controlling resistance of pipeline steels to hydrogen-induced cracking

    KAUST Repository

    Traidia, Abderrazak

    2014-01-01

    The NACE MR0175/ISO 15156-2 standard provides test conditions and acceptance criteria to evaluate the resistance of carbon and low-alloy steels to hydrogen-induced cracking (HIC). The second option proposed by this standard offers large flexibility in the choice of test parameters (pH, H2S partial pressure, and test duration), with zero tolerance to HIC initiation as an acceptance condition. The present modeling work is a contribution to a better understanding of how the test parameters and inclusion size can influence HIC initiation, and is therefore of potential interest for both steel makers and end users. A model able to link the test operating parameters (pH, partial pressure of H2S, and temperature) to the maximum hydrogen pressure generated in the microstructural defects is proposed. The model results are then used to back calculate the minimum fracture toughness below which HIC extends. A minimum fracture toughness of 400 MPa√mm, at the segregation zone, prevents HIC occurrence and leads to successfully passing the HIC qualification test, even under extreme test conditions. The computed results show that the maximum generated pressure can reach up to 1,500 MPa. The results emphasize that the H2S partial pressure and test temperature can both have a strong influence on the final test results, whereas the influence of the pH of the test solution is less significant. © 2014, NACE International.

  15. Contaminant transport in aquifers: improving the determination of model parameters

    International Nuclear Information System (INIS)

    Sabino, C.V.S.; Moreira, R.M.; Lula, Z.L.; Chausson, Y.; Magalhaes, W.F.; Vianna, M.N.

    1998-01-01

    Parameters conditioning the migration behavior of cesium and mercury are measured with their tracers 137Cs and 203Hg in the laboratory, using both batch and column experiments. Batch tests were used to define the sorption isotherm characteristics. Also investigated were the influences of some test parameters, in particular those due to the volume of water to mass of soil ratio (V/m). A provisional relationship between V/m and the distribution coefficient, Kd, has been advanced, and a procedure to estimate Kd values valid for environmental values of the ratio V/m has been suggested. Column tests provided the parameters for a transport model. A major problem to be dealt with in such tests is the collimation of the radioactivity probe. Besides mechanically optimizing the collimator, a deconvolution procedure has been suggested and tested, with statistical criteria, to filter off both noise and spurious tracer signals. Correction procedures for the integrating effect introduced by sampling at the exit of columns have also been developed. These techniques may be helpful in increasing the accuracy required in the measurement of parameters conditioning contaminant migration in soils, thus allowing more reliable predictions based on mathematical model applications. (author)
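
    In a standard batch sorption test the distribution coefficient follows directly from the drop in solution concentration, which is where the V/m ratio enters. A small sketch of that calculation (values are illustrative):

        def batch_kd(c0, c_eq, volume_ml, mass_g):
            # Distribution coefficient Kd = ((C0 - Ceq) / Ceq) * (V / m)  [mL/g],
            # i.e. activity sorbed per gram of soil divided by activity per mL of solution.
            return (c0 - c_eq) / c_eq * (volume_ml / mass_g)

        # Same soil measured at two water-to-soil ratios (numbers purely illustrative).
        print(batch_kd(c0=1000.0, c_eq=200.0, volume_ml=50.0, mass_g=5.0))    # V/m = 10 mL/g
        print(batch_kd(c0=1000.0, c_eq=350.0, volume_ml=200.0, mass_g=5.0))   # V/m = 40 mL/g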

  16. Housing land transaction data and structural econometric estimation of preference parameters for urban economic simulation models.

    Science.gov (United States)

    Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles

    2015-12-01

    This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade off accessibility and local green space amenities.

  17. Housing land transaction data and structural econometric estimation of preference parameters for urban economic simulation models

    Science.gov (United States)

    Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles

    2015-01-01

    This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade off accessibility and local green space amenities. PMID:26958606

  18. Improving the representation of modal choice into bottom-up optimization energy system models - The MoCho-TIMES model

    DEFF Research Database (Denmark)

    Tattini, Jacopo; Ramea, Kalai; Gargiulo, Maurizio

    2018-01-01

    This study presents MoCho-TIMES, an original methodology for incorporating modal choice into energy-economy-environment-engineering (E4) system models. MoCho-TIMES addresses the scarce ability of E4 models to realistically depict behaviour in transport and allows for modal shift towards transit...... and mathematical expressions required to develop the approach. This study develops MoCho-TIMES in the standalone transportation sector of TIMES-DK, the integrated energy system model for Denmark. The model is tested for the Business as Usual scenario and for four alternative scenarios that imply diverse...

  19. HOM study and parameter calculation of the TESLA cavity model

    CERN Document Server

    Zeng, Ri-Hua; Gerigk Frank; Wang Guang-Wei; Wegner Rolf; Liu Rong; Schuh Marcel

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H- accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries to various cavity models. In the calculations, the HFSS scripting language was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts can also be applied to other cavities with different cell numbers and geometric structures). The automatically calculated results are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the basis for further simulation of the SPL cavities.

  20. The choice of boundary conditions and mesh for scaffolding FEM model on the basis of natural vibrations measurements

    Science.gov (United States)

    Cyniak, Patrycja; Błazik-Borowa, Ewa; Szer, Jacek; Lipecki, Tomasz; Szer, Iwona

    2018-01-01

    Scaffolding is a specific type of structure with high susceptibility to low-frequency vibrations. The numerical model of scaffolding presented in this paper contains real imperfections obtained from geodetic measurements of a real structure. Boundary conditions were verified on the basis of measured free vibrations. A simulation of a man walking on the penultimate working level, as a dynamic load varying in time, was made for the verified model. The paper presents a procedure for the choice of selected parameters of the scaffolding FEM model. The main aim of the analysis is the best possible representation of the real structure and correct modeling of a worker walking on the scaffolding. Different boundary conditions are considered because of their impact on the structure's vibrations. Natural vibrations obtained from FEM calculations are compared with free vibrations measured during in-situ tests. Structural accelerations caused by a walking human are then considered in this paper. The methodology of creating numerical models of scaffolds and the analysis of dynamic effects during human walking are starting points for further considerations about dynamic loads acting on such structures and the effects of these loads on the structure and on workers whose workplaces are situated on the scaffolding.

  1. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential) integrate-and-fire neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting

  2. Socio-demographic characteristics affecting sport tourism choices: A structural model

    Directory of Open Access Journals (Sweden)

    Nataša Slak Valek

    2014-03-01

    Full Text Available Background: Effective tourism management in the field of sports tourism requires an understanding of differences in socioeconomic characteristics both within and between different market segments. Objective: In the broad tourism market, demographic characteristics have been extensively analyzed for differences in destination choices; however, little is known about demographic factors affecting sport tourists' decisions. Methods: A sample of Slovenian sports tourists was analyzed using data from a comprehensive survey of local and outbound tourist activity conducted by the Statistical Office of the Republic of Slovenia in 2008. After data weighting, information for 353,783 sports-related trips was available for analysis. The research model adopted suggests that four socio-demographic characteristics (gender, age, level of education and income) significantly affect a tourist's choice of sports-related travel either locally within Slovenia or to a foreign country. Furthermore, the destination (local or foreign) has an influence on the choice of the type of accommodation selected and the tourist's total expenditure for the trip. For testing the first part of our model (the effects of the socio-demographic characteristics) a linear regression was used, and for the final part of the model (the selection of accommodation type and travel expenditure) t-tests were applied. Results: The results show that the standardized β regression coefficients are all statistically significant at the .001 level for the tested socio-demographic characteristics, and the overall regression model was also statistically significant at the .001 level. Conclusions: With these results the study confirmed that all the selected socio-demographic characteristics have a significant influence on the sport-active tourist when choosing between a domestic and a foreign tourism destination, which in turn affects the type of accommodation chosen and the level of expenditure while travelling.

  3. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
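    The framework itself is interactive and visual, but the underlying step of comparing topic-model configurations over a parameter space can be sketched offline. The snippet below is an assumption-laden stand-in using scikit-learn's LDA on a tiny invented corpus, contrasting two configurations by perplexity; it is not the authors' system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny invented corpus; a real corpus would contain thousands of documents.
docs = [
    "solar panels convert sunlight into electricity",
    "wind turbines generate power from moving air",
    "the striker scored a late goal in the final",
    "the goalkeeper saved a penalty in extra time",
    "batteries store electricity for the power grid",
    "the team won the championship after a penalty shootout",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)

# Compare two candidate configurations, loosely mirroring the framework's
# initial pair of models; lower perplexity indicates a better fit here.
for n_topics, prior in [(2, 0.5), (4, 0.05)]:
    lda = LatentDirichletAllocation(n_components=n_topics,
                                    doc_topic_prior=prior,
                                    random_state=0).fit(X)
    print(f"topics={n_topics} doc_topic_prior={prior}: "
          f"perplexity={lda.perplexity(X):.1f}")
```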

  4. The definition of input parameters for modelling of energetic subsystems

    Directory of Open Access Journals (Sweden)

    Ptacek M.

    2013-06-01

    Full Text Available This paper is a short review and basic description of mathematical models of renewable energy sources, which represent the individual subsystems investigated within a system created in Matlab/Simulink. It sets out the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. The paper therefore describes a new dynamic model of the low-temperature fuel cell subsystem, and its main input parameters are defined as well. Finally, the main graphic results evaluated and obtained for the suggested parameters and for all the individual subsystems mentioned above are shown.

  5. The definition of input parameters for modelling of energetic subsystems

    Science.gov (United States)

    Ptacek, M.

    2013-06-01

    This paper is a short review and basic description of mathematical models of renewable energy sources, which represent the individual subsystems investigated within a system created in Matlab/Simulink. It sets out the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. The paper therefore describes a new dynamic model of the low-temperature fuel cell subsystem, and its main input parameters are defined as well. Finally, the main graphic results evaluated and obtained for the suggested parameters and for all the individual subsystems mentioned above are shown.

  6. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  7. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  8. A joint model of mode and shipment size choice using the first generation of Commodity Flow Survey Public Use Microdata

    Directory of Open Access Journals (Sweden)

    Monique Stinson

    2017-12-01

    Full Text Available A behavior-based supply chain and freight transportation model was developed and implemented for the Maricopa Association of Governments (MAG) and Pima Association of Governments (PAG). This innovative, data-driven modeling system simulates commodity flows to, from, and within the Phoenix and Tucson Megaregion and is used for regional planning purposes. This paper details the logistics choice component of the system and describes the position and functioning of this component in the overall framework. The logistics choice model uses a nested logit formulation to evaluate mode choice and shipment size jointly. Modeling decisions related to integrating this component within the overall framework are discussed. This paper also describes practical insights gained from using the 2012 Commodity Flow Survey Public Use Microdata (released in 2015), which was the principal data source used to estimate the joint shipment size-mode choice nested logit model. Finally, the validation effort and related lessons learned are described.
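    As a rough illustration of the nested logit formulation mentioned above (not the estimated MAG/PAG model), the sketch below computes joint shipment-size/mode probabilities from made-up utilities and nest scale parameters.

```python
import math

# Nests group alternatives that share unobserved attributes; here, shipment-size
# classes nest the available modes.  V = systematic utility (all values invented).
nests = {
    "small_shipment": {"truck_ltl": -1.2, "parcel": -0.8},
    "large_shipment": {"truck_tl": -0.6, "rail": -1.5},
}
lambdas = {"small_shipment": 0.7, "large_shipment": 0.5}   # nest scale parameters in (0, 1]

# Inclusive value (logsum) of each nest, then probability of the nest and of
# each alternative conditional on its nest.
logsums = {m: lambdas[m] * math.log(sum(math.exp(v / lambdas[m])
                                        for v in alts.values()))
           for m, alts in nests.items()}
denom = sum(math.exp(iv) for iv in logsums.values())

for m, alts in nests.items():
    p_nest = math.exp(logsums[m]) / denom
    within = sum(math.exp(v / lambdas[m]) for v in alts.values())
    for a, v in alts.items():
        p = p_nest * math.exp(v / lambdas[m]) / within
        print(f"P({m}, {a}) = {p:.3f}")
```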

  9. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...... be disregarded without significant loss of accuracy. Finally, special attention is drawn to the influence of the skirt stiffness, i.e. whether the embedded part of the caisson is rigid or flexible....

  10. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily available water resources. Increased use and wastewater discharge cause drastic changes in surface water quality, and the importance of water quality in these vulnerable and vital supply resources is absolutely clear. Unfortunately, in recent years, because of urban population growth, economic development, and increased industrial production, the entry of pollutants into water bodies has increased. Water quality parameters express physical, chemical, and biological water features, so water quality monitoring is more important than ever. Each of the various uses of water, such as agriculture, drinking, industry, and aquaculture, requires water of a particular quality; on the other hand, exact estimation of the concentration of water quality parameters is significant. Materials and Methods: In this research, two input variable selection methods (namely, correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) data screening, (2) identification of the input data that affect the output data, and (3) selection of the training and testing data. A Genetic Algorithm-Least Square Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. The LSSVR method assumes that the relationship between input and output variables is nonlinear, but by using a nonlinear mapping relation one can create a so-called feature space in which the relationship between input and output variables is linear. The developed algorithm is able to maximize the accuracy of the LSSVR method through automatic selection of the LSSVR parameters. The genetic algorithm (GA) is an evolutionary algorithm which can automatically find the optimum coefficients of Least Square Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to
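    The GA-LSSVR idea of evolving kernel hyper-parameters against a cross-validated fitness can be sketched with generic tools. The toy below uses scikit-learn's SVR as a stand-in for LSSVR and a deliberately small genetic loop over (C, gamma) on synthetic data; the population size, mutation scale, and data are arbitrary assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 3))               # stand-in water-quality inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

def fitness(ind):                                    # ind = (log10 C, log10 gamma)
    model = SVR(C=10 ** ind[0], gamma=10 ** ind[1])
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

pop = rng.uniform([-2, -3], [3, 1], size=(20, 2))    # initial population
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the best half (elitism)
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.2, (10, 2))
    pop = np.vstack([parents, children])              # parents plus mutated offspring

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best (C, gamma):", 10 ** best[0], 10 ** best[1])
```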

  11. A procedure for determining parameters of a simplified ligament model.

    Science.gov (United States)

    Barrett, Jeff M; Callaghan, Jack P

    2018-01-03

    A previous mathematical model of ligament force-generation treated ligament behavior as a population of collagen fibres arranged in parallel. When damage was ignored in this model, an expression for ligament force in terms of the deflection, x, effective stiffness, k, mean collagen slack length, μ, and the standard deviation of slack lengths, σ, was obtained. We present a simple three-step method for determining the three model parameters (k, μ, and σ) from force-deflection data: (1) determine the equation of the line in the linear region of this curve; its slope is k and its x-intercept is -μ; (2) interpolate the force-deflection data at x = -μ to obtain F₀; (3) calculate σ with the equation σ = √(2π)·F₀/k. Results from this method were in good agreement with those obtained from a least-squares procedure on experimental data, all falling within 6%. Therefore, parameters obtained using the proposed method provide a systematic way of reporting ligament parameters, or an initial guess for nonlinear least-squares. Copyright © 2017 Elsevier Ltd. All rights reserved.
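    The three-step procedure reads directly into code. The sketch below applies it to synthetic force-deflection data generated from the same parallel-fibre assumption (normally distributed slack lengths); the parameter values and the choice of "linear region" cutoff are illustrative, and signs follow the abstract's convention that the x-intercept equals -μ.

```python
import numpy as np
from scipy.stats import norm

k_true, mu_true, sigma_true = 40.0, -3.0, 0.8       # mu negative so that -mu > 0
x = np.linspace(0.0, 8.0, 400)                       # deflection (mm)
z = (x + mu_true) / sigma_true                       # engaged-fibre argument
force = k_true * sigma_true * (norm.pdf(z) + z * norm.cdf(z))

# Step 1: fit the linear region (here, arbitrarily, the last quarter of the curve).
lin = x > 6.0
slope, intercept = np.polyfit(x[lin], force[lin], 1)
k_est = slope
mu_est = intercept / slope                           # x-intercept of the line is -mu
# Step 2: interpolate F0 at x = -mu_est.
F0 = np.interp(-mu_est, x, force)
# Step 3: sigma from the closed-form relation.
sigma_est = np.sqrt(2.0 * np.pi) * F0 / k_est
print(f"k = {k_est:.2f}  mu = {mu_est:.2f}  sigma = {sigma_est:.2f}")
```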

  12. Modelling spatial-temporal and coordinative parameters in swimming.

    Science.gov (United States)

    Seifert, L; Chollet, D

    2009-07-01

    This study modelled the changes in spatial-temporal and coordinative parameters through race paces in the four swimming strokes. The arm and leg phases in simultaneous strokes (butterfly and breaststroke) and the inter-arm phases in alternating strokes (crawl and backstroke) were identified by video analysis to calculate the time gaps between propulsive phases. The relationships among velocity, stroke rate, stroke length and coordination were modelled by polynomial regression. Twelve elite male swimmers swam at four race paces. Quadratic regression modelled the changes in spatial-temporal and coordinative parameters with velocity increases for all four strokes. First, the quadratic regression between coordination and velocity showed changes common to all four strokes. Notably, the time gaps between the key points defining the beginning and end of the stroke phases decreased with increases in velocity, which led to decreases in glide times and increases in the continuity between propulsive phases. Conjointly, the quadratic regression among stroke rate, stroke length and velocity was similar to the changes in coordination, suggesting that these parameters may influence coordination. The main practical application for coaches and scientists is that ineffective time gaps can be distinguished from those that simply reflect an individual swimmer's profile by monitoring the glide times within a stroke cycle. In the case of ineffective time gaps, targeted training could improve the swimmer's management of glide time.
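    A minimal numerical counterpart of the quadratic regressions used in the study is shown below; the velocity and time-gap values are fabricated solely to demonstrate fitting a second-order polynomial and reporting its fit.

```python
import numpy as np

velocity = np.array([1.35, 1.45, 1.55, 1.70, 1.80, 1.95])   # m/s, hypothetical race paces
time_gap = np.array([0.28, 0.24, 0.19, 0.12, 0.08, 0.03])   # s between propulsive phases

coeffs = np.polyfit(velocity, time_gap, deg=2)               # quadratic model a*v^2 + b*v + c
model = np.poly1d(coeffs)
# Coefficient of determination of the quadratic fit.
r2 = 1 - np.sum((time_gap - model(velocity)) ** 2) / np.sum((time_gap - time_gap.mean()) ** 2)
print("coefficients:", coeffs, " R^2 =", round(r2, 3))
```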

  13. The influence of phylodynamic model specifications on parameter estimates of the Zika virus epidemic.

    Science.gov (United States)

    Boskova, Veronika; Stadler, Tanja; Magnus, Carsten

    2018-01-01

    Each new virus introduced into the human population could potentially spread and cause a worldwide epidemic. Thus, early quantification of epidemic spread is crucial. Real-time sequencing followed by Bayesian phylodynamic analysis has proven to be extremely informative in this respect. Bayesian phylodynamic analyses require a model to be chosen and prior distributions on model parameters to be specified. We study here how choices regarding the tree prior influence quantification of epidemic spread in an emerging epidemic by focusing on estimates of the parameters clock rate, tree height, and reproductive number in the currently ongoing Zika virus epidemic in the Americas. While parameter estimates are quite robust to reasonable variations in the model settings when studying the complete data set, it is impossible to obtain unequivocal estimates when reducing the data to local Zika epidemics in Brazil and Florida, USA. Beyond the empirical insights, this study highlights the conceptual differences between the so-called birth-death and coalescent tree priors: while sequence sampling times alone can strongly inform the tree height and reproductive number under a birth-death model, the coalescent tree height prior is typically only slightly influenced by this information. Such conceptual differences, together with non-trivial interactions of different priors, complicate proper interpretation of empirical results. Overall, our findings indicate that phylodynamic analyses of early viral spread data must be carried out with care, as data sets may not necessarily be informative enough yet to provide estimates robust to prior settings. It is necessary to do a robustness check of these data sets by scanning several models and prior distributions. Only if the posterior distributions are robust to reasonable changes of the prior distribution can the parameter estimates be trusted. Such robustness tests will help make real-time phylodynamic analyses of spreading epidemics more

  14. Integrated Mode Choice, Small Aircraft Demand, and Airport Operations Model User's Guide

    Science.gov (United States)

    Yackovetsky, Robert E. (Technical Monitor); Dollyhigh, Samuel M.

    2004-01-01

    A mode choice model that generates on-demand air travel forecasts at a set of GA airports based on changes in economic characteristics, vehicle performance characteristics such as speed and cost, and demographic trends has been integrated with a model to generate itinerant aircraft operations by airplane category at a set of 3227 airports. Numerous intermediate outputs can be generated, such as the number of additional trips diverted from automobiles and scheduled air service by the improved performance and cost of on-demand air vehicles. The total number of transported passenger miles that are diverted is also available. From these results the number of new aircraft needed to service the increased demand can be calculated. Output from the models discussed is in the format needed to generate the origin and destination traffic flow between the 3227 airports based on solutions to a gravity model.
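    The gravity-model step mentioned at the end can be sketched with a small, invented origin-destination example (a singly constrained formulation with a power-law impedance; none of the numbers come from the report).

```python
import numpy as np

origins = np.array([120.0, 80.0, 60.0, 40.0])          # trips produced per airport
attract = np.array([1.0, 0.6, 0.8, 0.4])                # destination attractiveness A_j
dist = np.array([[  1, 300, 500, 800],
                 [300,   1, 250, 600],
                 [500, 250,   1, 400],
                 [800, 600, 400,   1]], dtype=float)     # nmi; diagonal is intrazonal

impedance = dist ** -1.5                                 # power-law decay f(c_ij)
weights = attract * impedance                            # A_j * f(c_ij)
np.fill_diagonal(weights, 0.0)                           # no self-trips
flows = origins[:, None] * weights / weights.sum(axis=1, keepdims=True)
print(np.round(flows, 1))                                # origin-destination trip table
```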

  15. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
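    For reference, the item response function whose parameters are being recovered is the standard 3PL curve; the short sketch below evaluates it for an example item (item parameter values chosen arbitrarily).

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3PL probability of a correct response: a = discrimination, b = difficulty, c = pseudo-guessing."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                      # examinee ability grid
print(np.round(p_correct(theta, a=1.2, b=0.5, c=0.2), 3))
```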

  16. Local sensitivity analysis of a distributed parameters water quality model

    International Nuclear Information System (INIS)

    Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.

    1997-01-01

    A local sensitivity analysis is presented of a 1D water-quality reaction-diffusion model. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, that is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorous (RP), phytoplankton, and zooplankton densities, Dissolved Oxygen (DO) and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and the ranking of the parameters related to primary production in different mixing conditions

  17. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
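    The surrogate-plus-Bayesian-update workflow can be caricatured in a few lines: build a cheap polynomial surrogate of an "expensive" model, then run a Metropolis sampler on the resulting posterior. Everything in the sketch (the stand-in model, the single drag-like parameter, the noise level) is an assumption for illustration, not the ocean-model setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Expensive" model evaluated at a few training points -> polynomial surrogate.
def expensive_model(c_drag):
    return 2.0 * c_drag + 0.5 * c_drag ** 2            # stand-in response

train_c = np.linspace(0.5, 2.5, 6)
surrogate = np.poly1d(np.polyfit(train_c, expensive_model(train_c), deg=2))

# Synthetic observation of the response, with noise.
obs, noise_sd = expensive_model(1.4) + 0.05, 0.1

def log_post(c):                                       # flat prior on [0.5, 2.5]
    if not 0.5 <= c <= 2.5:
        return -np.inf
    return -0.5 * ((obs - surrogate(c)) / noise_sd) ** 2

# Metropolis random walk on the surrogate posterior.
chain, c = [], 1.0
for _ in range(5000):
    prop = c + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(c):
        c = prop
    chain.append(c)
print("posterior mean of the drag-like parameter:", np.mean(chain[1000:]))
```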

  18. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  19. Investigation of design parameters and choice of substrate resistivity and crystal orientation for the CMS silicon microstrip detector

    CERN Document Server

    Braibant, S

    2000-01-01

    The electrical characteristics (interstrip and backplane capacitance, leakage current, depletion and breakdown voltage) of silicon microstrip detectors were measured for strip pitches between 60 μm and 240 μm and various strip implant and metal widths on multi-geometry devices. Both AC- and DC-coupled devices were investigated. Measurements on detectors were performed before and after irradiation with 24 GeV/c protons up to a fluence of 4.1×10¹⁴ cm⁻². We found that the total strip capacitance can be parametrized as a linear function of the ratio of the implant width over the read-out pitch only. We found a significant increase in the interstrip capacitance after irradiation on detectors with standard <111> crystal orientation but not on sensors with <100> crystal orientation. We analyzed the measured depletion voltages as a function of the detector geometrical parameters (read-out pitch, strip width and substrate thickness) found in the literature and we found a linear dependence in...

  20. Comparison of methods for optimal choice of the regularization parameter for linear electrical impedance tomography of brain function.

    Science.gov (United States)

    Abascal, Juan-Felipe P J; Arridge, Simon R; Bayford, Richard H; Holder, David S

    2008-11-01

    Electrical impedance tomography has the potential to provide a portable non-invasive method for imaging brain function. Clinical data collection has largely been undertaken with time-difference data and linear image reconstruction methods. The purpose of this work was to determine the best method for selecting the regularization parameter of the inverse procedure, using the specific application of evoked brain activity in neonatal babies as an exemplar. The solution error norm and image SNR for the L-curve (LC), discrepancy principle (DP), generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) selection methods were evaluated on simulated data using an anatomically accurate finite element model (FEM) of the neonatal head and impedance changes due to blood flow in the visual cortex recorded in vivo. For simulated data, LC, GCV and UPRE were equally best. In human data from four neonatal infants, no significant differences were found among selection methods. We recommend that GCV or LC be employed for reconstruction of human neonatal images, as UPRE requires an empirical estimate of the noise variance.
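    Among the selection rules compared, generalized cross-validation is easy to state compactly for a linear Tikhonov problem. The sketch below picks the regularization parameter by minimizing the GCV function for a random test matrix (a stand-in, not EIT data), using the SVD filter-factor form.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40))                        # linear forward operator (stand-in)
x_true = np.zeros(40); x_true[10:15] = 1.0
b = A @ x_true + rng.normal(0, 0.05, 80)             # noisy measurements

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b

def gcv(lam):
    filt = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
    resid = np.sum(((1 - filt) * beta) ** 2) + (b @ b - beta @ beta)
    return resid / (A.shape[0] - filt.sum()) ** 2    # ||A x - b||^2 / trace(I - A A_reg^+)^2

lams = np.logspace(-4, 1, 60)
lam_best = lams[np.argmin([gcv(l) for l in lams])]
x_reg = Vt.T @ (s / (s**2 + lam_best**2) * beta)     # regularized solution
print("GCV-selected lambda:", lam_best)
```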

  1. Does cost-effectiveness of influenza vaccine choice vary across the U.S.? An agent-based modeling study.

    Science.gov (United States)

    DePasse, Jay V; Nowalk, Mary Patricia; Smith, Kenneth J; Raviotta, Jonathan M; Shim, Eunha; Zimmerman, Richard K; Brown, Shawn T

    2017-07-13

    In a prior agent-based modeling study, offering a choice of influenza vaccine type was shown to be cost-effective when the simulated population represented the large Washington DC metropolitan area. This study calculated the public health impact and cost-effectiveness of the same four strategies: No Choice, Pediatric Choice, Adult Choice, or Choice for Both Age Groups in five United States (U.S.) counties selected to represent extremes in population age distribution. The choice offered was either inactivated influenza vaccine delivered intramuscularly with a needle (IIV-IM) or an age-appropriate needle-sparing vaccine, specifically the nasal spray (LAIV) or intradermal (IIV-ID) delivery system. Using agent-based modeling, individuals were simulated as they interacted with others, and influenza was tracked as it spread through each population. Influenza vaccination coverage, derived from Centers for Disease Control and Prevention (CDC) data, was increased by 6.5% (range 3.25%-11.25%) to reflect the effects of vaccine choice. Assuming moderate influenza infectivity, the number of averted cases was highest for Choice for Both Age Groups in all five counties despite differing demographic profiles. In a cost-effectiveness analysis, Choice for Both Age Groups was the dominant strategy. Sensitivity analyses varying influenza infectivity, costs, and degrees of vaccine coverage increase due to choice supported the base-case findings. Offering a choice to receive a needle-sparing influenza vaccine has the potential to significantly reduce influenza disease burden and to be cost saving. Consistent results across diverse populations confirmed these findings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Finding the effective parameter perturbations in atmospheric models: the LORENZ63 model as case study

    NARCIS (Netherlands)

    Moolenaar, H.E.; Selten, F.M.

    2004-01-01

    Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate
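    The case-study system is compact enough to reproduce directly; the sketch below integrates Lorenz-63 with the classical parameters and with one perturbed parameter, then compares a crude long-term ("climate") statistic. The perturbation size and the chosen statistic are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span, y0 = (0.0, 40.0), [1.0, 1.0, 1.0]
ref = solve_ivp(lorenz63, t_span, y0, args=(10.0, 28.0, 8.0 / 3.0),
                dense_output=True, max_step=0.01)
pert = solve_ivp(lorenz63, t_span, y0, args=(10.0, 28.5, 8.0 / 3.0),   # rho perturbed by 0.5
                 dense_output=True, max_step=0.01)

# A crude "climate" statistic: the long-term mean of z over the attractor.
t = np.linspace(5.0, 40.0, 2000)                      # skip spin-up
print("mean z (reference):", ref.sol(t)[2].mean())
print("mean z (perturbed):", pert.sol(t)[2].mean())
```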

  3. Flow-induced coalescence: arbitrarily mobile interface model and choice of its parameters

    Czech Academy of Sciences Publication Activity Database

    Fortelný, Ivan; Jůza, Josef

    2015-01-01

    Roč. 60, č. 10 (2015), s. 628-635 ISSN 0032-2725 R&D Projects: GA ČR GAP106/11/1069 Institutional support: RVO:61389013 Keywords : flow-induced coalescence * polymer blends * interface mobility Subject RIV: BK - Fluid Dynamics Impact factor: 0.718, year: 2015

  4. Development of the Model of Decision Support for Alternative Choice in the Transportation Transit System

    Directory of Open Access Journals (Sweden)

    Kabashkin Igor

    2015-02-01

    Full Text Available A decision support system is one of the instruments for choosing the most effective decision for the cargo owner in a constantly fluctuating business environment. The objective of this paper is to suggest a multiple-criteria approach for evaluating and choosing among alternatives for cargo transportation in a large-scale transportation transit system, for the decision makers - cargo owners. The large-scale transportation transit system is represented by a directed finite graph. Each of the 57 alternatives is represented by a set of key performance indicators Kv_i and a set of parameters Pa_j. A two-level hierarchy of criteria with ranked expert evaluations has been developed, based on the Analytic Hierarchy Process method. The best alternatives were suggested according to this method.
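    The Analytic Hierarchy Process step behind the criteria ranking can be sketched in a few lines: derive priority weights from a pairwise-comparison matrix via its principal eigenvector and check consistency. The 3x3 judgement matrix below is invented, not the paper's expert data.

```python
import numpy as np

# Saaty-scale pairwise comparisons among three example criteria (cost, time, reliability).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                              # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)                  # consistency index
cr = ci / 0.58                                        # Saaty random index for n = 3
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```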

  5. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well ... for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...

  6. Flare parameters inferred from a 3D loop model database

    Science.gov (United States)

    Cuambe, Valente A.; Costa, J. E. R.; Simões, P. J. A.

    2018-04-01

    We developed a database of pre-calculated flare images and spectra exploring a set of parameters which describe the physical characteristics of coronal loops and the accelerated electron distribution. Due to the large number of parameters involved in describing the geometry and the flaring atmosphere in the model used (Costa et al. 2013), we built a large database of models (~250,000) to facilitate flare analysis. The geometry and characteristics of non-thermal electrons are defined on a discrete grid with spatial resolution greater than 4 arcsec. The database was constructed based on general properties of known solar flares and convolved with instrumental resolution to replicate the observations from the Nobeyama radio polarimeter (NoRP) spectra and Nobeyama radioheliograph (NoRH) brightness maps. Observed spectra and brightness distribution maps are easily compared with the modelled spectra and images in the database, indicating a possible range of solutions. The parameter search efficiency in this finite database is discussed. Eight out of ten parameters analysed for one thousand simulated flare searches were recovered with a relative error of less than 20 per cent on average. In addition, from the analysis of the observed correlation between NoRH flare sizes and intensities at 17 GHz, some statistical properties were derived. From these statistics the energy spectral index was found to be δ ~ 3, with non-thermal electron densities showing a peak distribution ⪅ 10⁷ cm⁻³, and B_photosphere ⪆ 2000 G. Some bias towards larger loops with heights as great as ~2.6 × 10⁹ cm and towards looptop events was noted. An excellent match of the spectrum and the brightness distribution at 17 and 34 GHz of the 2002 May 31 flare is presented as well.

  7. Calibration of a joint time assignment and mode choice model system

    OpenAIRE

    Greeven, Paulina; Jara-Diaz, Sergio R.; Munizaga, Marcela A.; Axhausen, Kay W.

    2005-01-01

    In this paper we report the results of applying a new microeconomic framework to model time assignment to activities, goods consumption and mode choice jointly (Jara-Díaz and Guevara, 2003; Jara-Díaz and Guerra, 2003) that identifies the links between these decisions and permits the calculation of all the components of the subjective value of time defined in the literature: the value of time as a resource, value of assigning time to a specific activity and the value of saving time in a specif...

  8. Modeling bistable cell-fate choices in the Drosophila eye: qualitative and quantitative perspectives

    Science.gov (United States)

    Graham, Thomas G. W.; Tabei, S. M. Ali; Dinner, Aaron R.; Rebay, Ilaria

    2010-01-01

    A major goal of developmental biology is to understand the molecular mechanisms whereby genetic signaling networks establish and maintain distinct cell types within multicellular organisms. Here, we review cell-fate decisions in the developing eye of Drosophila melanogaster and the experimental results that have revealed the topology of the underlying signaling circuitries. We then propose that switch-like network motifs based on positive feedback play a central role in cell-fate choice, and discuss how mathematical modeling can be used to understand and predict the bistable or multistable behavior of such networks. PMID:20570936
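    The switch-like, positive-feedback motif highlighted in the review is commonly illustrated with a one-variable auto-activation ODE; the sketch below (a generic textbook form with arbitrary parameter values, not a model from the paper) shows how two initial conditions settle into distinct "off" and "on" states under identical parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def autoactivation(t, x, basal=0.1, vmax=2.0, K=1.0, n=4, decay=1.0):
    """dx/dt = basal + vmax * x^n / (K^n + x^n) - decay * x (cooperative self-activation)."""
    return basal + vmax * x**n / (K**n + x**n) - decay * x

for x0 in (0.2, 2.5):                                 # low vs high initial expression level
    sol = solve_ivp(autoactivation, (0, 50), [x0], max_step=0.1)
    print(f"start {x0} -> steady state ~ {sol.y[0, -1]:.2f}")
```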

  9. Models in cooperative game theory crisp, fuzzy, and multi-choice games

    CERN Document Server

    Branzei, Rodica; Tijs, Stef

    2005-01-01

    This book investigates models in cooperative game theory in which the players have the possibility to cooperate partially. In a crisp game the agents are either fully involved or not involved at all in cooperation with some other agents, while in a fuzzy game players are allowed to cooperate with infinitely many different participation levels, varying from non-cooperation to full cooperation. A multi-choice game describes the intermediate case in which each player may have a fixed number of activity levels. Different set and one-point solution concepts for these games are presented. The properties

  10. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  11. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    Science.gov (United States)

    Wildhaber, Mark L.; Lamberson, Peter J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.

  12. Monte-Carlo modelling to determine optimum filter choices for sub-microsecond optical pyrometry

    Science.gov (United States)

    Ota, Thomas A.; Chapman, David J.; Eakins, Daniel E.

    2017-04-01

    When designing a spectral-band pyrometer for use at high time resolutions (sub-μs), there is ambiguity regarding the optimum characteristics for a spectral filter(s). In particular, while prior work has discussed uncertainties in spectral-band pyrometry, there has been little discussion of the effects of noise which is an important consideration in time-resolved, high speed experiments. Using a Monte-Carlo process to simulate the effects of noise, a model of collection from a black body has been developed to give insights into the optimum choices for centre wavelength and passband width. The model was validated and then used to explore the effects of centre wavelength and passband width on measurement uncertainty. This reveals a transition centre wavelength below which uncertainties in calculated temperature are high. To further investigate system performance, simultaneous variation of the centre wavelength and bandpass width of a filter is investigated. Using data reduction, the effects of temperature and noise levels are illustrated and an empirical approximation is determined. The results presented show that filter choice can significantly affect instrument performance and, while best practice requires detailed modelling to achieve optimal performance, the expression presented can be used to aid filter selection.

  13. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall in two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  14. Some notes on unobserved parameters (frailties) in reliability modeling

    International Nuclear Information System (INIS)

    Cha, Ji Hwan; Finkelstein, Maxim

    2014-01-01

    Unobserved random quantities (frailties) often appear in various reliability problems, especially when dealing with the failure rates of items from heterogeneous populations. As the failure rate is a conditional characteristic, the distributions of these random quantities, similarly to Bayesian approaches, are updated in accordance with the corresponding survival information. In some instances, apart from a statistical meaning, frailties can also have useful interpretations describing the underlying lifetime model. We discuss and clarify these issues in a reliability context and present and analyze several meaningful examples. We consider the proportional hazards model with a random factor; the stress–strength model, where the unobserved strength of a system can be viewed as frailty; a parallel system with a random number of components; and, finally, the first passage time problem for the Wiener process with random parameters. - Highlights: • We discuss and clarify the notion of frailty in a reliability context and present and analyze several meaningful examples. • The paper provides new insight and a general perspective on reliability models with unobserved parameters. • The main message of the paper is well illustrated by several meaningful examples and emphasized by detailed discussion

  15. Hydrological Modelling and Parameter Identification for Green Roof

    Science.gov (United States)

    Lo, W.; Tung, C.

    2012-12-01

    Green roofs, multilayered systems covered by plants, can be used to replace traditional concrete roofs as one of various measures to mitigate the increasing stormwater runoff in the urban environment. Moreover, facing the high uncertainty of climate change, conventional engineering measures for adaptation may prove inappropriate; by contrast, green roofs are low-regret and flexible, and thus particularly important and suitable. The related technology has been developed for several years, and research evaluating the stormwater-reduction performance of green roofs is flourishing. Many European countries, cities in the U.S., and other local governments incorporate green roofs into stormwater control policy. Therefore, in terms of stormwater management, it is necessary to develop a robust hydrologic model to quantify the efficacy of green roofs over different types of designs and environmental conditions. In this research, a physically based hydrologic model is proposed to simulate the water flow process in the green roof system. In particular, the model adopts the concept of water balance, providing a relatively simple and intuitive approach. The research also compares two methods for the surface water balance calculation: one is based on the Green-Ampt equation, and the other on the SCS curve number method. A green roof experiment is designed to collect weather data and water discharge. The proposed model is then verified with these observed data; furthermore, the parameters used in the model are calibrated to find appropriate values for green roof hydrologic simulation. This research proposes a simple physically based hydrologic model and a procedure to determine its parameters.
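    Of the two surface options compared, the SCS curve number method has a particularly compact form; the sketch below implements the textbook relation in millimetres, with an example curve number that is only a placeholder, not a calibrated green-roof value.

```python
def scs_runoff_mm(rain_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q from event rainfall P using the SCS curve number relation."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

for p in (10.0, 25.0, 50.0):
    print(f"P = {p:5.1f} mm -> Q = {scs_runoff_mm(p, cn=86):5.1f} mm")
```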

  16. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of every production system. In the context of intelligent industry, the mechanical nature of the means of production is complemented by control and electronic devices. In practice, the selection of production machines for a technological process or project has so far often been resolved only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device also increases. It is therefore necessary to use computational techniques and decision-making methods, ranging from heuristics to more precise methodological procedures, during the selection. The authors present an innovative model for optimizing technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  17. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Directory of Open Access Journals (Sweden)

    Rahmatollah Beheshti

    Full Text Available Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  18. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  19. Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate

    Energy Technology Data Exchange (ETDEWEB)

    Celia, Michael A. [Princeton Univ., NJ (United States)

    2016-12-30

    This report documents the accomplishments achieved during the project titled “Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate” funded by the US Department of Energy, Office of Fossil Energy. The objective of the project was to investigate modeling approaches of various levels of complexity relevant to geologic carbon storage (GCS) modeling, with the goal of establishing guidelines on the choice of modeling approach.

  20. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  1. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates the parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM gives a better fit to the data, more reasonable parameter estimates, and smaller prediction error than TSM.
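    A hedged sketch of the direct (Marquardt-style) fit is shown below using scipy's curve_fit, which belongs to the Levenberg-Marquardt family of nonlinear least-squares methods. The Arrhenius-type aging form, the reference temperature, and the synthetic data are assumptions for illustration; the paper's actual model may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def aging_model(tT, k_ref, Ea):
    """Retained property (normalized to 1 at t = 0) after t hours at T kelvin."""
    t, T = tT
    # Arrhenius-type rate referenced to 343 K (an assumed parametrization).
    k = k_ref * np.exp(-(Ea / 8.314) * (1.0 / T - 1.0 / 343.0))
    return np.exp(-k * t)

rng = np.random.default_rng(7)
t = np.tile(np.linspace(0.0, 2000.0, 25), 3)
T = np.repeat([323.0, 343.0, 363.0], 25)
y = aging_model((t, T), 3.6e-5, 6.0e4) + rng.normal(0.0, 0.01, t.size)

# Direct nonlinear fit of both parameters at once (no model splitting).
popt, pcov = curve_fit(aging_model, (t, T), y, p0=[1.0e-4, 4.0e4])
print("fitted k_ref, Ea:", popt)
```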

  2. The parameter space of Cubic Galileon models for cosmic acceleration

    CERN Document Server

    Bellini, Emilio

    2013-01-01

    We use recent measurements of the expansion history of the universe to place constraints on the parameter space of cubic Galileon models. This gives strong constraints on the Lagrangian of these models. Most dynamical terms in the Galileon Lagrangian are constrained to be small, and the acceleration is effectively provided by a constant term in the scalar potential, thus reducing, effectively, to a LCDM model for the current acceleration. The effective equation of state is indistinguishable from that of a cosmological constant, w = -1, and the data constrain it to have temporal variations of no more than the few per cent level. The energy density of the Galileon can contribute only about 10% of the acceleration energy density, with the other 90% being a cosmological constant term. This demonstrates how useful direct measurements of the expansion history of the universe are at constraining the dynamical nature of dark energy.

  3. Westinghouse-GOTHIC distributed parameter modelling for HDR test E11.2

    International Nuclear Information System (INIS)

    Narula, J.S.; Woodcock, J.

    1994-01-01

    The Westinghouse-GOTHIC (WGOTHIC) code is a sophisticated mathematical computer code designed specifically for the thermal hydraulic analysis of nuclear power plant containment and auxiliary buildings. The code is capable of sophisticated flow analysis via the solution of mass, momentum, and energy conservation equations. Westinghouse has investigated the use of subdivided noding to model the flow patterns of hydrogen following its release into a containment atmosphere. For the investigation, several simple models were constructed to represent a scale similar to the German HDR containment. The calculational models were simplified to test the basic capability of the plume modeling methods to predict stratification while minimizing the number of parameters. A large empty volume was modeled, with the same volume and height as HDR. A scenario was selected that would be expected to stably stratify, and the effects of noding on the prediction of stratification were studied. A single-phase hot gas was injected into the volume at a height similar to that of HDR test E11.2, and there were no heat sinks modeled. Helium was released into the calculational models, and the resulting flow patterns were judged relative to the expected results. For each model, only the number of subdivisions within the containment volume was varied. The results of the investigation of noding schemes have provided evidence of the capability of subdivided (distributed parameter) noding. The results also showed that highly inaccurate flow patterns could be obtained by using an insufficient number of subdivided nodes. This presents a significant challenge to the containment analyst, who must weigh the benefits of increased noding against the penalties the noding may incur on computational efficiency. Clearly, however, an incorrect noding choice may yield erroneous results even if great care has been taken in modeling accurately all other characteristics of containments. (author). 9 refs., 9 figs

  4. Variation in estimated ozone-related health impacts of climate change due to modeling choices and assumptions.

    Science.gov (United States)

    Post, Ellen S; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Huang, Jin; Leung, Lai-Yung; Nolte, Christopher G; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee

    2012-11-01

    Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration-response functions. Using the U.S. Environmental Protection Agency's (EPA's) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O(3))-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration-response functions were chosen to match those used in the U.S. EPA's 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O(3). Different combinations of methodological choices produced a range of estimates of national O(3)-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of the climate change and the air quality model reflected the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O(3)-related human health effects resulting from climate change.

  5. Flying personal planes: modeling the airport choices of general aviation pilots using stated preference methodology.

    Science.gov (United States)

    Camasso, M J; Jagannathan, R

    2001-01-01

    This study employed stated preference (SP) models to determine why general aviation pilots choose to base and operate their aircraft at some airports and not others. Thirteen decision variables identified in pilot focus groups and in the general aviation literature were incorporated into a series of hypothetical choice tasks or scenarios. The scenarios were offered within a fractional factorial design to establish orthogonality and to preclude dominance in any combination of variables. Data from 113 pilots were analyzed for individual differences across pilots using conditional logit regression with and without controls. The results demonstrate that some airport attributes (e.g., full-range hospitality services, paved parallel taxiway, and specific types of runway lighting and landing aids) increase pilot utility. Heavy airport congestion and airport landing fees, on the other hand, decrease pilot utility. The importance of SP methodology as a vehicle for modeling choice behavior and as an input into the planning and prioritization process is discussed. Actual or potential applications include the development of structured decision-making instruments in the behavioral sciences and in human service programs.

  6. How robotics programs influence young women's career choices : a grounded theory model

    Science.gov (United States)

    Craig, Cecilia Dosh-Bluhm

    The fields of engineering, computer science, and physics have a paucity of women despite decades of intervention by universities and organizations. Women's graduation rates in these fields continue to stagnate, posing a critical problem for society. This qualitative grounded theory (GT) study sought to understand how robotics programs influenced young women's career decisions and the programs' effect on engineering, physics, and computer science career interests. To test this, a study was mounted to explore how the FIRST (For Inspiration and Recognition of Science and Technology) Robotics Competition (FRC) program influenced young women's college major and career choices. Career theories suggest that experiential programs coupled with supportive relationships strongly influence career decisions, especially for science, technology, engineering, and mathematics careers. The study explored how and when young women made career decisions and how the experiential program and its mentors and role models influenced career choice. Online focus groups and interviews (online and face-to-face) with 10 female FRC alumnae and GT processes (inductive analysis, open coding, categorizations using mind maps and content clouds) were used to generate a general systems theory style model of the career decision process for these young women. The study identified gender stereotypes and other career obstacles for women. The study's conclusions include recommendations to foster connections to real-world challenges, to develop training programs for mentors, and to nurture social cohesion, a mostly untapped area. Implementing these recommendations could help grow a critical mass of women in engineering, physics, and computer science careers, a social change worth pursuing.

  7. The unified model of vegetarian identity: A conceptual framework for understanding plant-based food choices.

    Science.gov (United States)

    Rosenfeld, Daniel L; Burrow, Anthony L

    2017-05-01

    By departing from social norms regarding food behaviors, vegetarians acquire membership in a distinct social group and can develop a salient vegetarian identity. However, vegetarian identities are diverse, multidimensional, and unique to each individual. Much research has identified fundamental psychological aspects of vegetarianism, and an identity framework that unifies these findings into common constructs and conceptually defines variables is needed. Integrating psychological theories of identity with research on food choices and vegetarianism, this paper proposes a conceptual model for studying vegetarianism: The Unified Model of Vegetarian Identity (UMVI). The UMVI encompasses ten dimensions-organized into three levels (contextual, internalized, and externalized)-that capture the role of vegetarianism in an individual's self-concept. Contextual dimensions situate vegetarianism within contexts; internalized dimensions outline self-evaluations; and externalized dimensions describe enactments of identity through behavior. Together, these dimensions form a coherent vegetarian identity, characterizing one's thoughts, feelings, and behaviors regarding being vegetarian. By unifying dimensions that capture psychological constructs universally, the UMVI can prevent discrepancies in operationalization, capture the inherent diversity of vegetarian identities, and enable future research to generate greater insight into how people understand themselves and their food choices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Analysis of Model Parameters for a Polymer Filtration Simulator

    Directory of Open Access Journals (Sweden)

    N. Brackett-Rozinsky

    2011-01-01

    Full Text Available We examine a simulation model for polymer extrusion filters and determine its sensitivity to filter parameters. The simulator is a three-dimensional, time-dependent discretization of a coupled system of nonlinear partial differential equations used to model fluid flow and debris transport, along with statistical relationships that define debris distributions and retention probabilities. The flow of polymer fluid, and suspended debris particles, is tracked to determine how well a filter performs and how long it operates before clogging. A filter may have multiple layers, characterized by thickness, porosity, and average pore diameter. In this work, the thickness of each layer is fixed, while the porosities and pore diameters vary for a two-layer and three-layer study. The effects of porosity and average pore diameter on the measures of filter quality are calculated. For the three-layer model, these effects are tested for statistical significance using analysis of variance. Furthermore, the effects of each pair of interacting parameters are considered. This allows the detection of complexity, wherein changing two aspects of a filter together may generate results substantially different from what occurs when those same aspects change separately. The principal findings indicate that the first layer of a filter is the most important.

  9. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity, in order to gain robust real-time control on the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and integrate it, along with models of energy sources and electrical loads, in a complete framework which represents a real time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
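
    As an illustration of the hybrid identification strategy described above (a stochastic global search refined by a deterministic local fit), the following Python sketch fits a hypothetical first-order RC battery voltage model to synthetic data using SciPy's differential_evolution followed by least_squares; the model form, parameter bounds and all numerical values are assumptions for illustration only, not the authors' procedure:

        import numpy as np
        from scipy.optimize import differential_evolution, least_squares

        # Hypothetical first-order RC battery model:
        # V(t) = ocv - i*r0 - i*r1*(1 - exp(-t/tau))
        def battery_voltage(params, t, current):
            ocv, r0, r1, tau = params
            return ocv - current * r0 - current * r1 * (1.0 - np.exp(-t / tau))

        # Synthetic "experimental" discharge data (stand-in for measured values)
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 600.0, 200)           # seconds
        current = 2.0                              # amperes, constant discharge
        true = (3.7, 0.05, 0.03, 120.0)
        v_meas = battery_voltage(true, t, current) + rng.normal(0.0, 0.002, t.size)

        def residuals(params):
            return battery_voltage(params, t, current) - v_meas

        # Stage 1: stochastic global search over assumed bounds
        bounds = [(3.0, 4.2), (0.001, 0.2), (0.001, 0.2), (10.0, 600.0)]
        coarse = differential_evolution(lambda p: float(np.sum(residuals(p) ** 2)),
                                        bounds, seed=1, maxiter=50, tol=1e-6)

        # Stage 2: deterministic local refinement starting from the stochastic result
        lb = [b[0] for b in bounds]
        ub = [b[1] for b in bounds]
        refined = least_squares(residuals, coarse.x, bounds=(lb, ub))
        print("coarse :", np.round(coarse.x, 4))
        print("refined:", np.round(refined.x, 4))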

  10. Applying Atmospheric Measurements to Constrain Parameters of Terrestrial Source Models

    Science.gov (United States)

    Hyer, E. J.; Kasischke, E. S.; Allen, D. J.

    2004-12-01

    Quantitative inversions of atmospheric measurements have been widely applied to constrain atmospheric budgets of a range of trace gases. Experiments of this type have revealed persistent discrepancies between 'bottom-up' and 'top-down' estimates of source magnitudes. The most common atmospheric inversion uses the absolute magnitude as the sole parameter for each source, and returns the optimal value of that parameter. In order for atmospheric measurements to be useful for improving 'bottom-up' models of terrestrial sources, information about other properties of the sources must be extracted. As the density and quality of atmospheric trace gas measurements improve, examination of higher-order properties of trace gas sources should become possible. Our model of boreal forest fire emissions is parameterized to permit flexible examination of the key uncertainties in this source. Using output from this model together with the UM CTM, we examined the sensitivity of CO concentration measurements made by the MOPITT instrument to various uncertainties in the boreal source: geographic distribution of burned area, fire type (crown fires vs. surface fires), and fuel consumption in above-ground and ground-layer fuels. Our results indicate that carefully designed inversion experiments have the potential to help constrain not only the absolute magnitudes of terrestrial sources, but also the key uncertainties associated with 'bottom-up' estimates of those sources.

  11. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
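
    A minimal illustration of estimating migration parameters directly from the positions of a trajectory rather than from the mean square displacement is sketched below for Brownian motion with drift; it is a simplified frequentist stand-in for the Bayesian procedure described in the record, and all values are invented:

        import numpy as np

        # Simulate a single 2-D cell path: Brownian motion with drift (illustrative values)
        rng = np.random.default_rng(2)
        dt, n_steps = 1.0, 500                     # minutes, number of observations
        drift_true = np.array([0.3, 0.1])          # um/min
        D_true = 0.5                               # um^2/min
        steps = drift_true * dt + np.sqrt(2 * D_true * dt) * rng.standard_normal((n_steps, 2))
        path = np.cumsum(steps, axis=0)

        # Estimate parameters from the increments of the positions themselves
        increments = np.diff(path, axis=0, prepend=np.zeros((1, 2)))
        drift_hat = increments.mean(axis=0) / dt
        # The covariance of the de-drifted increments has 2*D*dt on the diagonal
        cov = np.cov((increments - drift_hat * dt).T)
        D_hat = np.trace(cov) / (2 * 2 * dt)       # average over the two spatial dimensions

        print("drift estimate:", np.round(drift_hat, 3))
        print("D estimate    :", round(D_hat, 3))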

  12. Mode choice models' ability to express intention to change travel behaviour considering non-compensatory rules and latent variables

    OpenAIRE

    Sanko, Nobuhiro; Morikawa, Takayuki; Kurauchi, Shinya

    2013-01-01

    Disaggregate behaviour choice models have been improved in many aspects, but they are rarely evaluated from the viewpoint of their ability to express intention to change travel behaviour. This study compared various models, including objective and latent models and compensatory and non-compensatory decision-making models. Latent models contain latent factors calculated using the LISREL (linear structural relations) model. Non-compensatory models are based on a lexicographic-semiorder heuristi...

  13. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Contents: Introduction: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; Approximation. A Concept of Approximation: Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness. Discrete Models: The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data. Outliers: Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data. The Location...

  14. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.

  15. Microbial Communities Model Parameter Calculation for TSPA/SR

    Energy Technology Data Exchange (ETDEWEB)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  16. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M and O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  17. Modelled basic parameters for semi-industrial irradiation plant design

    International Nuclear Information System (INIS)

    Mangussi, J.

    2009-01-01

    The basic parameters of an irradiation plant design are the total activity, the product uniformity ratio and the process efficiency. The target density, the minimum dose required and the throughput depend on the use to which the irradiator will be put. In this work, a model for calculating the specific dose rate at several depths in an infinite homogeneous medium produced by a slab source irradiator is presented. The product minimum dose rate for a set of target thicknesses is obtained. The design method steps are detailed and an illustrative example is presented. (author)

  18. Lumped-parameter fuel rod model for rapid thermal transients

    International Nuclear Information System (INIS)

    Perkins, K.R.; Ramshaw, J.D.

    1975-07-01

    The thermal behavior of fuel rods during simulated accident conditions is extremely sensitive to the heat transfer coefficient which is, in turn, very sensitive to the cladding surface temperature and the fluid conditions. The development of a semianalytical, lumped-parameter fuel rod model which is intended to provide accurate calculations, in a minimum amount of computer time, of the thermal response of fuel rods during a simulated loss-of-coolant accident is described. The results show good agreement with calculations from a comprehensive fuel-rod code (FRAP-T) currently in use at Aerojet Nuclear Company
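
    To make the lumped-parameter idea concrete, the sketch below integrates a hypothetical two-node (fuel and cladding) thermal model with SciPy; the heat capacities, conductances and heat rate are invented placeholder values and the model is not the report's FRAP-T comparison:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two-node lumped-parameter rod: fuel node and cladding node (illustrative values)
        C_f, C_c = 300.0, 50.0        # heat capacities, J/K per unit length
        h_fc, h_cw = 50.0, 100.0      # fuel-clad and clad-coolant conductances, W/K
        T_cool = 560.0                # coolant temperature, K
        q_fuel = 2.0e4                # lumped heat generation rate per unit length, W

        def rhs(t, T):
            T_f, T_c = T
            dT_f = (q_fuel - h_fc * (T_f - T_c)) / C_f
            dT_c = (h_fc * (T_f - T_c) - h_cw * (T_c - T_cool)) / C_c
            return [dT_f, dT_c]

        sol = solve_ivp(rhs, (0.0, 200.0), [1200.0, 600.0], max_step=0.5)
        print("final fuel/clad temperatures (K):", np.round(sol.y[:, -1], 1))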

  19. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    Science.gov (United States)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  20. Bayesian analysis of inflation: Parameter estimation for single field models

    International Nuclear Information System (INIS)

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-01-01

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ^n with n=2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the "primordial dark ages" between TeV and grand unified theory scale energies.

  1. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW/interplanetary magnetic field (IMF conditions (e.g. high SW speed, low cone angle the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock. Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through
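
    The multiple-linear-regression part of such modelling can be sketched in a few lines; the code below regresses a synthetic Pc3 activity index on solar wind speed, cone angle and (log) density and compares the fit with and without the density term. All data and coefficients are invented for illustration and are not the OMNI/MM100 data used in the study:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 2000
        speed = rng.normal(450.0, 100.0, n)             # km/s
        cone_angle = rng.uniform(0.0, 90.0, n)          # degrees
        density = rng.lognormal(1.5, 0.6, n)            # cm^-3

        # Synthetic Pc3 activity index: favoured by high speed and low cone angle,
        # suppressed when density is very low (illustrative construction only)
        pc3 = (0.01 * speed - 0.05 * cone_angle
               + 2.0 * np.log(density) + rng.normal(0.0, 1.0, n))

        def r_squared(X, y):
            X1 = np.column_stack([np.ones(len(y)), X])
            coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ coef
            return 1.0 - resid.var() / y.var()

        print("R^2 without density:", round(r_squared(np.column_stack([speed, cone_angle]), pc3), 3))
        print("R^2 with density   :", round(r_squared(np.column_stack([speed, cone_angle, np.log(density)]), pc3), 3))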

  2. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    Full Text Available An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance and absorption of suspended matter without pigments and yellow substance in detritus using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data has been applied to sea waters of different types in the open ocean (case 1. Using the effective numerical single parameter classification with the water type optical index m as a parameter over the whole range of the open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  3. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on stream water quality. Five sites were selected along the stream: two prior to the mixing of leachate with the surface water, one of leachate itself, and two affected by leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). The correlation coefficient r among the water quality parameters at the various sites was calculated using the Pearson model, and the average of each pairwise correlation was also computed; TDS-EC and pH-BOD showed significantly positive r values, while temperature-TDS, temperature-EC, DO-TBL and DO-COD showed negative r values. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differed significantly among the sites. Both statistical approaches indicate that TDS and EC are strongly positively correlated, because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which was further confirmed using linear regression. (author)
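
    The two statistical steps described (pairwise Pearson correlation and single-factor ANOVA across sites) can be reproduced with SciPy as in the sketch below; the measurements are synthetic placeholders, not the Salhad data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        # Placeholder measurements: 5 sites x 6 samples of TDS (mg/L) and EC (uS/cm)
        tds = rng.normal(loc=[[300], [320], [900], [850], [800]], scale=30, size=(5, 6))
        ec = 1.6 * tds + rng.normal(0.0, 40.0, size=(5, 6))   # EC tracks dissolved ions

        # Pearson correlation between two parameters across all samples
        r, p = stats.pearsonr(tds.ravel(), ec.ravel())
        print(f"TDS-EC Pearson r = {r:.2f} (p = {p:.3g})")

        # Single-factor ANOVA: does TDS differ among the five sites?
        f, p_anova = stats.f_oneway(*tds)        # each row is one site's samples
        print(f"ANOVA for TDS across sites: F = {f:.1f}, p = {p_anova:.3g}")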

  4. Does a peer model's task proficiency influence children's solution choice and innovation?

    Science.gov (United States)

    Wood, Lara A; Kendal, Rachel L; Flynn, Emma G

    2015-11-01

    The current study investigated whether 4- to 6-year-old children's task solution choice was influenced by the past proficiency of familiar peer models and the children's personal prior task experience. Peer past proficiency was established through behavioral assessments of interactions with novel tasks alongside peer and teacher predictions of each child's proficiency. Based on these assessments, one peer model with high past proficiency and one age-, sex-, dominance-, and popularity-matched peer model with lower past proficiency were trained to remove a capsule using alternative solutions from a three-solution artificial fruit task. Video demonstrations of the models were shown to children after they had either a personal successful interaction or no interaction with the task. In general, there was not a strong bias toward the high past-proficiency model, perhaps due to a motivation to acquire multiple methods and the salience of other transmission biases. However, there was some evidence of a model-based past-proficiency bias; when the high past-proficiency peer matched the participants' original solution, there was increased use of that solution, whereas if the high past-proficiency peer demonstrated an alternative solution, there was increased use of the alternative social solution and novel solutions. Thus, model proficiency influenced innovation. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    Science.gov (United States)

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. ?? 2007 by the Ecological Society of America.

  6. Making Energy-Efficiency and Productivity Investments in Commercial Buildings: Choice of Investment Models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.W.

    2002-05-16

    This study examines the decision to invest in buildings and the types of investment decision rules that may be employed to inform the "go/no-go" decision. There is a range of decision-making tools available to help in investment choices, from simple rules of thumb such as payback periods, to life-cycle analysis, to decision-theoretic approaches. Payback period analysis tends to point toward lower first costs, whereas life-cycle analysis tends to minimize uncertainties over future events that can affect profitability. We conclude that investment models that integrate uncertainty offer better explanations for the behavior that is observed, i.e., people tend to delay investments in technologies that life-cycle analysis finds cost-effective, and these models also lead to an alternative set of policies targeted at reducing or managing uncertainty.
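
    A small numeric contrast between the simple payback rule and a life-cycle (net present value) criterion is sketched below for a hypothetical efficiency retrofit; the cost, saving, lifetime and discount rate are assumptions, not figures from the study:

        # Hypothetical retrofit: higher first cost, annual energy savings over a 15-year life
        first_cost = 50_000.0          # dollars
        annual_saving = 7_000.0        # dollars per year
        life_years = 15
        discount_rate = 0.08

        payback_years = first_cost / annual_saving
        npv = -first_cost + sum(annual_saving / (1 + discount_rate) ** t
                                for t in range(1, life_years + 1))

        print(f"simple payback : {payback_years:.1f} years")
        print(f"life-cycle NPV : ${npv:,.0f}")
        # A 7-year payback may fail a '3-year payback' rule of thumb even though the
        # discounted life-cycle value of the investment is positive.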

  7. Estimating health state utility values from discrete choice experiments--a QALY space model approach.

    Science.gov (United States)

    Gu, Yuanyuan; Norman, Richard; Viney, Rosalie

    2014-09-01

    Using discrete choice experiments (DCEs) to estimate health state utility values has become an important alternative to the conventional methods of Time Trade-Off and Standard Gamble. Studies using DCEs have typically used the conditional logit to estimate the underlying utility function. The conditional logit is known for several limitations. In this paper, we propose two types of models based on the mixed logit: one using preference space and the other using quality-adjusted life year (QALY) space, a concept adapted from the willingness-to-pay literature. These methods are applied to a dataset collected using the EQ-5D. The results showcase the advantages of using QALY space and demonstrate that the preferred QALY space model provides lower estimates of the utility values than the conditional logit, with the divergence increasing with worsening health states. Copyright © 2014 John Wiley & Sons, Ltd.
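
    For reference, a minimal conditional logit in preference space, estimated by maximum likelihood on synthetic choice sets, is sketched below; the QALY-space mixed logit proposed in the paper goes beyond this, and the data and coefficients here are invented rather than the EQ-5D dataset:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        n_obs, n_alt, n_attr = 500, 3, 4
        X = rng.normal(size=(n_obs, n_alt, n_attr))      # attribute levels of each alternative
        beta_true = np.array([1.0, -0.5, 0.8, -1.2])

        # Simulate choices from a conditional logit data-generating process
        util = X @ beta_true + rng.gumbel(size=(n_obs, n_alt))
        choice = util.argmax(axis=1)

        def neg_loglik(beta):
            v = X @ beta                                  # systematic utilities
            v -= v.max(axis=1, keepdims=True)             # numerical stability
            logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
            return -logp[np.arange(n_obs), choice].sum()

        fit = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
        print("estimated taste weights:", np.round(fit.x, 2))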

  8. Convergence of surface diffusion parameters with model crystal size

    Science.gov (United States)

    Cohen, Jennifer M.; Voter, Arthur F.

    1994-07-01

    A study of the variation in the calculated quantities for adatom diffusion with respect to the size of the model crystal is presented. The reported quantities include surface diffusion barrier heights, pre-exponential factors, and dynamical correction factors. Embedded atom method (EAM) potentials were used throughout this effort. Both the layer size and the depth of the crystal were found to influence the values of the Arrhenius factors significantly. In particular, exchange type mechanisms required a significantly larger model than standard hopping mechanisms to determine adatom diffusion barriers of equivalent accuracy. The dynamical events that govern the corrections to transition state theory (TST) did not appear to be as sensitive to crystal depth. Suitable criteria for the convergence of the diffusion parameters with regard to the rate properties are illustrated.
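
    Extracting an Arrhenius barrier height and pre-exponential factor from hop rates at several temperatures reduces to a linear fit of ln(k) against 1/T, as in the sketch below; the rates and values are synthetic illustrations, not EAM results from the study:

        import numpy as np

        k_B = 8.617e-5                      # eV/K
        # Synthetic hop rates k(T) = nu0 * exp(-Ea / (k_B T)) with illustrative values
        nu0_true, Ea_true = 1.0e12, 0.45    # s^-1, eV
        T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
        rng = np.random.default_rng(6)
        rates = nu0_true * np.exp(-Ea_true / (k_B * T)) * rng.lognormal(0.0, 0.05, T.size)

        # Linear fit of ln(k) against 1/T gives -Ea/k_B as slope and ln(nu0) as intercept
        slope, intercept = np.polyfit(1.0 / T, np.log(rates), 1)
        print(f"barrier   Ea  ~ {-slope * k_B:.3f} eV")
        print(f"prefactor nu0 ~ {np.exp(intercept):.2e} s^-1")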

  9. Empirical study of travel mode forecasting improvement for the combined revealed preference/stated preference data–based discrete choice model

    Directory of Open Access Journals (Sweden)

    Yanfu Qiao

    2016-01-01

    Full Text Available The combined revealed preference/stated preference data-based discrete choice model has provided the actual choice-making constraints as well as reduced prediction errors. But the random error variance of alternatives belonging to different data would impact its universality. In this article, we studied the traffic corridor between Chengdu and Longquan with the revealed preference/stated preference joint model, and the single stated preference data model separately predicted the choice probability of each mode. We found the revealed preference/stated preference joint model is universal only when there is a significant difference between the random error terms in different data. The single stated preference data would amplify the travelers' preference and cause prediction error. We proposed a universal way that uses revealed preference data to modify the single stated preference data parameter estimation results to achieve the composite utility and reduce the prediction error. The results suggest that predictions based on the composite utility are more reasonable than those based on the single stated preference data, especially when forecasting the mode share of bus. The future metro line will be the main travel mode in this corridor, and 45% of passenger flow will transfer to the metro.

  10. Diabatic models with transferrable parameters for generalized chemical reactions

    Science.gov (United States)

    Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.

    2017-05-01

    Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

  11. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function ability to distinguish between good and bad models is recommended, before launching exhaustive computations. However, different runs of a parameter search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values are not affecting the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and Least-squares Linear Taylor Differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods are resetting the search direction when it is likely to get stuck in presumably a local minimum. Deterministic methods based on
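
    One common remedy mentioned in this context for the local-minimum problem is a multi-start strategy: restart a local optimizer from many random initial points and keep the best result. The sketch below illustrates this with SciPy on a standard multi-minimum test surface; the objective and all settings are assumptions for illustration, not the compartmental-model fits of the report:

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            # Himmelblau-like surface with several local minima (illustrative test function)
            return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2 + 0.5 * np.sin(3 * x[0])

        rng = np.random.default_rng(7)
        best = None
        for _ in range(30):                               # multi-start local optimisation
            x0 = rng.uniform(-6.0, 6.0, size=2)
            res = minimize(objective, x0, method="Nelder-Mead")
            if best is None or res.fun < best.fun:
                best = res

        print("best minimum found:", np.round(best.x, 3), "value:", round(best.fun, 4))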

  12. Translational research into intertemporal choice: the Western scrub-jay as an animal model for future-thinking.

    Science.gov (United States)

    Thom, James M; Clayton, Nicola S

    2015-03-01

    Decisions often involve outcomes that will not materialise until later, and choices between immediate gratification and future consequences are thought to be important for human health and welfare. Combined human and animal research has identified impulsive intertemporal choice as an important factor in drug-taking and pathological gambling. In this paper, we give an overview of recent research into intertemporal choice in non-human animals, and argue that this work could offer insight into human behaviour through the development of animal models. As an example, we discuss the role of future-thinking in intertemporal choice, and review the case for the Western scrub-jay (Aphelocoma californica) as an animal model of such prospective cognition. This article is part of a Special Issue entitled: Tribute to Tom Zentall. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Piecewise Model and Parameter Obtainment of Governor Actuator in Turbine

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2015-01-01

    Full Text Available The governor actuators in some heat-engine plants have nonlinear valves. This nonlinearity of valves may lead to the inaccuracy of the opening and closing time constants calculated based on the whole segment fully open and fully close experimental test curves of the valve. An improved mathematical model of the turbine governor actuator is proposed to reflect the nonlinearity of the valve, in which the main and auxiliary piecewise opening and closing time constants instead of the fixed oil motive opening and closing time constants are adopted to describe the characteristics of the actuator. The main opening and closing time constants are obtained from the linear segments of the whole fully open and close curves. The parameters of proportional integral derivative (PID controller are identified based on the small disturbance experimental tests of the valve. Then the auxiliary opening and closing time constants and the piecewise opening and closing valve points are determined by the fully open/close experimental tests. Several testing functions are selected to compare genetic algorithm and particle swarm optimization algorithm (GA-PSO with other basic intelligence algorithms. The effectiveness of the piecewise linear model and its parameters are validated by practical power plant case studies.

  14. Consideration sets, intentions and the inclusion of "don't know" in a two-stage model for voter choice

    NARCIS (Netherlands)

    Paap, R; van Nierop, E; van Heerde, HJ; Wedel, M; Franses, PH; Alsem, KJ

    2005-01-01

    We present a statistical model for voter choice that incorporates a consideration set stage and final vote intention stage. The first stage involves a multivariate probit (MVP) model to describe the probabilities that a candidate or a party gets considered. The second stage of the model is a

  15. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTS). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  16. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
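
    The MCMC-Bayesian inversion idea can be conveyed with a toy example: a Metropolis random-walk sampler calibrating a single hypothetical runoff parameter of a one-bucket model against synthetic observations. The model, prior, proposal width and data below are all invented placeholders and bear no relation to CLM4 or the study's setup:

        import numpy as np

        rng = np.random.default_rng(8)

        def toy_runoff(k, rain):
            # Toy model: runoff is a saturating function of rainfall with one parameter k
            return rain**2 / (rain + k)

        rain = rng.gamma(2.0, 5.0, 120)                  # synthetic daily rainfall (mm)
        k_true, sigma = 8.0, 0.5
        obs = toy_runoff(k_true, rain) + rng.normal(0.0, sigma, rain.size)

        def log_post(k):
            if k <= 0.0 or k > 100.0:                    # flat prior on (0, 100]
                return -np.inf
            resid = obs - toy_runoff(k, rain)
            return -0.5 * np.sum(resid**2) / sigma**2

        # Metropolis random-walk sampling of the posterior of k
        chain, k = [], 20.0
        lp = log_post(k)
        for _ in range(5000):
            k_prop = k + rng.normal(0.0, 0.5)
            lp_prop = log_post(k_prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                k, lp = k_prop, lp_prop
            chain.append(k)

        burned = np.array(chain[1000:])
        print(f"posterior mean k = {burned.mean():.2f}, 95% interval = "
              f"({np.percentile(burned, 2.5):.2f}, {np.percentile(burned, 97.5):.2f})")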

  17. Choice Overload, Satisficing Behavior, and Price Distribution in a Time Allocation Model

    Directory of Open Access Journals (Sweden)

    Francisco Álvarez

    2014-01-01

    Full Text Available Recent psychological research indicates that consumers that search exhaustively for the best option of a market product—known as maximizers—eventually feel worse than consumers who just look for something good enough—called satisficers. We formulate a time allocation model to explore the relationship between different distributions of prices of the product and the satisficing behavior and the related welfare of the consumer. We show numerically that, as the number of options becomes large, the maximizing behavior produces less and less welfare and eventually leads to choice paralysis—these are effects of choice overload—whereas satisficing conducts entail higher levels of satisfaction and do not end up in paralysis. For different price distributions, we provide consistent evidence that maximizers are better off for a low number of options, whereas satisficers are better off for a sufficiently large number of options. We also show how the optimal satisficing behavior is affected when the underlying price distribution varies. We provide evidence that the mean and the dispersion of a symmetric distribution of prices—but not the shape of the distribution—condition the satisficing behavior of consumers. We also show that this need not be the case for asymmetric distributions.
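
    A Monte Carlo caricature of the maximizer/satisficer comparison is sketched below: welfare is modelled as the price saving relative to the mean price minus a per-option evaluation cost, under an assumed symmetric price distribution; the welfare definition, threshold and costs are assumptions for illustration only, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(9)

        def simulate(n_options, strategy, trials=20000, eval_cost=0.05, threshold=0.9):
            prices = rng.normal(10.0, 1.0, size=(trials, n_options))   # symmetric price distribution
            if strategy == "maximizer":
                paid = prices.min(axis=1)
                searched = n_options                                   # inspects every option
            else:                                                      # satisficer
                good_enough = prices <= 10.0 - threshold
                first = np.where(good_enough.any(axis=1),
                                 good_enough.argmax(axis=1), n_options - 1)
                paid = prices[np.arange(trials), first]
                searched = first + 1
            # Welfare: saving relative to the mean price minus the cost of options evaluated
            return np.mean(10.0 - paid - eval_cost * searched)

        for n in (5, 20, 100):
            print(n, "options -> maximizer:", round(simulate(n, "maximizer"), 3),
                  " satisficer:", round(simulate(n, "satisficer"), 3))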

  18. Choice of a High-Level Fault Model for the Optimization of Validation Test Set Reused for Manufacturing Test

    Directory of Open Access Journals (Sweden)

    Yves Joannon

    2008-01-01

    Full Text Available With the growing complexity of wireless systems on chip integrating hundreds-of-millions of transistors, electronic design methods need to be upgraded to reduce time-to-market. In this paper, the test benches defined for design validation or characterization of AMS & RF SoCs are optimized and reused for production testing. Although the original validation test set allows the verification of both design functionalities and performances, this test set is not well adapted to manufacturing test due to its high execution time and high test equipment costs requirement. The optimization of this validation test set is based on the evaluation of each test vector. This evaluation relies on high-level fault modeling and fault simulation. Hence, a fault model based on the variations of the parameters of high abstraction level descriptions and its related qualification metric are presented. The choice of functional or behavioral abstraction levels is discussed by comparing their impact on structural fault coverage. Experiments are performed on the receiver part of a WCDMA transceiver. Results show that for this SoC, using behavioral abstraction level is justified for the generation of manufacturing test benches.

  19. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by fitting a second-order polynomial to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are also analyzed. The accuracy of Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over the continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is a bit better than the GPS broadcast Klobuchar model.
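
    In the Galileo algorithm the effective ionization level is reconstructed from the three broadcast coefficients as a second-degree polynomial in modified dip latitude (modip), Az = a0 + a1·μ + a2·μ²; the sketch below fits such coefficients to hypothetical per-station Az estimates with numpy.polyfit (all station values are invented, not GSS data):

        import numpy as np

        # Hypothetical per-station effective ionization levels Az and station modip values
        rng = np.random.default_rng(10)
        modip = rng.uniform(-70.0, 70.0, 40)              # degrees
        az_obs = 60.0 + 0.5 * modip + 0.02 * modip**2 + rng.normal(0.0, 3.0, modip.size)

        # Fit Az(mu) = a0 + a1*mu + a2*mu^2 ; np.polyfit returns the highest degree first
        a2, a1, a0 = np.polyfit(modip, az_obs, deg=2)
        print(f"broadcast-style coefficients: a0={a0:.2f}, a1={a1:.3f}, a2={a2:.4f}")

        # Effective ionization level reconstructed at a user's modip of 30 degrees
        mu_user = 30.0
        print("Az at user:", round(a0 + a1 * mu_user + a2 * mu_user**2, 1))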

  20. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

    We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w0-wa' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.

  1. Choice Model and Influencing Factor Analysis of Travel Mode for Migrant Workers: Case Study in Xi’an, China

    Directory of Open Access Journals (Sweden)

    Hong Chen

    2015-01-01

    Full Text Available Based on the basic theory and methods of disaggregate choice models, the factors influencing travel mode choice for migrant workers are analyzed using 1366 data samples of Xi'an migrant workers. Walking, bus, subway, and taxi are taken as the alternative travel modes, and a multinomial logit (MNL) model of travel mode choice for migrant workers is set up. The validity of the model is verified by the hit rate, and the hit rates of the four travel modes are all greater than 80%. Finally, the influence of the different factors affecting travel mode choice is analyzed in detail, and the elasticity of each factor is examined using elasticity theory. Influencing factors such as age, education level, and monthly gross income have a significant impact on travel mode choice for migrant workers. The elasticity values of education level are greater than 1, indicating that its effect on travel mode choice is elastic, while the elasticity values of gender, industry distribution, and travel purpose are less than 1, indicating that these factors are inelastic with respect to travel mode choice.
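
    In an MNL model the direct point elasticity of the probability of choosing mode i with respect to a continuous attribute x_k is E = β_k · x_k · (1 − P_i). The sketch below evaluates this for hypothetical utilities and an assumed income coefficient; none of the numbers come from the Xi'an study:

        import numpy as np

        # Hypothetical MNL utilities for four modes: walk, bus, subway, taxi
        beta_income = 0.00035                  # assumed income coefficient in the subway utility
        income = 3000.0                        # yuan per month
        v = np.array([0.2,                     # walk
                      0.8,                     # bus
                      0.5 + beta_income * income,   # subway
                      -0.6])                   # taxi
        p = np.exp(v) / np.exp(v).sum()

        # Direct point elasticity of the subway probability with respect to income:
        # E = beta_k * x_k * (1 - P_i) for a multinomial logit model
        elasticity = beta_income * income * (1.0 - p[2])
        print("choice probabilities:", np.round(p, 3))
        print("income elasticity of subway choice:", round(elasticity, 2))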

  2. Bayesian estimation of regularization parameters for deformable surface models

    Energy Technology Data Exchange (ETDEWEB)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-02-20

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.

  3. Application of multi-parameter chorus and plasmaspheric hiss wave models in radiation belt modeling

    Science.gov (United States)

    Aryan, H.; Kang, S. B.; Balikhin, M. A.; Fok, M. C. H.; Agapitov, O. V.; Komar, C. M.; Kanekal, S. G.; Nagai, T.; Sibeck, D. G.

    2017-12-01

    Numerical simulation studies of the Earth's radiation belts are important to understand the acceleration and loss of energetic electrons. The Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model, along with many other radiation belt models, requires inputs for pitch angle, energy, and cross diffusion of electrons due to chorus and plasmaspheric hiss waves. These parameters are calculated using statistical wave distribution models of chorus and plasmaspheric hiss amplitudes. In this study we incorporate recently developed multi-parameter chorus and plasmaspheric hiss wave models based on geomagnetic index and solar wind parameters. We perform CIMI simulations for two geomagnetic storms and compare the flux enhancement of MeV electrons with data from the Van Allen Probes and Akebono satellites. We show that the relativistic electron fluxes calculated with multi-parameter wave models reproduce the observations more accurately than those calculated with single-parameter wave models. This indicates that wave models based on a combination of geomagnetic index and solar wind parameters are more effective as inputs to radiation belt models.

  4. Decisions with Endogenous Preference Parameters (Replaced by CentER DP 2010-142)

    NARCIS (Netherlands)

    Dalton, P.S.; Ghosal, S.

    2010-01-01

    We relate the normative implications of a model of decision-making with endogenous preference parameters to choice theoretic models (Bernheim and Rangel 2007, 2009; Rubinstein and Salant, 2008) in which observed choices are determined by frames or ancillary conditions.

  5. Modeling the choice to switch from fuelwood to electricity. Implications for giant panda habitat conservation

    Energy Technology Data Exchange (ETDEWEB)

    An, Li; Liu, Jianguo; Linderman, Marc A. [Department of Fisheries and Wildlife, Michigan State University, 13 Natural Resources Building, 48824 East Lansing, MI (United States); Lupi, Frank [Departments of Agricultural Economics and Fisheries and Wildlife, Michigan State University, 213F Agriculture Hall, 48824 East Lansing, MI (United States); Huang, Jinyan [Wolong Nature Reserve Administration, Wenchuan County, 623002 Sichuan Province (China)

    2002-09-01

    Despite its status as a nature reserve, Wolong Nature Reserve (China) has experienced continued loss of giant panda habitat due to human activities such as fuelwood collection. Electricity, though available throughout Wolong, has not replaced fuelwood as an energy source. We used stated preference data obtained from in-person interviews to estimate a random utility model of the choice of adopting electricity for cooking and heating. Willingness to switch to electricity was explained by demographic and electricity factors (price, voltage, and outage frequency). In addition to price, non-price factors such as voltage and outage frequency significantly affect the demand. Thus, lowering electricity prices and increasing electricity quality would encourage local residents to switch from fuelwood to electricity and should be considered in the mix of policies to promote conservation of panda habitat.

  6. Modeling the choice to switch from fuelwood to electricity. Implications for giant panda habitat conservation

    International Nuclear Information System (INIS)

    An, Li; Liu, Jianguo; Linderman, Marc A.; Lupi, Frank; Huang, Jinyan

    2002-01-01

    Despite its status as a nature reserve, Wolong Nature Reserve (China) has experienced continued loss of giant panda habitat due to human activities such as fuelwood collection. Electricity, though available throughout Wolong, has not replaced fuelwood as an energy source. We used stated preference data obtained from in-person interviews to estimate a random utility model of the choice of adopting electricity for cooking and heating. Willingness to switch to electricity was explained by demographic and electricity factors (price, voltage, and outage frequency). In addition to price, non-price factors such as voltage and outage frequency significantly affect the demand. Thus, lowering electricity prices and increasing electricity quality would encourage local residents to switch from fuelwood to electricity and should be considered in the mix of policies to promote conservation of panda habitat.

  7. Functional forms and price elasticities in a discrete continuous choice model of the residential water demand

    Science.gov (United States)

    Vásquez Lavín, F. A.; Hernandez, J. I.; Ponce, R. D.; Orrego, S. A.

    2017-07-01

    During recent decades, water demand estimation has gained considerable attention from scholars. From an econometric perspective, the most used functional forms include log-log and linear specifications. Despite the advances in this field and the relevance for policymaking, little attention has been paid to the functional forms used in these estimations, and most authors have not provided justifications for their selection of functional forms. A discrete continuous choice model of the residential water demand is estimated using six functional forms (log-log, full-log, log-quadratic, semilog, linear, and Stone-Geary), and the expected consumption and price elasticity are evaluated. From a policy perspective, our results highlight the relevance of functional form selection for both the expected consumption and price elasticity.
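
    The practical point about functional forms can be made with a small sketch: in a log-log demand specification the price coefficient is itself the (constant) elasticity, whereas in a linear specification the point elasticity depends on the evaluation point. The coefficients and evaluation point below are illustrative, not the paper's estimates.

      import numpy as np

      # Illustrative fitted coefficients (not the paper's estimates)
      b_loglog = -0.35     # d ln(q) / d ln(p): the elasticity itself
      b_linear = -1.2      # d q / d p, in consumption units per price unit

      def elasticity_loglog(price, quantity):
          # In a log-log demand model the price coefficient is the elasticity
          return b_loglog

      def elasticity_linear(price, quantity):
          # In a linear model the point elasticity varies with the evaluation point
          return b_linear * price / quantity

      p, q = 2.5, 18.0     # evaluation point: price and monthly consumption (illustrative)
      print("log-log elasticity:", elasticity_loglog(p, q))
      print("linear elasticity :", elasticity_linear(p, q))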

  8. Physical microscopic free-choice model in the framework of a Darwinian approach to quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Baladron, Carlos [Departamento de Fisica Teorica, Atomica y Optica, Universidad de Valladolid, E-47011, Valladolid (Spain)

    2017-06-15

    A compatibilistic model of free choice for a fundamental particle is built within a general framework that explores the possibility that quantum mechanics be the emergent result of generalised Darwinian evolution acting on the abstract landscape of possible physical theories. The central element in this approach is a probabilistic classical Turing machine -basically an information processor plus a randomiser- methodologically associated with every fundamental particle. In this scheme every system acts not under a general law, but as a consequence of the command of a particular, evolved algorithm. This evolved programme enables the particle to algorithmically anticipate possible future world configurations in information space, and as a consequence, without altering the natural forward causal order in physical space, to incorporate elements to the decision making procedure that are neither purely random nor strictly in the past, but in a possible future.

  9. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    Science.gov (United States)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS on a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
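
    A minimal sketch of the second step described above (Monte Carlo sampling combined with Partial Ranked Correlation Coefficients) is given below, using a toy stand-in for the land-surface model; parameter names, ranges, and the toy response are illustrative only.

      import numpy as np
      from scipy.stats import rankdata

      rng = np.random.default_rng(1)

      # Toy stand-in for the land-surface model: NPP as a function of three parameters
      def toy_model(theta):
          t_opt, vcmax, k_ext = theta
          return 0.8 * vcmax * np.exp(-((t_opt - 25.0) / 10.0) ** 2) * (1 - np.exp(-k_ext))

      # Monte Carlo sample of the parameter space (uniform ranges are illustrative)
      n = 2000
      samples = np.column_stack([
          rng.uniform(15, 35, n),    # optimal temperature of photosynthesis
          rng.uniform(30, 90, n),    # optimal carboxylation rate
          rng.uniform(0.3, 0.8, n),  # extinction coefficient
      ])
      output = np.array([toy_model(s) for s in samples])

      def prcc(X, y):
          """Partial rank correlation coefficient of each column of X with y."""
          R = np.column_stack([rankdata(col) for col in X.T])
          ry = rankdata(y)
          coeffs = []
          for j in range(R.shape[1]):
              others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
              # residuals after removing the (rank) effect of the other parameters
              res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
              res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
              coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
          return np.array(coeffs)

      print("PRCC per parameter:", prcc(samples, output))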

  10. The electronic disability record: purpose, parameters, and model use case.

    Science.gov (United States)

    Tulu, Bengisu; Horan, Thomas A

    2009-01-01

    The active engagement of consumers is an important factor in achieving widespread success of health information systems. The disability community represents a major segment of the healthcare arena, with more than 50 million Americans experiencing some form of disability. In keeping with the "consumer-driven" approach to e-health systems, this paper considers the distinctive aspects of electronic and personal health record use by this segment of society. Drawing upon the information shared during two national policy forums on this topic, the authors present the concept of Electronic Disability Records (EDR). The authors outline the purpose and parameters of such records, with specific attention to its ability to organize health and financial data in a manner that can be used to expedite the disability determination process. In doing so, the authors discuss its interaction with Electronic Health Records (EHR) and Personal Health Records (PHR). The authors then draw upon these general parameters to outline a model use case for disability determination and discuss related implications for disability health management. The paper further reports on the subsequent considerations of these and related deliberations by the American Health Information Community (AHIC).

  11. Choice Model and Influencing Factor Analysis of Travel Mode for Migrant Workers: Case Study in Xi’an, China

    OpenAIRE

    Chen, Hong; Gan, Zuo-xian; He, Yu-ting

    2015-01-01

    Based on the basic theory and methods of disaggregate choice models, the factors influencing travel mode choice for migrant workers are analyzed, using 1366 data samples from Xi’an migrant workers. Walking, bus, subway, and taxi are taken as the alternatives in the travel mode choice set, and a multinomial logit (MNL) model of travel mode choice for migrant workers is set up. The validity of the model is verified by the hit rate, and the hit rates of the four travel modes are all great...
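
    A minimal sketch of how such a multinomial logit assigns choice probabilities to the four modes is shown below; the utility coefficients and trip attributes are illustrative placeholders, not the estimates obtained from the Xi'an sample.

      import numpy as np

      modes = ["walking", "bus", "subway", "taxi"]

      # Illustrative utility coefficients (intercept, travel time, cost) per mode;
      # these are not the paper's estimates.
      beta = {
          "walking": np.array([0.0, -0.08, 0.0]),
          "bus":     np.array([0.5, -0.05, -0.10]),
          "subway":  np.array([0.3, -0.04, -0.08]),
          "taxi":    np.array([-1.0, -0.03, -0.05]),
      }

      def mnl_probabilities(attrs):
          """attrs: dict mode -> [1, time, cost]; returns MNL choice probabilities."""
          v = np.array([beta[m] @ attrs[m] for m in modes])
          expv = np.exp(v - v.max())      # subtract the max for numerical stability
          return dict(zip(modes, expv / expv.sum()))

      trip = {
          "walking": np.array([1.0, 45.0, 0.0]),
          "bus":     np.array([1.0, 30.0, 2.0]),
          "subway":  np.array([1.0, 20.0, 4.0]),
          "taxi":    np.array([1.0, 15.0, 25.0]),
      }
      probs = mnl_probabilities(trip)
      print(probs, "predicted:", max(probs, key=probs.get))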

  12. The S-parameter in Holographic Technicolor Models

    CERN Document Server

    Agashe, Kaustubh; Grojean, Christophe; Reece, Matthew

    2007-01-01

    We study the S parameter, considering especially its sign, in models of electroweak symmetry breaking (EWSB) in extra dimensions, with fermions localized near the UV brane. Such models are conjectured to be dual to 4D strong dynamics triggering EWSB. The motivation for such a study is that a negative value of S can significantly ameliorate the constraints from electroweak precision data on these models, allowing lower mass scales (TeV or below) for the new particles and leading to easier discovery at the LHC. We first extend an earlier proof of S>0 for EWSB by boundary conditions in arbitrary metric to the case of general kinetic functions for the gauge fields or arbitrary kinetic mixing. We then consider EWSB in the bulk by a Higgs VEV showing that S is positive for arbitrary metric and Higgs profile, assuming that the effects from higher-dimensional operators in the 5D theory are sub-leading and can therefore be neglected. For the specific case of AdS_5 with a power law Higgs profile, we also show that S ~ ...

  13. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    Science.gov (United States)

    Zhao, Song-Feng; Huang, Fang; Wang, Guo-Li; Zhou, Xiao-Xin

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using the density functional theory. We examine the alignment-dependent tunneling ionization probabilities from MO-ADK model for several molecules by comparing with the molecular strong-field approximation (MO-SFA) calculations. We show the molecular Perelomov–Popov–Terent'ev (MO-PPT) can successfully give the laser wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbital with antibonding systems (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  14. Sound propagation and absorption in foam - A distributed parameter model.

    Science.gov (United States)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
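
    The transmission-line relations invoked by the analogy can be written down compactly: for per-unit-length series resistance R, inductance L, shunt conductance G, and capacitance C, the characteristic impedance and propagation constant follow from the series impedance and shunt admittance. The sketch below evaluates them for illustrative constants, not measured foam properties.

      import numpy as np

      def line_parameters(R, L, G, C, f):
          """Characteristic impedance Z0 and propagation constant gamma of a
          distributed-parameter (transmission) line at frequency f in Hz."""
          w = 2.0 * np.pi * f
          series = R + 1j * w * L          # series impedance per unit length
          shunt = G + 1j * w * C           # shunt admittance per unit length
          Z0 = np.sqrt(series / shunt)
          gamma = np.sqrt(series * shunt)  # gamma = alpha + j*beta
          return Z0, gamma

      # Illustrative per-unit-length constants (not measured foam values)
      Z0, gamma = line_parameters(R=0.5, L=2e-3, G=1e-4, C=5e-6, f=1000.0)
      print("Z0 =", Z0, "attenuation alpha =", gamma.real, "phase beta =", gamma.imag)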

  15. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
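
    As a hedged illustration of the Bayesian model calibration step discussed above, the sketch below runs a random-walk Metropolis sampler on a toy two-parameter exponential model with synthetic data; it is not the HIV or heat model of the dissertation, and the prior and noise level are assumptions of the sketch.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy model and synthetic observations (illustrative stand-in)
      t = np.linspace(0, 10, 25)
      def model(theta):
          a, k = theta
          return a * np.exp(-k * t)
      theta_true = np.array([5.0, 0.4])
      data = model(theta_true) + rng.normal(0, 0.2, t.size)

      def log_post(theta):
          if np.any(theta <= 0):
              return -np.inf                       # flat prior on positive parameters
          resid = data - model(theta)
          return -0.5 * np.sum(resid**2) / 0.2**2  # Gaussian likelihood, known noise

      # Random-walk Metropolis sampler
      theta = np.array([1.0, 1.0])
      chain, lp = [], log_post(theta)
      for _ in range(20000):
          prop = theta + rng.normal(0, 0.05, 2)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      chain = np.array(chain[5000:])               # discard burn-in
      print("posterior means:", chain.mean(axis=0))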

  16. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    Directory of Open Access Journals (Sweden)

    Oldiges Marco

    2009-01-01

    Background: To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results: We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion: A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics
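
    For the parameter calibration step, a minimal example of fitting one of the rate laws named above (Michaelis-Menten) to synthetic rate measurements with nonlinear least squares is sketched below; it is not the valine/leucine network model, and the substrate range, noise, and true parameters are assumed.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)

      def michaelis_menten(s, vmax, km):
          # v = Vmax * S / (Km + S)
          return vmax * s / (km + s)

      # Synthetic substrate concentrations and noisy rate measurements (illustrative)
      s = np.linspace(0.1, 10.0, 15)
      v = michaelis_menten(s, vmax=2.5, km=1.2) + rng.normal(0, 0.05, s.size)

      popt, pcov = curve_fit(michaelis_menten, s, v, p0=[1.0, 1.0])
      perr = np.sqrt(np.diag(pcov))
      print("Vmax, Km:", popt, "+/-", perr)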

  17. Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters

    Directory of Open Access Journals (Sweden)

    Adeniyi Ganiyu Adeogun

    2015-10-01

    This paper presents the outcome of our investigation into the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed by coupling an existing 1D sewer network model (SWMM) and a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as mesh resolution, Digital Elevation Model (DEM) resolution, and roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning friction value, the DEM resolution, and the area of the triangular mesh. Changes in the aforementioned modeling parameters also influence the flood characteristics, such as the inundation extent, the flow depth, and the velocity across the model domain.

  18. Modeling the outflow of liquid with initial supercritical parameters using the relaxation model for condensation

    Directory of Open Access Journals (Sweden)

    Lezhnin Sergey

    2017-01-01

    A two-temperature model of the outflow from a vessel with initially supercritical parameters of the medium has been implemented. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.

  19. Hybrid choice model to disentangle the effect of awareness from attitudes: Application test of soft measures in medium size city

    DEFF Research Database (Denmark)

    Sottile, Eleonora; Meloni, Italo; Cherchi, Elisabetta

    2017-01-01

    The need to reduce private vehicle use has led to the development of soft measures aimed at re-educating car users through information processes that raise their awareness about the benefits of environmentally friendly modes, encouraging them to voluntarily change their travel choice behaviour ... (level of services characteristics being equal). It has been observed that these measures can produce enduring changes, being the result of mindful decisions. It is important then to try and understand what contributes to shape individuals’ preferences in order to be able to define the best policy ... carried out with the purpose of promoting the use of the light rail in Park and Ride mode. To account for all these effects in the choice between car and Park and Ride we estimate a Hybrid Choice Model where the discrete choice structure allows us to estimate the effect of awareness of environment ...

  20. Using metro smart card data to model location choice of after-work activities: An application to Shanghai

    NARCIS (Netherlands)

    Wang, Y.; Correia, G.H.D.A.; Romph, E. de; Timmermans, H.J.P.H.

    2017-01-01

    A location choice model explains how travellers choose their trip destinations especially for those activities which are flexible in space and time. The model is usually estimated using travel survey data; however, little is known about how to use smart card data (SCD) for this purpose in a public

  1. Patterns of Reinforcement and the Essential Value of Brands: II. Evaluation of a Model of Consumer Choice

    Science.gov (United States)

    Yan, Ji; Foxall, Gordon R.; Doyle, John R.

    2012-01-01

    We employ a behavioral-economic equation put forward by Hursh and Silberberg (2008) to explain human consumption behavior among substitutable food brands, applying a consumer-choice model--the behavioral perspective model (BPM; Foxall, 1990/2004, 2005). In this study, we apply the behavioral-economic equation to human economic consumption data. We…
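
    The behavioral-economic equation of Hursh and Silberberg (2008) is commonly written as an exponential demand function; a sketch of that form, under the assumption that this is the equation meant, is given below with purely illustrative parameter values.

      import numpy as np

      def log_consumption(price, q0, alpha, k):
          """Exponential demand equation of Hursh and Silberberg (2008), as commonly written:
          log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1),
          where C is price, Q0 is demand at zero price, and alpha indexes essential value."""
          return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

      # Illustrative parameter values, not estimates for any actual brand
      prices = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      print(10 ** log_consumption(prices, q0=100.0, alpha=0.002, k=2.0))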

  2. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    NARCIS (Netherlands)

    B.M. Craig (Benjamin); J.J. van Busschbach (Jan)

    2009-01-01

    BACKGROUND: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. METHODS: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common

  3. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    Science.gov (United States)

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a 1D hydraulic model for the river Inn and the hydrological models HQsim (a rainfall-runoff model) and SES (a snow and ice melt model) for modeling the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. This paper focuses on the hydrological modeling of the 49 connected non-glaciated catchments realized with the software HQsim. In the course of model calibration, identifying the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters were identified as sensitive for calibration aimed at a well-calibrated model for flood conditions. In addition, two parameters were identified that are sensitive in situations where the snow line plays an important role.

  4. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    Science.gov (United States)

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraint improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
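
    The data-generating process at issue can be sketched directly: in a dual change score model the latent change at each wave is the sum of a person-specific constant-change (slope) component and an autoproportion term on the previous latent score. The simulation below is a minimal illustration with assumed parameter values, not the Monte Carlo design of the study.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_dual_change(n_people=500, n_waves=6, beta=-0.15,
                               slope_mean=1.0, slope_sd=0.3,
                               intercept_mean=10.0, intercept_sd=1.5, resid_sd=0.5):
          """Simulate a dual change score process:
          latent[t] = latent[t-1] + slope_i + beta * latent[t-1]; observed = latent + noise."""
          slope = rng.normal(slope_mean, slope_sd, n_people)   # constant-change factor
          latent = rng.normal(intercept_mean, intercept_sd, n_people)
          observed = np.empty((n_people, n_waves))
          observed[:, 0] = latent + rng.normal(0, resid_sd, n_people)
          for t in range(1, n_waves):
              latent = latent + slope + beta * latent          # beta = autoproportion
              observed[:, t] = latent + rng.normal(0, resid_sd, n_people)
          return observed

      y = simulate_dual_change()
      print("mean trajectory:", y.mean(axis=0).round(2))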

  5. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
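
    For orientation, the sketch below evaluates the ordinary molecular (non-electrolyte) two-parameter NRTL activity coefficients for a binary mixture; it only illustrates the local-composition form underlying the electrolyte NRTL model, and the interaction parameters are illustrative rather than the fitted ICPP values.

      import numpy as np

      def nrtl_binary(x1, tau12, tau21, alpha=0.3):
          """Activity coefficients from the molecular (non-electrolyte) binary NRTL model;
          shown only to illustrate the local-composition form, not the electrolyte NRTL."""
          x2 = 1.0 - x1
          G12 = np.exp(-alpha * tau12)
          G21 = np.exp(-alpha * tau21)
          ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                           + tau12 * G12 / (x2 + x1 * G12)**2)
          ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                           + tau21 * G21 / (x1 + x2 * G21)**2)
          return np.exp(ln_g1), np.exp(ln_g2)

      # Illustrative interaction parameters (not fitted ICPP values)
      print(nrtl_binary(x1=0.4, tau12=1.2, tau21=0.8))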

  6. Testing for parameter instability across different modeling frameworks

    NARCIS (Netherlands)

    Calvori, Francesco; Creal, Drew; Koopman, Siem Jan; Lucas, André

    2017-01-01

    We develop a new parameter instability test that generalizes the seminal ARCH Lagrange Multiplier test of Engle (1982) for a constant variance against the alternative of autoregressive conditional heteroskedasticity to settings with nonlinear time-varying parameters and non-Gaussian distributions. We
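
    The test being generalized, Engle's ARCH Lagrange Multiplier test, can be sketched in a few lines: regress squared residuals on q of their own lags and compare T times R-squared with a chi-squared distribution with q degrees of freedom. The implementation below is a minimal illustration, not the generalized test proposed in the paper.

      import numpy as np
      from scipy.stats import chi2

      def arch_lm_test(residuals, q=4):
          """Engle's (1982) ARCH Lagrange Multiplier test: regress squared residuals on
          q of their own lags; under the null of constant variance, T*R^2 ~ chi2(q)."""
          e2 = np.asarray(residuals) ** 2
          y = e2[q:]
          X = np.column_stack([np.ones(y.size)] + [e2[q - j - 1:-j - 1] for j in range(q)])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          fitted = X @ beta
          r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
          stat = y.size * r2
          return stat, 1.0 - chi2.cdf(stat, df=q)

      rng = np.random.default_rng(5)
      print(arch_lm_test(rng.normal(size=500)))   # homoskedastic noise: large p-value expected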

  7. Increasing reach by offering choices: Results from an innovative model for statewide services for smoking cessation.

    Science.gov (United States)

    Keller, Paula A; Schillo, Barbara A; Kerr, Amy N; Lien, Rebecca K; Saul, Jessie; Dreher, Marietta; Lachter, Randi B

    2016-10-01

    Although state quitlines provide free telephone counseling and often include nicotine replacement therapy (NRT), reach remains limited (1-2% in most states). More needs to be done to engage all smokers in the quitting process. A possible strategy is to offer choices of cessation services through quitlines and to reduce registration barriers. In March 2014, ClearWay Minnesota implemented a new model for QUITPLAN® Services, the state's population-wide cessation services. Tobacco users could choose the QUITPLAN® Helpline or one or more Individual QUITPLAN® Services (NRT starter kit, text messaging, email program, or quit guide). The program website was redesigned, online enrollment was added, and a new advertising campaign was created and launched. In 2014-2015, we evaluated whether these changes increased reach. We also assessed quit attempts, quit outcomes, predictors of 30-day abstinence, and average cost per quit via a seven-month follow-up survey. Between March 2014 and February 2015, 15,861 unique tobacco users registered, which was a 169% increase over calendar year 2013. The majority of participants made a quit attempt (83.7%). Thirty-day point prevalence abstinence rates (responder rates) were 26.1% for QUITPLAN Services overall, 29.6% for the QUITPLAN Helpline, and 25.5% for Individual QUITPLAN Services. Several variables predicted quit outcomes, including receiving only one call from the Helpline and using both the Helpline and the NRT starter kit. Providing greater choice of cessation services and reducing registration barriers have the potential to engage more tobacco users, foster more quit attempts, and ultimately lead to long-term cessation and reductions in prevalence.

  8. Statistical osteoporosis models using composite finite elements: a parameter study.

    Science.gov (United States)

    Wolfram, Uwe; Schwen, Lars Ole; Simon, Ulrich; Rumpf, Martin; Wilke, Hans-Joachim

    2009-09-18

    Osteoporosis is a widely spread disease with severe consequences for patients and high costs for health care systems. The disease is characterised by a loss of bone mass which induces a loss of mechanical performance and structural integrity. It was found that transverse trabeculae are thinned and perforated while vertical trabeculae stay intact. For understanding these phenomena and the mechanisms leading to fractures of trabecular bone due to osteoporosis, numerous researchers employ micro-finite element models. To avoid disadvantages in setting up classical finite element models, composite finite elements (CFE) can be used. The aim of the study is to test the potential of CFE. For that, a parameter study on numerical lattice samples with statistically simulated, simplified osteoporosis is performed. These samples are subjected to compression and shear loading. Results show that the biggest drop of compressive stiffness is reached for transverse isotropic structures losing 32% of the trabeculae (minus 89.8% stiffness). The biggest drop in shear stiffness is found for an isotropic structure also losing 32% of the trabeculae (minus 67.3% stiffness). The study indicates that losing trabeculae leads to a worse drop of macroscopic stiffness than thinning of trabeculae. The results further demonstrate the advantages of CFEs for simulating micro-structured samples.

  9. Cost-effective choices of marine fuels in a carbon-constrained world: results from a global energy model.

    Science.gov (United States)

    Taljegard, Maria; Brynolf, Selma; Grahn, Maria; Andersson, Karin; Johnson, Hannes

    2014-11-04

    The regionalized Global Energy Transition model has been modified to include a more detailed shipping sector in order to assess what marine fuels and propulsion technologies might be cost-effective by 2050 when achieving an atmospheric CO2 concentration of 400 or 500 ppm by the year 2100. The robustness of the results was examined in a Monte Carlo analysis, varying uncertain parameters and technology options, including the amount of primary energy resources, the availability of carbon capture and storage (CCS) technologies, and costs of different technologies and fuels. The four main findings are (i) it is cost-effective to start the phase out of fuel oil from the shipping sector in the next decade; (ii) natural gas-based fuels (liquefied natural gas and methanol) are the most probable substitutes during the study period; (iii) availability of CCS, the CO2 target, the liquefied natural gas tank cost and potential oil resources affect marine fuel choices significantly; and (iv) biofuels rarely play a major role in the shipping sector, due to limited supply and competition for bioenergy from other energy sectors.

  10. Diffusion model for one-choice reaction-time tasks and the cognitive effects of sleep deprivation.

    Science.gov (United States)

    Ratcliff, Roger; Van Dongen, Hans P A

    2011-07-05

    One-choice reaction-time (RT) tasks are used in many domains, including assessments of motor vehicle driving and assessments of the cognitive/behavioral consequences of sleep deprivation. In such tasks, subjects are asked to respond when they detect the onset of a stimulus; the dependent variable is RT. We present a cognitive model for one-choice RT tasks that uses a one-boundary diffusion process to represent the accumulation of stimulus information. When the accumulated evidence reaches a decision criterion, a response is initiated. This model is distinct in accounting for the RT distributions observed for one-choice RT tasks, which can have long tails that have not been accurately captured by earlier cognitive modeling approaches. We show that the model explains performance on a brightness-detection task (a "simple RT task") and on a psychomotor vigilance test. The latter is used extensively to examine the clinical and behavioral effects of sleep deprivation. For the brightness-detection task, the model explains the behavior of RT distributions as a function of brightness. For the psychomotor vigilance test, it accounts for lapses in performance under conditions of sleep deprivation and for changes in the shapes of RT distributions over the course of sleep deprivation. The model also successfully maps the rate of accumulation of stimulus information onto independently derived predictions of alertness. The model is a unified, mechanistic account of one-choice RT under conditions of sleep deprivation.
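
    A minimal simulation of the single-boundary diffusion idea is sketched below: evidence accumulates with constant drift and Gaussian noise until it crosses one decision criterion, and a non-decision time is added to the first-passage time. Parameter values are illustrative and the Euler discretization is an assumption of the sketch, not the fitting method of the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def simulate_one_boundary_rt(n_trials=2000, drift=1.2, boundary=1.0,
                                   nondecision=0.3, sigma=1.0, dt=0.001, t_max=10.0):
          """First-passage times of a single-boundary (Wiener) diffusion process plus a
          non-decision time -- a minimal stand-in for the one-choice diffusion model."""
          rts = np.full(n_trials, np.nan)
          n_steps = int(t_max / dt)
          for i in range(n_trials):
              x = 0.0
              for step in range(n_steps):
                  x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
                  if x >= boundary:
                      rts[i] = nondecision + (step + 1) * dt
                      break
          return rts[~np.isnan(rts)]   # trials that never hit the boundary are dropped

      rts = simulate_one_boundary_rt()
      print("median RT:", np.median(rts), " 99th percentile:", np.percentile(rts, 99))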

  11. The agony of choice: different empirical mortality models lead to sharply different future forest dynamics.

    Science.gov (United States)

    Bircher, Nicolas; Cailleret, Maxime; Bugmann, Harald

    2015-07-01

    Dynamic models are pivotal for projecting forest dynamics in a changing climate, from the local to the global scale. They encapsulate the processes of tree population dynamics with varying resolution. Yet, almost invariably, tree mortality is modeled based on simple, theoretical assumptions that lack a physiological and/or empirical basis. Although this has been widely criticized and a growing number of empirically derived alternatives are available, they have not been tested systematically in models of forest dynamics. We implemented an inventory-based and a tree-ring-based mortality routine in the forest gap model ForClim v3.0. We combined these routines with a stochastic and a deterministic approach for the determination of tree status (alive vs. dead). We tested the four new model versions for two Norway spruce forests in the Swiss Alps, one of which was managed (inventory time series spanning 72 years) and the other was unmanaged (41 years). Furthermore, we ran long-term simulations (~400 years) into the future under three climate scenarios to test model behavior under changing environmental conditions. The tests against inventory data showed an excellent match of simulated basal area and stem numbers at the managed site and a fair agreement at the unmanaged site for three of the four empirical mortality models, thus rendering the choice of one particular model difficult. However, long-term simulations under current climate revealed very different behavior of the mortality models in terms of simulated changes of basal area and stem numbers, both in timing and magnitude, thus indicating high sensitivity of simulated forest dynamics to assumptions on tree mortality. Our results underpin the potential of using empirical mortality routines in forest gap models. However, further tests are needed that span other climatic conditions and mixed forests. Short-term simulations to benchmark model behavior against empirical data are insufficient; long-term tests are

  12. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets to the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving a mean dose (D̄) or EUD higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)
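
    For reference, the LKB NTCP model referred to above is conventionally computed from a dose-volume histogram as sketched below; the DVH bins and the parameter set (TD50, m, n) used here are purely illustrative and are not the published sets evaluated in the study.

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(doses, volumes, td50, m, n):
          """Lyman-Kutcher-Burman NTCP from a differential DVH:
          EUD = (sum_i v_i * D_i**(1/n))**n,  t = (EUD - TD50) / (m * TD50),  NTCP = Phi(t)."""
          v = np.asarray(volumes) / np.sum(volumes)   # normalise to fractional volumes
          eud = np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n
          t = (eud - td50) / (m * td50)
          return norm.cdf(t)

      # Illustrative DVH bins and parameter values (not those evaluated in the paper)
      doses = [2.0, 8.0, 15.0, 25.0]       # Gy
      volumes = [0.50, 0.25, 0.15, 0.10]   # fractional lung volume per dose bin
      print(lkb_ntcp(doses, volumes, td50=30.0, m=0.37, n=1.0))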

  13. Psychological interpretation of the ex-Gaussian and shifted Wald parameters: a diffusion model analysis

    NARCIS (Netherlands)

    Matzke, D.; Wagenmakers, E.-J.

    2009-01-01

    A growing number of researchers use descriptive distributions such as the ex-Gaussian and the shifted Wald to summarize response time data for speeded two-choice tasks. Some of these researchers also assume that the parameters of these distributions uniquely correspond to specific cognitive

  14. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available

  15. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
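
    The simple numerical approach to Bayes' theorem mentioned in these two records can be illustrated with a grid-based prior-to-posterior update for a single uncertain parameter; the prior, synthetic measurements, and noise level below are assumed for illustration only.

      import numpy as np

      rng = np.random.default_rng(7)

      # Grid-based Bayesian updating of a single uncertain parameter (illustrative)
      grid = np.linspace(-3.0, 3.0, 601)              # candidate parameter values
      dx = grid[1] - grid[0]
      prior = np.exp(-0.5 * (grid / 1.5) ** 2)        # broad Gaussian prior, unnormalised
      prior /= prior.sum() * dx

      data = rng.normal(loc=0.8, scale=0.5, size=10)  # new measurements (synthetic)
      sigma = 0.5                                     # assumed known measurement spread

      # Likelihood of the data on the grid, then posterior via Bayes' theorem
      log_like = np.array([np.sum(-0.5 * ((data - g) / sigma) ** 2) for g in grid])
      post = prior * np.exp(log_like - log_like.max())
      post /= post.sum() * dx

      print("posterior mean:", (grid * post).sum() * dx)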

  16. MATHEMATICAL MODELING OF FLOW PARAMETERS FOR SINGLE WIND TURBINE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    It is known that the construction of several large wind farms is planned on the territory of the Russian Federation. Tasks connected with the design and efficiency evaluation of wind farms are therefore in demand. One possible direction in design is connected with mathematical modeling. The large eddy simulation method, developed within computational fluid dynamics, allows the unsteady structure of the flow to be reproduced in detail and various integrated values to be determined. In this work, the operation of a single wind turbine installation is calculated by means of large eddy simulation combined with the actuator line method along the turbine blade. The computational domain was defined in the form of a box and an adapted unstructured grid was used. The mathematical model included the continuity and momentum equations for incompressible fluid. The large-scale vortex structures were calculated by means of integration of the filtered equations. The calculation was carried out with the Smagorinsky model for determination of the subgrid-scale turbulent viscosity. The geometrical parameters of the wind turbine were set proceeding from open sources on the Internet. All physical values were defined at the center of each computational cell. The terms in the equations were approximated with second-order accuracy in time and space. The equations coupling velocity and pressure were solved by means of the iterative algorithm PIMPLE. The total number of physical values calculated on each time step was equal to 18, so the resources of a high-performance cluster were required. As a result of the flow calculation in the wake of the three-bladed turbine, average and instantaneous values of velocity, pressure, subgrid kinetic energy and turbulent viscosity, and components of the subgrid stress tensor were obtained. The received results matched the known results of experiments and numerical simulation, testifying to the opportunity

  17. Multi-source localization in MEG using simulated annealing: model order determination and parameter accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Huang, M.; Supek, S.; Aine, C.

    1996-06-01

    Empirical neuromagnetic studies have reported that multiple brain regions are active at single instants in time as well as across time intervals of interest. Determining the number of active regions, however, required a systematic search across increasing model orders using reduced chi-square measure of goodness-of-fit and multiple starting points within each model order assumed. Simulated annealing was recently proposed for noiseless biomagnetic data as an effective global minimizer. A modified cost function was also proposed to effectively deal with an unknown number of dipoles for noiseless, multi-source biomagnetic data. Numerical simulation studies were conducted using simulated annealing to examine effects of a systematic increase in model order using both reduced chi-square as a cost function as well as a modified cost function, and effects of overmodeling on parameter estimation accuracy. Effects of different choices of weighting factors are also discussed. Simulated annealing was also applied to visually evoked neuromagnetic data and the effectiveness of both cost functions in determining the number of active regions was demonstrated.
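
    A generic simulated annealing minimizer of the kind applied above can be sketched in a few lines: propose a random perturbation, always accept downhill moves, accept uphill moves with probability exp(-delta/T), and cool the temperature geometrically. The cost function below is a toy surrogate, not a neuromagnetic dipole forward model.

      import numpy as np

      rng = np.random.default_rng(8)

      def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.999, n_iter=20000):
          """Generic simulated annealing minimiser: accept uphill moves with probability
          exp(-delta/T) and cool the temperature geometrically."""
          x, c = np.array(x0, float), cost(x0)
          best_x, best_c = x.copy(), c
          T = t0
          for _ in range(n_iter):
              cand = x + rng.normal(0, step, x.size)
              c_new = cost(cand)
              if c_new < c or rng.uniform() < np.exp(-(c_new - c) / T):
                  x, c = cand, c_new
                  if c < best_c:
                      best_x, best_c = x.copy(), c
              T *= cooling
          return best_x, best_c

      # Toy cost with two well-separated minima (illustrative, not a dipole model)
      cost = lambda p: (p[0] ** 2 - 1.0) ** 2 + 0.5 * (p[1] - 2.0) ** 2
      print(simulated_annealing(cost, x0=[3.0, -3.0]))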

  18. Impact of parameter representation in gas-particle partitioning on aerosol yield model prediction

    Science.gov (United States)

    Kelly, Janya L.

    A kinetic box model is used to highlight the importance of parameter representation in predicting the formation of secondary organic aerosol (SOA) from the photo-oxidation of toluene through a subset of the University of Leeds Master Chemical Mechanism (MCM) version 3.1, and a kinetically based gas-particle partitioning approach. The model provides a prediction of the total aerosol yield and a tentative speciation of aerosols initialized from experimental data from York University's indoor smog chamber. A series of model sensitivity experiments were performed to study the relative importance of different parameters in SOA formation, with emphasis on vapour pressure, accommodation coefficient and NOx conditions. Early sensitivity experiments indicate vapour pressure to be a critical parameter in the partitioning and final aerosol yield. Current estimation methods are highly sensitive to boiling point temperature and can lead to the propagation of errors in the model. Of concern is the estimation of vapour pressure for compounds containing organic nitrates (major contributors to the aerosol speciation in this study). Results indicate that approximately +/- 80% error can be expected in the final aerosol mass from errors in the boiling point temperature and vapour pressure estimation methods, and, that for most experiments, this error alone cannot account for a general under prediction in the aerosol mass. Current experimental conditions dictate a very high initial NOx environment and a much higher final aerosol yield compared to other smog chamber studies, leading to the question of whether the model results arise from unique experimental conditions (relative to other chambers), from using different pathways in MCMv3.1 leading to different aerosol speciation (from the high NOx conditions), or from the physical representation of partitioning in the model. Results show that the choice of isopropyl nitrite as the hydroxyl radical oxidation source may be contributing to

  19. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity- and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  20. Death valley regional ground-water flow model calibration using optimal parameter estimation methods and geoscientific information systems

    Science.gov (United States)

    D'Agnese, F. A.; Faunt, C.C.; Hill, M.C.; Turner, A.K.

    1999-01-01

    A regional-scale, steady-state, saturated-zone ground-water flow model was constructed to evaluate potential regional ground-water flow in the vicinity of Yucca Mountain, Nevada. The model was limited to three layers in an effort to evaluate the characteristics governing large-scale subsurface flow. Geoscientific information systems (GSIS) were used to characterize the complex surface and subsurface hydrogeologic conditions of the area, and this characterization was used to construct likely conceptual models of the flow system. Subsurface properties in this system vary dramatically, producing high contrasts and abrupt contacts. This characteristic, combined with the large scale of the model, make zonation the logical choice for representing the hydraulic-conductivity distribution. Different conceptual models were evaluated using sensitivity analysis and were tested by using nonlinear regression to determine parameter values that are optimal, in that they provide the best match between the measured and simulated heads and flows. The different conceptual models were judged based both on the fit achieved to measured heads and spring flows, and the plausibility of the optimal parameter values. One of the conceptual models considered appears to represent the system most realistically. Any apparent model error is probably caused by the coarse vertical and horizontal discretization.