WorldWideScience

Sample records for model misspecification justifying

  1. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  2. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  3. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model. … An application to exchange rate returns is included.

  4. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  5. The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions.

    Science.gov (United States)

    MacKenzie, Scott B; Podsakoff, Philip M; Jarvis, Cheryl Burke

    2005-07-01

    The purpose of this study was to review the distinction between formative- and reflective-indicator measurement models, articulate a set of criteria for deciding whether measures are formative or reflective, illustrate some commonly researched constructs that have formative indicators, empirically test the effects of measurement model misspecification using a Monte Carlo simulation, and recommend new scale development procedures for latent constructs with formative indicators. Results of the Monte Carlo simulation indicated that measurement model misspecification can inflate unstandardized structural parameter estimates by as much as 400% or deflate them by as much as 80% and lead to Type I or Type II errors of inference, depending on whether the exogenous or the endogenous latent construct is misspecified. Implications of this research are discussed. Copyright 2005 APA, all rights reserved.

  6. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    Science.gov (United States)

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence-that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
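
    As a rough, self-contained illustration of the covariance issue (not the authors' longitudinal estimator or data), the sketch below fits a two-component Gaussian mixture to correlated repeated measures twice, once under a conditional-independence (diagonal) covariance and once with an unrestricted covariance, and compares the recovered component means; the AR(1)-style correlation, component means and sample sizes are invented for the example.

        # Toy illustration: two-component mixture on correlated repeated measures,
        # fitted with a misspecified (diagonal) and an unrestricted covariance.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        T = 4                                                         # measurements per subject
        cov = 0.6 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))  # AR(1)-like correlation
        g1 = rng.multivariate_normal(np.zeros(T), cov, size=300)      # component 1
        g2 = rng.multivariate_normal(np.full(T, 2.0), cov, size=300)  # component 2
        X = np.vstack([g1, g2])

        for cov_type in ("diag", "full"):                             # misspecified vs. correct
            gm = GaussianMixture(n_components=2, covariance_type=cov_type,
                                 n_init=5, random_state=0).fit(X)
            print(cov_type, np.sort(gm.means_.mean(axis=1)))          # recovered component means

    With well-separated components both fits recover the means closely, echoing the low-bias finding above; what the diagonal fit gets wrong is the covariance structure itself, and hence any standard errors built on it.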

  7. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  8. A Lagrange multiplier-type test for idiosyncratic unit roots in the exact factor model under misspecification

    NARCIS (Netherlands)

    Zhou, X.; Solberger, M.

    2013-01-01

    We consider an exact factor model and derive a Lagrange multiplier-type test for unit roots in the idiosyncratic components. The asymptotic distribution of the statistic is derived under the misspecification that the differenced factors are white noise. We prove that the asymptotic distribution is

  9. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    Science.gov (United States)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models, respectively. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present a published data set to illustrate the study.
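
    The core comparison can be mimicked in a few lines; the simulation below (with arbitrary lognormal parameters and sample size, not the paper's settings) draws lognormal samples, estimates the mean under the true lognormal model (MLE) and under a wrongly assumed Weibull model (QMLE), and compares the resulting biases.

        # Simulate the effect of fitting a Weibull model to lognormal data on the
        # estimated mean (arbitrary parameter values, for illustration only).
        import numpy as np
        from scipy import stats
        from scipy.special import gamma as gamma_fn

        rng = np.random.default_rng(1)
        mu, sigma, n, reps = 0.0, 0.8, 50, 500
        true_mean = np.exp(mu + sigma**2 / 2)

        mle_est, qmle_est = [], []
        for _ in range(reps):
            x = rng.lognormal(mu, sigma, n)
            m, s = np.mean(np.log(x)), np.std(np.log(x))        # lognormal MLEs
            mle_est.append(np.exp(m + s**2 / 2))                # MLE of the mean
            c, loc, scale = stats.weibull_min.fit(x, floc=0)    # wrongly assumed Weibull
            qmle_est.append(scale * gamma_fn(1 + 1 / c))        # QMLE of the mean

        print("bias of MLE :", np.mean(mle_est) - true_mean)
        print("bias of QMLE:", np.mean(qmle_est) - true_mean)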

  10. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    Science.gov (United States)

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of the cure rate. Finally, we analyze a well-known data set on melanoma with the model and the inferential method developed here.

  11. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    Science.gov (United States)

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
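
    As a hedged sketch of one of the compared approaches, the code below implements maximum likelihood for left-censored data under an assumed lognormal distribution; the simulated data, detection limit and censoring fraction are invented for illustration and are not the Montreal soil data.

        # Maximum likelihood for left-censored data under an assumed lognormal model
        # (simulated data; the detection limit induces roughly 30% censoring).
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(2)
        x = rng.lognormal(mean=0.5, sigma=1.0, size=80)
        dl = np.quantile(x, 0.3)                        # detection limit
        observed = x >= dl                              # True where a value was actually measured
        y = np.where(observed, x, dl)                   # censored values recorded at the DL

        def neg_loglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            ll_obs = stats.norm.logpdf(np.log(y[observed]), mu, sigma) - np.log(y[observed])
            ll_cen = (~observed).sum() * stats.norm.logcdf((np.log(dl) - mu) / sigma)
            return -(ll_obs.sum() + ll_cen)

        fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
        mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print("estimated mean:", np.exp(mu_hat + sigma_hat**2 / 2))
        print("true mean     :", np.exp(0.5 + 1.0**2 / 2))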

  12. A Bayesian approach to identifying and compensating for model misspecification in population models.

    Science.gov (United States)

    Thorson, James T; Ono, Kotaro; Munch, Stephan B

    2014-02-01

    State-space estimation methods are increasingly used in ecology to estimate productivity and abundance of natural populations while accounting for variability in both population dynamics and measurement processes. However, functional forms for population dynamics and density dependence often will not match the true biological process, and this may degrade the performance of state-space methods. We therefore developed a Bayesian semiparametric state-space model, which uses a Gaussian process (GP) to approximate the population growth function. This offers two benefits for population modeling. First, it allows data to update a specified "prior" on the population growth function, while reverting to this prior when data are uninformative. Second, it allows variability in population dynamics to be decomposed into random errors around the population growth function ("process error") and errors due to the mismatch between the specified prior and estimated growth function ("model error"). We used simulation modeling to illustrate the utility of GP methods in state-space population dynamics models. Results confirmed that the GP model performs similarly to a conventional state-space model when either (1) the prior matches the true process or (2) data are relatively uninformative. However, GP methods improve estimates of the population growth function when the function is misspecified. Results also demonstrated that the estimated magnitude of "model error" can be used to distinguish cases of model misspecification. We conclude with a discussion of the prospects for GP methods in other state-space models, including age and length-structured, meta-analytic, and individual-movement models.
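
    A stripped-down sketch of the central idea (not the authors' Bayesian state-space implementation): use Gaussian process regression to approximate an unknown population growth function from simulated abundances, so that the data, rather than a fixed parametric form, shape the estimated function. The Ricker-type simulator and kernel settings below are assumptions made only for the example.

        # Approximate an unknown population growth function with a Gaussian process
        # (Ricker-type simulator and kernel settings chosen only for illustration).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(3)
        r, K, T = 0.8, 1000.0, 60
        N = np.empty(T)
        N[0] = 100.0
        for t in range(T - 1):                          # dynamics plus process error
            N[t + 1] = N[t] * np.exp(r * (1 - N[t] / K) + rng.normal(0, 0.2))

        X = N[:-1].reshape(-1, 1)                       # abundance
        y = np.log(N[1:] / N[:-1])                      # realized log growth rate
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=300.0) + WhiteKernel(0.04),
                                      normalize_y=True).fit(X, y)

        grid = np.linspace(N.min(), N.max(), 5).reshape(-1, 1)
        print(np.c_[grid, gp.predict(grid)])            # estimated growth function on a grid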

  13. Why item parcels are (almost) never appropriate: two wrongs do not make a right--camouflaging misspecification with item parcels in CFA models.

    Science.gov (United States)

    Marsh, Herbert W; Lüdtke, Oliver; Nagengast, Benjamin; Morin, Alexandre J S; Von Davier, Matthias

    2013-09-01

    The present investigation has a dual focus: to evaluate problematic practice in the use of item parcels and to suggest exploratory structural equation models (ESEMs) as a viable alternative to the traditional independent clusters confirmatory factor analysis (ICM-CFA) model (with no cross-loadings, subsidiary factors, or correlated uniquenesses). Typically, it is ill-advised to (a) use item parcels when ICM-CFA models do not fit the data, and (b) retain ICM-CFA models when items cross-load on multiple factors. However, the combined use of (a) and (b) is widespread and often provides such misleadingly good fit indexes that applied researchers might believe that misspecification problems are resolved--that 2 wrongs really do make a right. Taking a pragmatist perspective, in 4 studies we demonstrate with responses to the Rosenberg Self-Esteem Inventory (Rosenberg, 1965), Big Five personality factors, and simulated data that even small cross-loadings seriously distort relations among ICM-CFA constructs or even decisions on the number of factors; although obvious in item-level analyses, this is camouflaged by the use of parcels. ESEMs provide a viable alternative to ICM-CFAs and a test for the appropriateness of parcels. The use of parcels with an ICM-CFA model is most justifiable when the fit of both ICM-CFA and ESEM models is acceptable and equally good, and when substantively important interpretations are similar. However, if the ESEM model fits the data better than the ICM-CFA model, then the use of parcels with an ICM-CFA model typically is ill-advised--particularly in studies that are also interested in scale development, latent means, and measurement invariance.

  14. Interpretational confounding is due to misspecification, not to type of indicator: comment on Howell, Breivik, and Wilcox (2007).

    Science.gov (United States)

    Bollen, Kenneth A

    2007-06-01

    R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal (formative) indicators rests on several claims: (a) A latent variable exists apart from the model when there are effect (reflective) indicators but not when there are causal (formative) indicators, (b) causal (formative) indicators need not have the same consequences, (c) causal (formative) indicators are inherently subject to interpretational confounding, and (d) a researcher cannot detect interpretational confounding when using causal (formative) indicators. This article shows that each claim is false. Rather, interpretational confounding is more a problem of structural misspecification of a model combined with an underidentified model that leaves these misspecifications undetected. Interpretational confounding does not occur if the model is correctly specified whether a researcher has causal (formative) or effect (reflective) indicators. It is the validity of a model not the type of indicator that determines the potential for interpretational confounding. Copyright 2007 APA, all rights reserved.

  15. Structural Break Tests Robust to Regression Misspecification

    Directory of Open Access Journals (Sweden)

    Alaa Abi Morshed

    2018-05-01

    Structural break tests for regression models are sensitive to model misspecification. We show—analytically and through simulations—that the sup Wald test for breaks in the conditional mean and variance of a time series process exhibits severe size distortions when the conditional mean dynamics are misspecified. We also show that the sup Wald test for breaks in the unconditional mean and variance does not have the same size distortions, yet benefits from similar power to its conditional counterpart in correctly specified models. Hence, we propose using it as an alternative and complementary test for breaks. We apply the unconditional and conditional mean and variance tests to three US series: unemployment, industrial production growth and interest rates. Both the unconditional and the conditional mean tests detect a break in the mean of interest rates. However, for the other two series, the unconditional mean test does not detect a break, while the conditional mean tests based on dynamic regression models occasionally detect a break, with the implied break-point estimator varying across different dynamic specifications. For all series, the unconditional variance does not detect a break while most tests for the conditional variance do detect a break which also varies across specifications.

  16. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    Science.gov (United States)

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  17. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  18. Justifying the QCD parton model

    CERN Document Server

    Veneziano, G

    2018-01-01

    I will focus my attention on the two papers I wrote with Roberto and Daniele Amati on justifying the QCD-improved parton model, a very basic tool used every day to estimate a variety of processes involving strong (as well as possibly other) interactions. While doing so, I will also touch on other occasions I had to work —or just interact— with Roberto during more than 30 years of our respective careers.

  19. Animal Models in Forensic Science Research: Justified Use or Ethical Exploitation?

    Science.gov (United States)

    Mole, Calvin Gerald; Heyns, Marise

    2018-05-01

    A moral dilemma exists in biomedical research relating to the use of animal or human tissue when conducting scientific research. In human ethics, researchers need to justify why the use of humans is necessary should suitable models exist. Conversely, in animal ethics, a researcher must justify why research cannot be carried out on suitable alternatives. In the case of medical procedures or therapeutics testing, the use of animal models is often justified. However, in forensic research, the justification may be less evident, particularly when research involves the infliction of trauma on living animals. To determine how the forensic science community is dealing with this dilemma, a review of literature within major forensic science journals was conducted. The frequency and trends of the use of animals in forensic science research were investigated for the period 1 January 2012-31 December 2016. The review revealed 204 original articles utilizing 5050 animals in various forms as analogues for human tissue. The most common specimens utilized were various species of rats (35.3%), pigs (29.3%), mice (17.7%), and rabbits (8.2%), although different specimens were favored in different study themes. The majority of studies (58%) were conducted on post-mortem specimens. It is, however, evident that more needs to be done to uphold the basic ethical principles of reduction, refinement and replacement in the use of animals for research purposes.

  20. Univariate and Multivariate Specification Search Indices in Covariance Structure Modeling.

    Science.gov (United States)

    Hutchinson, Susan R.

    1993-01-01

    Simulated population data were used to compare relative performances of the modification index and C. Chou and P. M. Bentler's Lagrange multiplier test (a multivariate generalization of a modification index) for four levels of model misspecification. Both indices failed to recover the true model except at the lowest level of misspecification. (SLD)

  1. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  2. Improved hidden Markov model for nosocomial infections.

    Science.gov (United States)

    Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun

    2014-12-01

    We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  3. Multivariate GARCH models

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...

  4. Justify your alpha

    NARCIS (Netherlands)

    Lakens, Daniel; Adolfi, Federico G.; Albers, Casper J.; Anvari, Farid; Apps, Matthew A.J.; Argamon, Shlomo E.; Baguley, Thom; Becker, Raymond B.; Benning, Stephen D.; Bradford, Daniel E.; Buchanan, Erin M.; Caldwell, Aaron R.; Van Calster, Ben; Carlsson, Rickard; Chen, Sau Chin; Chung, Bryan; Colling, Lincoln J.; Collins, Gary S.; Crook, Zander; Cross, Emily S.; Daniels, Sameera; Danielsson, Henrik; Debruine, Lisa; Dunleavy, Daniel J.; Earp, Brian D.; Feist, Michele I.; Ferrell, Jason D.; Field, James G.; Fox, Nicholas W.; Friesen, Amanda; Gomes, Caio; Gonzalez-Marquez, Monica; Grange, James A.; Grieve, Andrew P.; Guggenberger, Robert; Grist, James; Van Harmelen, Anne Laura; Hasselman, Fred; Hochard, Kevin D.; Hoffarth, Mark R.; Holmes, Nicholas P.; Ingre, Michael; Isager, Peder M.; Isotalus, Hanna K.; Johansson, Christer; Juszczyk, Konrad; Kenny, David A.; Khalil, Ahmed A.; Konat, Barbara; Lao, Junpeng; Larsen, Erik Gahner; Lodder, Gerine M.A.; Lukavský, Jiří; Madan, Christopher R.; Manheim, David; Martin, Stephen R.; Martin, Andrea E.; Mayo, Deborah G.; McCarthy, Randy J.; McConway, Kevin; McFarland, Colin; Nio, Amanda Q.X.; Nilsonne, Gustav; De Oliveira, Cilene Lino; De Xivry, Jean Jacques Orban; Parsons, Sam; Pfuhl, Gerit; Quinn, Kimberly A.; Sakon, John J.; Saribay, S. Adil; Schneider, Iris K.; Selvaraju, Manojkumar; Sjoerds, Zsuzsika; Smith, Samuel G.; Smits, Tim; Spies, Jeffrey R.; Sreekumar, Vishnu; Steltenpohl, Crystal N.; Stenhouse, Neil; Świątkowski, Wojciech; Vadillo, Miguel A.; Van Assen, Marcel A.L.M.; Williams, Matt N.; Williams, Samantha E.; Williams, Donald R.; Yarkoni, Tal; Ziano, Ignazio; Zwaan, Rolf A.

    2018-01-01

    In response to recommendations to redefine statistical significance to P ≤ 0.005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.

  5. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
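
    The heteroscedasticity logic can be illustrated with a toy example; this is not the authors' DDA software, and only one of the many possible homoscedasticity tests is shown. The idea is to fit both candidate directions and test the residuals of each model for heteroscedasticity.

        # Fit both candidate directions and test the residuals of each for
        # heteroscedasticity with a Breusch-Pagan test (true direction: x -> y).
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import het_breuschpagan

        rng = np.random.default_rng(4)
        x = rng.exponential(1.0, 500)                   # non-normal "true" predictor
        y = 0.7 * x + rng.normal(0, 1, 500)

        def bp_pvalue(response, predictor):
            X = sm.add_constant(predictor)
            resid = sm.OLS(response, X).fit().resid
            return het_breuschpagan(resid, X)[1]        # LM-test p-value

        print("x -> y residuals, BP p-value:", bp_pvalue(y, x))  # typically unremarkable
        print("y -> x residuals, BP p-value:", bp_pvalue(x, y))  # typically flags heteroscedasticity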

  6. Supplier-induced demand: re-examining identification and misspecification in cross-sectional analysis.

    Science.gov (United States)

    Peacock, Stuart J; Richardson, Jeffrey R J

    2007-09-01

    This paper re-examines criticisms of cross-sectional methods used to test for supplier-induced demand (SID) and re-evaluates the empirical evidence using data from Australian medical services. Cross-sectional studies of SID have been criticised on two grounds. First, and most important, the inclusion of the doctor supply in the demand equation leads to an identification problem. This criticism is shown to be invalid, as the doctor supply variable is stochastic and depends upon a variety of other variables including the desirability of the location. Second, cross-sectional studies of SID fail diagnostic tests and produce artefactual findings due to model misspecification. Contrary to this, the re-evaluation of cross-sectional Australian data indicate that demand equations that do not include the doctor supply are misspecified. Empirical evidence from the re-evaluation of Australian medical services data supports the notion of SID. Demand and supply equations are well specified and have very good explanatory power. The demand equation is identified and the desirability of a location is an important predictor of the doctor supply. Results show an average price elasticity of demand of 0.22 and an average elasticity of demand with respect to the doctor supply of 0.46, with the impact of SID becoming stronger as the doctor supply rises. The conclusion we draw from this paper is that two of the main criticisms of the empirical evidence supporting the SID hypothesis have been inappropriately levelled at the methods used. More importantly, SID provides a satisfactory, and robust, explanation of the empirical data on the demand for medical services in Australia.

  7. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    Science.gov (United States)

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
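
    The structural idea can be sketched as follows; the snippet only solves dx/dt = A(t)x + B(t) forward in time with a spline-represented B(t), whereas in the paper B(t) is estimated with penalized splines inside a nonlinear mixed-effects fit, which is not reproduced here. A(t), the knots and the spline values are placeholders.

        # Solve dx/dt = A(t)x + B(t) with a spline-represented B(t); A(t) is a
        # placeholder constant and the spline values are arbitrary.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.interpolate import CubicSpline

        knots = np.linspace(0, 10, 6)
        values = np.array([0.0, 0.5, 1.2, 0.8, 0.3, 0.1])   # hypothetical B(t) at the knots
        B = CubicSpline(knots, values)

        def rhs(t, x, a=-0.4):                              # A(t) taken constant for simplicity
            return a * x + B(t)

        sol = solve_ivp(rhs, (0, 10), y0=[2.0], t_eval=np.linspace(0, 10, 11))
        print(np.c_[sol.t, sol.y[0]])                        # state trajectory over time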

  8. Modelling and forecasting WIG20 daily returns

    DEFF Research Database (Denmark)

    Amado, Cristina; Silvennoinen, Annestiina; Terasvirta, Timo

    of the model is that the deterministic component is specified before estimating the multiplicative conditional variance component. The resulting model is subjected to misspecification tests and its forecasting performance is compared with that of commonly applied models of conditional heteroskedasticity....

  9. Modelling severe Staphylococcus aureus sepsis in conscious pigs: are implications for animal welfare justified?

    DEFF Research Database (Denmark)

    Olsen, Helle G; Kjelgaard-Hansen, Mads; Tveden-Nyborg, Pernille

    2016-01-01

    A porcine model of haematogenous Staphylococcus aureus sepsis has previously been established in our research group. In these studies, pigs developed severe sepsis including liver dysfunction during a 48 h study period. As pigs were awake during the study, animal welfare was challenged by the sev… Prior to euthanasia, a galactose elimination capacity test was performed to assess liver function. Pigs were euthanised 48 h post inoculation for necropsy and histopathological evaluation. While infusion times of 6.66 min and higher did not induce liver dysfunction (n = 3), the infusion time of 3… according to humane endpoints. A usable balance between scientific purpose and animal welfare could not be achieved, and we therefore find it hard to justify further use of this conscious porcine sepsis model. In order to make a model of translational relevance for human sepsis, we suggest that future model…

  10. The Self-Justifying Desire for Happiness

    DEFF Research Database (Denmark)

    Rodogno, Raffaele

    2004-01-01

    In Happiness, Tabensky equates the notion of happiness to Aristotelian eudaimonia. I shall claim that doing so amounts to equating two concepts that moderns cannot conceptually equate, namely, the good for a person and the good person or good life. In §2 I examine the way in which Tabensky deals with this issue and claim that his idea of happiness is as problematic for us moderns as is any translation of the notion of eudaimonia in terms of happiness. Naturally, if happiness understood as eudaimonia is ambiguous, so will be the notion of a desire for happiness, which we find at the core of Tabensky's whole project. In §3 I shall be concerned with another aspect of the desire for happiness; namely, its alleged self-justifying nature. I will attempt to undermine the idea that this desire is self-justifying by undermining the criterion on which Tabensky takes self-justifiability to rest, i.e. its…

  11. Are stock prices too volatile to be justified by the dividend discount model?

    Science.gov (United States)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

    This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.

  12. Analysis of hypoglycemic events using negative binomial models.

    Science.gov (United States)

    Luo, Junxiang; Qu, Yongming

    2013-01-01

    Negative binomial regression is a standard model to analyze hypoglycemic events in diabetes clinical trials. Adjusting for baseline covariates could potentially increase the estimation efficiency of negative binomial regression. However, adjusting for covariates raises concerns about model misspecification, to which negative binomial regression is not robust because it requires strong model assumptions. Some literature suggests correcting the standard error of the maximum likelihood estimator by introducing overdispersion, which can be estimated by the Deviance or Pearson chi-square. We propose conducting the negative binomial regression using sandwich estimation to calculate the covariance matrix of the parameter estimates, together with a Pearson overdispersion correction (denoted by NBSP). In this research, we compared several commonly used negative binomial model options with our proposed NBSP. Simulations and real data analyses showed that NBSP is the most robust to model misspecification, and that estimation efficiency is improved by adjusting for baseline hypoglycemia. Copyright © 2013 John Wiley & Sons, Ltd.
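
    A rough statsmodels-based analogue of the ingredients (not the authors' exact NBSP code) is sketched below: a negative binomial regression fitted with a sandwich (HC0) covariance, plus the Pearson chi-square overdispersion statistic used for correction. The simulated trial-like data and the fixed dispersion parameter are assumptions of the example.

        # Negative binomial regression with sandwich (HC0) standard errors and a
        # Pearson overdispersion statistic, on simulated count data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n_obs = 400
        treat = rng.integers(0, 2, n_obs)                   # treatment indicator
        base = rng.poisson(2, n_obs)                        # baseline event count
        mu = np.exp(0.3 + 0.4 * base - 0.5 * treat)
        y = rng.negative_binomial(2, 2 / (2 + mu))          # overdispersed outcome counts

        X = sm.add_constant(np.column_stack([base, treat]))
        fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit(cov_type="HC0")
        print(fit.bse)                                       # sandwich standard errors
        print("Pearson overdispersion:", fit.pearson_chi2 / fit.df_resid)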

  13. Journalism as Justified True Belief

    Directory of Open Access Journals (Sweden)

    Sílvia Lisboa

    2015-12-01

    If it is important to think of journalism as a form of knowledge, then how does it become knowledge? How does this process work? In order to answer this question, this article proposes a new understanding of journalism as a subject; presenting it as a justified true belief. We think of journalism being based on pillars of truth and justification, conditions necessary in order for Epistemology to grant it the status of knowledge. We address the concept of truth and show how journalistic reports are justified to the public as well as consider the central role of credibility in this process. We add to the epistemic conception by using concepts of discourse that help to understand how journalism provides evidence through its intentions, its authority and its ability. This evidence acts like a guide for the reader towards forming opinions on journalistic reports and recognizing journalism as a form of knowledge.

  14. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  15. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    Science.gov (United States)

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  16. Why Do Women Justify Violence Against Wives More Often Than Do Men in Vietnam?

    Science.gov (United States)

    Krause, Kathleen H; Gordon-Roberts, Rachel; VanderEnde, Kristin; Schuler, Sidney Ruth; Yount, Kathryn M

    2015-05-06

    Intimate partner violence (IPV) harms the health of women and their children. In Vietnam, 31% of women report lifetime exposure to physical IPV, and surprisingly, women justify physical IPV against wives more often than do men. We compare men's and women's rates of finding good reason for wife hitting and assess whether differences in childhood experiences and resources and constraints in adulthood account for observed differences. Probability samples of married men (n = 522) and women (n = 533) were surveyed in Vietnam. Ordered logit models assessed the proportional odds for women versus men of finding more "good reasons" to hit a wife (never, 1-3 situations, 4-6 situations). In all situations, women found good reason to hit a wife more often than did men. The unadjusted odds for women versus men of reporting more good reasons to hit a wife were 6.55 (95% confidence interval [CI] = [4.82, 8.91]). This gap disappeared in adjusted models that included significant interactions of gender with age, number of children ever born, and experience of physical IPV as an adult. Having children was associated with justifying wife hitting among women but not men. Exposure to IPV in adulthood was associated with justifying wife hitting among men, but was negatively associated with justification of IPV among women. Further study of the gendered effects of resources and constraints in adulthood on attitudes about IPV against women will clarify women's more frequent reporting than men's that IPV against women is justified. © The Author(s) 2015.

  17. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it

    NARCIS (Netherlands)

    P.D. Grünwald (Peter); T. van Ommen (Thijs)

    2017-01-01

    textabstractWe empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data

  18. Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It

    NARCIS (Netherlands)

    Grünwald, P.; van Ommen, T.

    2017-01-01

    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are

  19. Beyond Conflict and Spoilt Identities: How Rwandan Leaders Justify a Single Recategorization Model for Post-Conflict Reconciliation

    Directory of Open Access Journals (Sweden)

    Sigrun Marie Moss

    2014-08-01

    Since 1994, the Rwandan government has attempted to remove the division of the population into the ‘ethnic’ identities Hutu, Tutsi and Twa and instead make the shared Rwandan identity salient. This paper explores how leaders justify the single recategorization model, based on nine in-depth semi-structured interviews with Rwandan national leaders (politicians and bureaucrats tasked with leading unity implementation) conducted in Rwanda over three months in 2011/2012. Thematic analysis revealed this was done through a meta-narrative focusing on the shared Rwandan identity. Three frames were found in use to “sell” this narrative, in which ethnic identities are presented as (a) an alien construction; (b) something that was used to the disadvantage of the people; and (c) non-essential social constructs. The material demonstrates the identity entrepreneurship behind the single recategorization approach: the definition of the category boundaries, the category content, and the strategies for controlling and overcoming alternative narratives. Rwandan identity is presented as essential and legitimate, and as offering a potential way for people to escape spoilt subordinate identities. The interviewed leaders insist Rwandans are all one, and that single recategorization is the right path for Rwanda, but this approach has been criticised for increasing rather than decreasing intergroup conflict due to social identity threat. The Rwandan case offers a rare opportunity to explore leaders’ own narratives and framing of these ‘ethnic’ identities to justify the single recategorization approach.

  20. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
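
    A simplified numerical sketch in the spirit of these checks (not the authors' exact resampling scheme) orders residuals by a covariate, takes their cumulative sum, and compares its maximum against realizations obtained by perturbing the residuals with standard normal multipliers; the data-generating model below is invented so that the fitted linear model is deliberately misspecified.

        # Cumulative-sum-of-residuals check: compare the observed cumulative residual
        # process against perturbed realizations with standard normal multipliers.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 300
        x = rng.uniform(0, 3, n)
        y = 1.0 + 0.5 * x**2 + rng.normal(0, 1, n)           # truth is quadratic in x

        X = np.column_stack([np.ones(n), x])                  # deliberately misspecified linear fit
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta

        order = np.argsort(x)                                 # cumulate residuals over the covariate
        observed = np.abs(np.cumsum(resid[order])).max()
        simulated = [np.abs(np.cumsum(resid[order] * rng.normal(size=n))).max()
                     for _ in range(1000)]
        print("approximate p-value:", np.mean(np.array(simulated) >= observed))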

  1. System-justifying ideologies and academic outcomes among first-year Latino college students.

    Science.gov (United States)

    O'Brien, Laurie T; Mars, Dustin E; Eccleston, Collette

    2011-10-01

    The present study examines the relationship between system-justifying ideologies and academic outcomes among 78 first-year Latino college students (21 men, 57 women, mean age = 18.1 years) attending a moderately selective West Coast university. Endorsement of system-justifying ideologies was negatively associated with grade point average (GPA); however it was positively associated with feelings of belonging at the university. In addition, system-justifying ideologies were negatively associated with perceptions of personal discrimination. In contrast, ethnic identity centrality was unrelated to GPA, feelings of belonging, and perceptions of personal discrimination once the relationship between system-justifying ideologies and these outcomes was statistically taken into account. The results of the present study suggest that endorsement of system-justifying ideologies may be a double-edged sword for Latino college students, involving trade-offs between academic success and feelings of belonging.

  2. Parity, Incomparability and Rationally Justified Choice

    NARCIS (Netherlands)

    Boot, Martijn

    2009-01-01

    This article discusses the possibility of a rationally justified choice between two options neither of which is better than the other while they are not equally good either (‘3NT’). Joseph Raz regards such options as incomparable and argues that reason cannot guide the choice between them. Ruth

  3. About 'restriction', 'justified' and 'necessary'

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2016-01-01

    The article is an academic fairy tale about why and how all national corporate tax protection legislation should undergo a 3-part test to ensure its consistency with EU law. Each Member State should introduce a compulsory 3-step test for each new (corporate) tax provision. The test is simple: (1) Does the tax provision constitute a restriction in the sense of EU law? (2) If the answer is yes: Is the restriction justified? (3) If the answer is yes: Is the restriction necessary?

  4. Bayesian inference for partially identified models exploring the limits of limited data

    CERN Document Server

    Gustafson, Paul

    2015-01-01

    Contents include: Introduction; Identification; What Is against Us?; What Is for Us?; Some Simple Examples of Partially Identified Models; The Road Ahead; The Structure of Inference in Partially Identified Models; Bayesian Inference; The Structure of Posterior Distributions in PIMs; Computational Strategies; Strength of Bayesian Updating, Revisited; Posterior Moments; Credible Intervals; Evaluating the Worth of Inference; Partial Identification versus Model Misspecification; The Siren Call of Identification; Comp…

  5. Journalism as Justified True Belief

    OpenAIRE

    Lisboa, Sílvia; Benetti, Marcia

    2015-01-01

    If it is important to think of journalism as a form of knowledge, then how does it become knowledge? How does this process work? In order to answer this question, this article proposes a new understanding of journalism as a subject; presenting it as a justified true belief. We think of journalism being based on pillars of truth and justification, conditions necessary in order for Epistemology to grant it the status of knowledge. We address the concept of truth and show how journalistic report...

  6. Electrical stimulation in dysphagia treatment: a justified controversy?

    NARCIS (Netherlands)

    Bogaardt, H. C. A.

    2008-01-01

    Electrical stimulation in dysphagia treatment: a justified controversy? Neuromuscular electrostimulation (NMES) is a method for stimulating muscles with short electrical pulses. Neuromuscular electrostimulation is frequently used in physiotherapy to strengthen healthy muscles (as in sports…

  7. Justifying an information system.

    Science.gov (United States)

    Neal, T

    1993-03-01

    A four-step model for the hospital pharmacist to use in justifying a computerized information system is described. In the first step, costs are identified and analyzed. Both the costs and the advantages of the existing system are evaluated. A request for information and a request for proposal are prepared and sent to vendors, who return estimates of hardware, software, and support costs. Costs can then be merged and analyzed as one-time costs, recurring annual costs, and total costs annualized over five years. In step 2, benefits are identified and analyzed. Tangible economic benefits are those that directly reduce or avoid costs or directly enhance revenues and can be measured in dollars. Intangible economic benefits are realized through a reduction in overhead and reallocation of labor and are less easily measured in dollars. Noneconomic benefits, some involving quality-of-care issues, can also be used in the justification. Step 3 consists of a formal risk assessment in which the project is broken into categories for which specific questions are answered by assigning a risk factor. In step 4, both costs and benefits are subjected to a financial analysis, the object of which is to maximize the return on investment to the institution from the capital being requested. Calculations include return on investment based on the net present value of money, internal rate of return, payback period, and profitability index. A well-designed justification for an information system not only identifies the costs, risks, and benefits but also presents a plan of action for realizing the benefits.
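
    Step 4 can be made concrete with a small calculation; all cash flows and the discount rate below are hypothetical and simply stand in for the one-time and recurring costs and the annualized five-year benefits described above.

        # Hypothetical five-year cash-flow analysis: NPV, IRR, payback period and
        # profitability index (cashflows[0] is the initial outlay, a negative number).
        def npv(rate, cashflows):
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
            while hi - lo > tol:                             # bisection on NPV(rate) = 0
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
            return (lo + hi) / 2

        def payback_period(cashflows):
            total = 0.0
            for t, cf in enumerate(cashflows):
                total += cf
                if total >= 0:
                    return t
            return None                                      # never paid back

        flows = [-250_000, 70_000, 75_000, 80_000, 85_000, 90_000]
        rate = 0.08
        pv_inflows = npv(rate, flows) - flows[0]             # discounted benefits, years 1-5
        print("NPV:", round(npv(rate, flows)))
        print("IRR:", round(irr(flows), 4))
        print("Payback period (years):", payback_period(flows))
        print("Profitability index:", round(pv_inflows / -flows[0], 3))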

  8. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
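
    The basic SIMEX recipe (shown here in a non-spatial toy form, not the paper's spatial procedure) adds extra measurement error at increasing multiples lambda of the assumed error variance, tracks how the estimated slope attenuates, and extrapolates the trend back to lambda = -1; the exposure model and error variance below are invented for the sketch.

        # Toy (non-spatial) SIMEX on a single error-prone exposure.
        import numpy as np

        rng = np.random.default_rng(7)
        n, sigma_u = 500, 0.6
        x = rng.normal(0, 1, n)                              # true exposure
        w = x + rng.normal(0, sigma_u, n)                    # error-prone measured exposure
        y = 2.0 * x + rng.normal(0, 1, n)                    # true slope is 2

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        slopes = []
        for lam in lambdas:
            reps = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
                    for _ in range(50)]                      # average over added-noise replicates
            slopes.append(np.mean(reps))

        coef = np.polyfit(lambdas, slopes, 2)                # quadratic extrapolant in lambda
        print("naive slope:", slopes[0])
        print("SIMEX slope:", np.polyval(coef, -1.0))        # extrapolation to lambda = -1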

  9. Modeling Attitude Variance in Small UAS’s for Acoustic Signature Simplification Using Experimental Design in a Hardware-in-the-Loop Simulation

    Science.gov (United States)

    2015-03-26

    …response. Additionally, choosing correlated levels for multiple factors results in multicollinearity, which can cause problems such as model misspecification or large variances and covariances for the regression coefficients. A good way to avoid multicollinearity is to use orthogonal, factorial…

  10. Justifying Innovative Language Programs in an Environment of ...

    African Journals Online (AJOL)

    pkurgat

    Justifying Innovative Language Programs in an Environment of Change: The Case … Key words: project management, change management, educational management … the sustainability of the course considering that there were and continue to be problems … language teaching in general on a sound scientific base.

  11. Population pharmacokinetic/pharmacodynamic modelling of the hypothalamic-pituitary-gonadal axis

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel

    2005-01-01

    …model mis-specification feasible by quantifying the model uncertainty, which subsequently provides the basis for systematic population PK/PD model development. To support the model building process, the SDE approach was applied to clinical PK/PD data and used as a tool for tracking unexplained … was stimulated and inhibited by the plasma triptorelin and degarelix concentrations, respectively. Circulating LH stimulated the testosterone secretion, while the delayed testosterone feedback on the non-basal LH synthesis and release was modelled through a receptor compartment where testosterone stimulates…

  12. Drug companies' evidence to justify advertising.

    Science.gov (United States)

    Wade, V A; Mansfield, P R; McDonald, P J

    1989-11-25

    Ten international pharmaceutical companies were asked by letter to supply their best evidence in support of marketing claims for seventeen products. Fifteen replies were received. Seven replies cited a total of 67 references: 31 contained relevant original data and only 13 were controlled trials, all of which had serious methodological flaws. There were four reports of changes in advertising claims and one company ceased marketing nikethamide in the third world. Standards of evidence used to justify advertising claims are inadequate.

  13. Physicians and strikes: can a walkout over the malpractice crisis be ethically justified?

    Science.gov (United States)

    Fiester, Autumn

    2004-01-01

    Malpractice insurance rates have created a crisis in American medicine. Rates are rising and reimbursements are not keeping pace. In response, physicians in the states hardest hit by this crisis are feeling compelled to take political action, and the current action of choice seems to be physician strikes. While the malpractice insurance crisis is acknowledged to be severe, does it justify the extreme action of a physician walkout? Should physicians engage in this type of collective action, and what are the costs to patients and the profession when such action is taken? I will offer three related arguments against physician strikes that constitute a prima facie prohibition against such action: first, strikes are intended to cause harm to patients; second, strikes are an affront to the physician-patient relationship; and, third, strikes risk decreasing the public's respect for the medical profession. As with any prima facie obligation, there are justifying conditions that may override the moral prohibition, but I will argue that the current malpractice crisis does not rise to the level of such a justifying condition. While the malpractice crisis demands and justifies a political response on the part of the nation's physicians, strikes and slow-downs are not an ethically justified means to the legitimate end of controlling insurance costs.

  14. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  15. An individual-based probabilistic model for simulating fisheries population dynamics

    Directory of Open Access Journals (Sweden)

    Jie Cao

    2016-12-01

    Full Text Available The purpose of stock assessment is to support managers in making informed decisions about removals from fish populations. Errors in assessment models may have devastating impacts on population fitness and negative impacts on the economy of the resource users. Thus, accurate estimation of population size and growth rates is critical for success. Evaluating and testing the behavior and performance of stock assessment models, assessing the consequences of model mis-specification and the impact of management strategies requires an operating model that accurately describes the dynamics of the target species and can resolve spatial and seasonal changes. In addition, the most thorough evaluations of assessment models use an operating model that takes a different form than the assessment model. This paper presents an individual-based probabilistic model used to simulate the complex dynamics of populations and their associated fisheries. Various components of population dynamics are expressed as random Bernoulli trials in the model, and detailed life and fishery histories of each individual are tracked over their life span. The simulation model is designed to be flexible so it can be used for different species and fisheries. It can simulate mixing among multiple stocks and link stock-recruit relationships to environmental factors. Furthermore, the model allows for flexibility in sub-models (e.g., growth and recruitment) and model assumptions (e.g., age- or size-dependent selectivity). This model enables the user to conduct various simulation studies, including testing the performance of assessment models under different assumptions, assessing the impacts of model mis-specification, and evaluating management strategies.
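    To make the Bernoulli-trial formulation concrete, here is a minimal sketch (not the authors' operating model; the survival and capture probabilities are invented for illustration) in which each individual's annual capture and natural survival are drawn as independent Bernoulli trials and its history is tracked over time.

      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative individual-based simulation: each fish is followed year by year, and
      # its capture and natural survival are random Bernoulli trials, so the model
      # accumulates a detailed life and fishery history for every individual.
      n_fish, n_years = 1_000, 10
      p_capture, p_survive = 0.15, 0.80          # hypothetical annual probabilities

      alive = np.ones(n_fish, dtype=bool)
      caught_year = np.full(n_fish, -1)          # -1 means never caught

      for year in range(n_years):
          captured = alive & (rng.random(n_fish) < p_capture)
          caught_year[captured] = year
          alive &= ~captured                     # caught fish leave the population
          alive &= rng.random(n_fish) < p_survive

      print("alive after 10 years:", int(alive.sum()),
            "ever caught:", int((caught_year >= 0).sum()))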

  16. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  17. Education and gender bias in the sex ratio at birth: evidence from India.

    Science.gov (United States)

    Echávarri, Rebeca A; Ezcurra, Roberto

    2010-02-01

    This article investigates the possible existence of a nonlinear link between female disadvantage in natality and education. To this end, we devise a theoretical model based on the key role of social interaction in explaining people's acquisition of preferences, which justifies the existence of a nonmonotonic relationship between female disadvantage in natality and education. The empirical validity of the proposed model is examined for the case of India, using district-level data. In this context, our econometric analysis pays particular attention to the role of spatial dependence to avoid any potential problems of misspecification. The results confirm that the relationship between the sex ratio at birth and education in India follows an inverted U-shape. This finding is robust to the inclusion of additional explanatory variables in the analysis, and to the choice of the spatial weight matrix used to quantify the spatial interdependence between the sample districts.

  18. Investigation into How Managers Justify Investments in IT Infrastructure

    Science.gov (United States)

    Ibe, Richmond Ikechukwu

    2012-01-01

    Organization leaders are dependent on information technology for corporate productivity; however, senior managers have expressed concerns about insufficient benefits from information technology investments. The problem researched was to understand how midsized businesses justify investments in information technology infrastructure. The purpose of…

  19. Economic modelling of energy services: Rectifying misspecified energy demand functions

    International Nuclear Information System (INIS)

    Hunt, Lester C.; Ryan, David L.

    2015-01-01

    Although it is well known that energy demand is derived, since energy is required not for its own sake but for the energy services it produces – such as heating, lighting, and motive power – energy demand models, both theoretical and empirical, often fail to take account of this feature. In this paper, we highlight the misspecification that results from ignoring this aspect, and its empirical implications – biased estimates of price elasticities and other measures – and provide a relatively simple and empirically practicable way to rectify it, which has a strong theoretical grounding. To do so, we develop an explicit model of consumer behaviour in which utility derives from consumption of energy services rather than from the energy sources that are used to produce them. As we discuss, this approach opens up the possibility of examining many aspects of energy demand in a theoretically sound way that have not previously been considered on a widespread basis, although some existing empirical work could be interpreted as being consistent with this type of specification. While this formulation yields demand equations for energy services rather than for energy or particular energy sources, these are shown to be readily converted, without added complexity, into the standard type of energy demand equation(s) that is (are) typically estimated. The additional terms that the resulting energy demand equations include, compared to those that are typically estimated, highlight the misspecification that is implicit when typical energy demand equations are estimated. A simple solution for dealing with an apparent drawback of this formulation for empirical purposes, namely that information is required on typically unobserved energy efficiency, indicates how energy efficiency can be captured in the model, such as by including exogenous trends and/or including its possible dependence on past energy prices. The approach is illustrated using an empirical example that involves
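    A stylized version of the argument (a sketch for illustration, not the authors' exact specification): if the demand for an energy service S_t is log-linear in its own price and income, and the service is produced from energy E_t with efficiency \varepsilon_t, then the implied energy demand equation contains an efficiency term that standard specifications omit.

      S_t = \varepsilon_t E_t, \qquad \ln S_t = \alpha + \beta \ln p^S_t + \gamma \ln Y_t, \qquad p^S_t = p^E_t / \varepsilon_t
      \Rightarrow \quad \ln E_t = \alpha + \beta \ln p^E_t + \gamma \ln Y_t - (1 + \beta) \ln \varepsilon_t

    Estimating the usual equation without the final term, or without a proxy for it such as a trend or a function of past prices, is exactly the omitted-variable misspecification described above, and it biases the estimated price elasticity \beta.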

  20. A comparison of non-homogeneous Markov regression models with application to Alzheimer’s disease progression

    Science.gov (United States)

    Hubbard, R. A.; Zhou, X.H.

    2011-01-01

    Markov regression models are useful tools for estimating the impact of risk factors on rates of transition between multiple disease states. Alzheimer’s disease (AD) is an example of a multi-state disease process in which great interest lies in identifying risk factors for transition. In this context, non-homogeneous models are required because transition rates change as subjects age. In this report we propose a non-homogeneous Markov regression model that allows for reversible and recurrent disease states, transitions among multiple states between observations, and unequally spaced observation times. We conducted simulation studies to demonstrate performance of estimators for covariate effects from this model and compare performance with alternative models when the underlying non-homogeneous process was correctly specified and under model misspecification. In simulation studies, we found that covariate effects were biased if non-homogeneity of the disease process was not accounted for. However, estimates from non-homogeneous models were robust to misspecification of the form of the non-homogeneity. We used our model to estimate risk factors for transition to mild cognitive impairment (MCI) and AD in a longitudinal study of subjects included in the National Alzheimer’s Coordinating Center’s Uniform Data Set. Using our model, we found that subjects with MCI affecting multiple cognitive domains were significantly less likely to revert to normal cognition. PMID:22419833
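    As a rough illustration of what "non-homogeneous" means here, the sketch below (hypothetical states and parameter names, not the authors' code) builds a transition intensity matrix whose entries depend on current age as well as on a covariate, so transition rates change as subjects grow older.

      import numpy as np

      # Sketch of a non-homogeneous Markov intensity matrix for states
      # (normal, MCI, AD, dead): each allowed transition has a baseline intensity that is
      # scaled log-linearly by current age and a covariate, making the process
      # non-homogeneous in time.
      STATES = ["normal", "MCI", "AD", "dead"]

      def intensity_matrix(age, x, baseline, beta_age, beta_x):
          """baseline, beta_age, beta_x: dicts keyed by (from_state, to_state)."""
          Q = np.zeros((len(STATES), len(STATES)))
          for (r, s), q0 in baseline.items():
              i, j = STATES.index(r), STATES.index(s)
              Q[i, j] = q0 * np.exp(beta_age[(r, s)] * age + beta_x[(r, s)] * x)
          np.fill_diagonal(Q, -Q.sum(axis=1))    # rows of an intensity matrix sum to zero
          return Q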

  1. Justifiability and Animal Research in Health: Can Democratisation Help Resolve Difficulties?

    Science.gov (United States)

    2018-01-01

    Simple Summary: Scientists justify animal use in medical research because the benefits to human health outweigh the costs or harms to animals. However, whether it is justifiable is controversial for many people. Even public interests are divided because an increasing proportion of people do not support animal research, while demand for healthcare that is based on animal research is also rising. The wider public should be given more influence in these difficult decisions. This could be through requiring explicit disclosure about the role of animals in drug labelling to inform the public out of respect for people with strong objections. It could also be done through periodic public consultations that use public opinion and expert advice to decide which diseases justify the use of animals in medical research. More public input will help ensure that animal research projects meet public expectations and may help to promote changes to facilitate medical advances that need fewer animals. Abstract: Current animal research ethics frameworks emphasise consequentialist ethics through cost-benefit or harm-benefit analysis. However, these ethical frameworks along with institutional animal ethics approval processes cannot satisfactorily decide when a given potential benefit is outweighed by costs to animals. The consequentialist calculus should, theoretically, provide for situations where research into a disease or disorder is no longer ethical, but this is difficult to determine objectively. Public support for animal research is also falling as demand for healthcare is rising. Democratisation of animal research could help resolve these tensions through facilitating ethical health consumerism or giving the public greater input into deciding the diseases and disorders where animal research is justified. Labelling drugs to disclose animal use and providing a plain-language summary of the role of animals may help promote public understanding and would respect the ethical beliefs of

  2. Compatriot partiality and cosmopolitan justice: Can we justify compatriot partiality within the cosmopolitan framework?

    Directory of Open Access Journals (Sweden)

    Rachelle Bascara

    2016-10-01

    Full Text Available This paper shows an alternative way in which compatriot partiality could be justified within the framework of global distributive justice. Philosophers who argue that compatriot partiality is similar to racial partiality capture something correct about compatriot partiality. However, the analogy should not lead us to comprehensively reject compatriot partiality. We can justify compatriot partiality on the same grounds that liberation movements and affirmative action have been justified. Hence, given cosmopolitan demands of justice, special consideration for the economic well-being of your nation as a whole is justified if and only if the country it identifies is an oppressed developing nation in an unjust global order. This justification is incomplete. We also need to say why Person A, qua national of Country A, is justified in helping her compatriots in Country A over similarly or slightly more oppressed non-compatriots in Country B. I argue that Person A’s partiality towards her compatriots admits further vindication because it is part of an oppressed group’s project of self-emancipation, which is preferable to paternalistic emancipation. Finally, I identify three benefits in my justification for compatriot partiality. First, I do not offer a blanket justification for all forms of compatriot partiality. Partiality between members of oppressed groups is only a temporary effective measure designed to level an unlevel playing field. Second, because history attests that sovereign republics could arise as a collective response to colonial oppression, justifying compatriot partiality on the grounds that I have identified is conducive to the development of sovereignty and even democracy in poor countries, thereby avoiding problems of infringement that many humanitarian poverty alleviation efforts encounter. Finally, my justification for compatriot partiality complies with the implicit cosmopolitan commitment to the realizability of global justice

  3. Assessing the DICE model: uncertainty associated with the emission and retention of greenhouse gases

    International Nuclear Information System (INIS)

    Kaufmann, R.K.

    1997-01-01

    Analysis of the DICE model indicates that it contains unsupported assumptions, simple extrapolations, and mis-specifications that cause it to understate the rate at which economic activity emits greenhouse gases and the rate at which the atmosphere retains greenhouse gases. The model assumes a world population that is 2 billion people lower than the 'base case' projected by demographers. The model extrapolates a decline in the quantity of greenhouse gases emitted per unit of economic activity that is possible only if there is a structural break in the economic and engineering factors that have determined this ratio over the last century. The model uses a single equation to simulate the rate at which greenhouse gases accumulate in the atmosphere. The forecast for the airborne fraction generated by this equation contradicts forecasts generated by models that represent the physical and chemical processes which determine the movement of carbon from the atmosphere to the ocean. When these unsupported assumptions, simple extrapolations, and misspecifications are remedied with simple fixes, the economic impact of global climate change increases several fold. Similarly, these remedies increase the impact of uncertainty on estimates for the economic impact of global climate change. Together, these results indicate that considerable scientific and economic research is needed before the threat of climate change can be dismissed with any degree of certainty. 23 refs., 3 figs

  4. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.
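    The criteria themselves are simple to compute once each candidate model has been fitted; the sketch below (with made-up log-likelihoods and parameter counts) shows the kind of comparison such simulation experiments are based on.

      import numpy as np

      # AIC and BIC for candidate specifications, given the maximized log-likelihood,
      # the number of estimated parameters k, and the sample size T.
      def aic(loglik, k):
          return -2.0 * loglik + 2.0 * k

      def bic(loglik, k, T):
          return -2.0 * loglik + k * np.log(T)

      # Hypothetical fits of two regime-switching specifications to the same series.
      T = 1000
      candidates = {"LST-GARCH": (-1410.2, 7), "MS-GARCH": (-1407.9, 9)}
      for name, (ll, k) in candidates.items():
          print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, T):.1f}")

    Because BIC's penalty k*log(T) grows with the sample size, it favours more parsimonious specifications, which is one way it can end up pointing to the wrong regime-switching framework.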

  5. Model format for a vaccine stability report and software solutions.

    Science.gov (United States)

    Shin, Jinho; Southern, James; Schofield, Timothy

    2009-11-01

    A session of the International Association for Biologicals Workshop on Stability Evaluation of Vaccine, a Life Cycle Approach was devoted to a model format for a vaccine stability report, and software solutions. Presentations highlighted the utility of a model format that will conform to regulatory requirements and the ICH common technical document. However, there needs to be flexibility to accommodate individual company practices. Adoption of a model format is premised upon agreement regarding content between industry and regulators, and ease of use. Software requirements will include ease of use and protections against inadvertent misspecification of stability design or misinterpretation of program output.

  6. Influenza vaccination in Dutch nursing homes: is tacit consent morally justified?

    Science.gov (United States)

    Verweij, M F; van den Hoven, M A

    2005-01-01

    Efficient procedures for obtaining informed (proxy) consent may contribute to high influenza vaccination rates in nursing homes. Yet are such procedures justified? This study's objective was to gain insight into informed consent policies in Dutch nursing homes; to assess how these may affect influenza vaccination rates and to answer the question whether deviating from standard informed consent procedures could be morally justified. A survey among nursing home physicians. We sent a questionnaire to all (356) nursing homes in the Netherlands, to be completed by one of the physicians. We received 245 completed questionnaires. As 21 institutions appeared to be closed or merged into other institutions, the response was 73.1% (245/335). Of all respondents 81.9% reported a vaccination rate above 80%. Almost 50% reported a vaccination rate above 90%. Most respondents considered herd immunity to be an important consideration for institutional policy. Freedom of choice for residents was considered important by almost all. Nevertheless, 106 out of 245 respondents follow a tacit consent procedure, according to which vaccination will be administered unless the resident or her proxy refuses. These institutions show significantly higher vaccination rates. ... Tacit consent procedures can be morally justifiable. Such procedures assume that vaccination is good for residents either as individuals or as a group. Even though this assumption may be true for most residents, there are good reasons for preferring express consent procedures.

  7. Thumba X-ray plant: Are radiation fears justified

    International Nuclear Information System (INIS)

    Madhvanath, U.

    1978-01-01

    Technical facts about the X-ray unit located at Vikram Sarabhai Space Centre, Thumba (India) are set down to explain that it is not posing any radiation hazard as reported in a newspaper and thus radiation fears are not justifiable. It is stated that, after thorough checking, X-ray installations in this space centre cause negligible exposure even to workers who handle these units, and others practically do not get any exposure at all. (B.G.W.)

  8. Justifying a recommendation: tell a story or present an argument?

    NARCIS (Netherlands)

    van den Hoven, P.J.

    2017-01-01

    In the deliberative genre there is a complex ‘playground’ of choices to present a recommendation; a rhetorician has to determine his or her position. Relevant dimensions are the coerciveness of the recommendation and the strength of its justifi cation, but also the presentation format, varying from

  9. Is nuclear energy ethically justifiable?

    International Nuclear Information System (INIS)

    Zuend, H.

    1988-01-01

    Nuclear technology brings the chance to provide an essential long term contribution to the energy supply of the world population and to use the raw materials uranium and thorium which have no other use. The use of nuclear energy is ethically justifiable providing certain simple fundamental rules for the design of nuclear facilities are observed. Such rules were clearly violated before the reactor accident at Chernobyl. They are, however, observed in our existing nuclear power plants. Compared with other energy systems nuclear energy has, with the exception of natural gas, the lowest risk. The consideration of the ethical justification of nuclear energy must also include the question of withdrawal. A withdrawal would have considerable social consequences for the industrial nations as well as for the developing countries. The problem of spreading alarm (and concern) by the opponents of nuclear energy should also be included in the ethical justification. 8 refs., 2 figs

  10. Is nuclear energy ethically justifiable?

    International Nuclear Information System (INIS)

    Zuend, H.

    1987-01-01

    Nuclear technology offers the chance to make an extremely long term contribution to the energy supply of the earth. The use of nuclear energy is ethically justifiable, provided that several fundamental rules are obeyed during the technical design of nuclear installations. Such fundamental rules were unequivocally violated in the nuclear power plant Chernobyl. They are, however, fulfilled in the existing Swiss nuclear power plants. Improvements are possible in new nuclear power plants. Compared to other usable energy systems nuclear energy is second only to natural gas in minimal risk per generated energy unit. The question of ethical justification also may rightly be asked of the non-use of nuclear energy. The socially weakest members of the Swiss population would suffer most under a renunciation of nuclear energy. Future prospects for the developing countries would deteriorate considerably with a renunciation by industrial nations of nuclear energy. The widely spread fear concerning the nuclear energy in the population is a consequence of non-objective discussion. 8 refs., 2 figs

  11. Suing One's Sense Faculties for Fraud: 'Justifiable Reliance' in the ...

    African Journals Online (AJOL)

    The law requires that plaintiffs in fraud cases be 'justified' in relying on a misrepresentation. I deploy the accumulated intuitions of the law to defend externalist accounts of epistemic justification and knowledge against Laurence BonJour's counterexamples involving clairvoyance. I suggest that the law can offer a ...

  12. Justifying Clinical Nudges.

    Science.gov (United States)

    Gorin, Moti; Joffe, Steven; Dickert, Neal; Halpern, Scott

    2017-03-01

    The shift away from paternalistic decision-making and toward patient-centered, shared decision-making has stemmed from the recognition that in order to practice medicine ethically, health care professionals must take seriously the values and preferences of their patients. At the same time, there is growing recognition that minor and seemingly irrelevant features of how choices are presented can substantially influence the decisions people make. Behavioral economists have identified striking ways in which trivial differences in the presentation of options can powerfully and predictably affect people's choices. Choice-affecting features of the decision environment that do not restrict the range of choices or significantly alter the incentives have come to be known as "nudges." Although some have criticized conscious efforts to influence choice, we believe that clinical nudges may often be morally justified. The most straightforward justification for nudge interventions is that they help people bypass their cognitive limitations-for example, the tendency to choose the first option presented even when that option is not the best for them-thereby allowing people to make choices that best align with their rational preferences or deeply held values. However, we argue that this justification is problematic. We argue that, if physicians wish to use nudges to shape their patients' choices, the justification for doing so must appeal to an ethical and professional standard, not to patients' preferences. We demonstrate how a standard with which clinicians and bioethicists already are quite familiar-the best-interest standard-offers a robust justification for the use of nudges. © 2017 The Hastings Center.

  13. The Shifting Seasonal Mean Autoregressive Model and Seasonality in the Central England Monthly Temperature Series, 1772-2016

    DEFF Research Database (Denmark)

    He, Changli; Kang, Jian; Terasvirta, Timo

    In this paper we introduce an autoregressive model with seasonal dummy variables in which coefficients of seasonal dummies vary smoothly and deterministically over time. The error variance of the model is seasonally heteroskedastic and multiplicatively decomposed, the decomposition being similar ...... temperature series. More specifically, the idea is to find out in which way and by how much the monthly temperatures are varying over time during the period of more than 240 years, if they do. Misspecification tests are applied to the estimated model and the findings discussed....
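    One plausible way to write such a model (a sketch for orientation, not necessarily the authors' exact parameterization) is with seasonal dummy coefficients that shift between regimes through a smooth logistic transition in rescaled time:

      y_t = \sum_{s=1}^{12} \delta_s(t) D_{s,t} + \sum_{j=1}^{p} \phi_j y_{t-j} + \sigma_t \varepsilon_t, \qquad
      \delta_s(t) = \delta_{s,0} + \delta_{s,1} G(t/T; \gamma, c), \qquad
      G(u; \gamma, c) = \left(1 + e^{-\gamma (u - c)}\right)^{-1}

    Here D_{s,t} are monthly dummy variables, and the seasonal error variance \sigma_t^2 can itself be decomposed multiplicatively; misspecification tests of the kind mentioned in the abstract are then applied to the estimated model.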

  14. Topics in modelling of clustered data

    CERN Document Server

    Aerts, Marc; Ryan, Louise M; Geys, Helena

    2002-01-01

    Many methods for analyzing clustered data exist, all with advantages and limitations in particular applications. Compiled from the contributions of leading specialists in the field, Topics in Modelling of Clustered Data describes the tools and techniques for modelling the clustered data often encountered in medical, biological, environmental, and social science studies. It focuses on providing a comprehensive treatment of marginal, conditional, and random effects models using, among others, likelihood, pseudo-likelihood, and generalized estimating equations methods. The authors motivate and illustrate all aspects of these models in a variety of real applications. They discuss several variations and extensions, including individual-level covariates and combined continuous and discrete outcomes. Flexible modelling with fractional and local polynomials, omnibus lack-of-fit tests, robustification against misspecification, exact, and bootstrap inferential procedures all receive extensive treatment. The application...

  15. The Luckless and the Doomed: Contractualism on Justified Risk-Imposition

    DEFF Research Database (Denmark)

    Holm, Sune Hannibal

    2018-01-01

    Several authors have argued that contractualism faces a dilemma when it comes to justifying risks generated by socially valuable activities. At the heart of the matter is the question of whether contractualists should adopt an ex post or an ex ante perspective when assessing whether an action...... to prohibit a range of intuitively permissible and socially valuable activities....

  16. Mandatory Personal Therapy: Does the Evidence Justify the Practice? In Debate

    Science.gov (United States)

    Chaturvedi, Surabhi

    2013-01-01

    The article addresses the question of whether the practice of mandatory personal therapy, followed by several training organisations, is justified by existing research and evidence. In doing so, it discusses some implications of this training requirement from an ethical and ideological standpoint, raising questions of import for training…

  17. Justifying decisions in social dilemmas: justification pressures and tacit coordination under environmental uncertainty.

    Science.gov (United States)

    de Kwaadsteniet, Erik W; van Dijk, Eric; Wit, Arjaan; De Cremer, David; de Rooij, Mark

    2007-12-01

    This article investigates how justification pressures influence harvesting decisions in common resource dilemmas. The authors argue that when a division rule prescribes a specific harvest level, such as under environmental certainty, people adhere more strongly to this division rule when they have to justify their decisions to fellow group members. When a division rule does not prescribe a specific harvest level, such as under environmental uncertainty, people restrict their harvests when they have to justify their decisions to fellow group members. The results of two experimental studies corroborate this line of reasoning. The findings are discussed in terms of tacit coordination. The authors specify conditions under which justification pressures may or may not facilitate efficient coordination.

  18. Letter: Can Islamic Jurisprudence Justify Procurement of Transplantable Vital Organs in Brain Death?

    Science.gov (United States)

    Rady, Mohamed Y

    2018-01-01

    In their article, "An International Legal Review of the Relationship between Brain Death and Organ Transplantation," in The Journal of Clinical Ethics 29, no. 1, Aramesh, Arima, Gardiner, and Shah reported on diverse international legislative approaches for justifying procurement of transplantable vital organs in brain death. They stated, "In Islamic traditions in particular, the notion of unstable life is a way to justify organ donation from brain-dead patients that we believe has not been fully described previously in the literature." This commentary queries the extent to which this concept is valid in accordance with the primary source of Islamic law, that is, the Quran. Copyright 2018 The Journal of Clinical Ethics. All rights reserved.

  19. Calculation-experimental method justifies the life of wagons

    Directory of Open Access Journals (Sweden)

    Валерія Сергіївна Воропай

    2015-11-01

    Full Text Available The article proposes a method to evaluate the technical state of tank wagons operating in the chemical industry. An algorithm for evaluating the technical state of tank wagons was developed that makes it possible, on the basis of diagnosis and analysis of current condition, to justify a further period of operation. A complex of works on testing the tanks, together with mathematical models for calculating design strength and reliability, is proposed. The article is devoted to solving the problem of effective exploitation of the working fleet of tank wagons. Opportunities for further exploitation of the cars, the complex of works on assessing their technical state, and the calculation of their remaining resource are proposed. Engineering research on the chemical industry's fleet has reduced the shortage of rolling stock for the transportation of ammonia. An analysis of the numerous chassis faults and of the main elements of the tank wagons' supporting structure after 20 years of exploitation was carried out. An algorithm for determining the residual life of specialized tank wagons operating in an industrial plant is proposed, along with a procedure for resource conservation of tank wagons carrying cargo under high pressure. The improved procedure for identifying residual life proposed in the article has both theoretical and practical importance

  20. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal.
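    For orientation, here is a minimal sketch of one of the compared adjustments, stabilized inverse probability weighting (illustrative only; the variable names and the logistic propensity model are assumptions, and the paper's own simulation setup is more elaborate).

      import numpy as np
      import statsmodels.api as sm

      # Stabilized inverse probability weighting: weights are P(A=a) / P(A=a | X),
      # with the denominator estimated from a logistic propensity score model.
      def stabilized_ipw_effect(y, a, X):
          """y, a: 1-D NumPy arrays (binary outcome and treatment); X: 2-D confounder array."""
          ps_fit = sm.Logit(a, sm.add_constant(X)).fit(disp=0)     # propensity score model
          ps = ps_fit.predict(sm.add_constant(X))
          p_treated = a.mean()
          w = np.where(a == 1, p_treated / ps, (1 - p_treated) / (1 - ps))
          # Weighted outcome model with treatment as the only covariate gives the
          # marginal treatment effect on the log-odds scale.
          out = sm.GLM(y, sm.add_constant(a), family=sm.families.Binomial(),
                       freq_weights=w).fit()
          return out.params[1]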

  1. A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market

    DEFF Research Database (Denmark)

    Hurn, A.S.; Silvennoinen, Annastiina; Teräsvirta, Timo

    We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy

  2. Justifying gender discrimination in the workplace: The mediating role of motherhood myths.

    Science.gov (United States)

    Verniers, Catherine; Vala, Jorge

    2018-01-01

    The issue of gender equality in employment has given rise to numerous policies in advanced industrial countries, all aimed at tackling gender discrimination regarding recruitment, salary and promotion. Yet gender inequalities in the workplace persist. The purpose of this research is to document the psychosocial process involved in the persistence of gender discrimination against working women. Drawing on the literature on the justification of discrimination, we hypothesized that the myths according to which women's work threatens children and family life mediates the relationship between sexism and opposition to a mother's career. We tested this hypothesis using the Family and Changing Gender Roles module of the International Social Survey Programme. The dataset contained data collected in 1994 and 2012 from 51632 respondents from 18 countries. Structural equation modellings confirmed the hypothesised mediation. Overall, the findings shed light on how motherhood myths justify the gender structure in countries promoting gender equality.

  3. Model Justified Search Algorithms for Scheduling Under Uncertainty

    National Research Council Canada - National Science Library

    Howe, Adele; Whitley, L. D

    2008-01-01

    .... We also identified plateaus as a significant barrier to superb performance of local search on scheduling and have studied several canonical discrete optimization problems to discover and model the nature of plateaus...

  4. Modelling Conditional and Unconditional Heteroskedasticity with Smoothly Time-Varying Structure

    DEFF Research Database (Denmark)

    Amado, Christina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the conditional variance to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterizations describe both nonlinearity and structural change in the conditional and unconditional variances where the transition between regimes over time is smooth. A modelling strategy for these new time-varying parameter GARCH models is developed. It relies on a sequence of Lagrange multiplier tests, and the adequacy of the estimated models is investigated by Lagrange multiplier type misspecification tests. Finite-sample properties of these procedures and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice.
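    A common way to write the multiplicative variant of such a decomposition (a sketch of the general idea, not necessarily the exact specification used here) is:

      \sigma_t^2 = h_t \, g_t, \qquad
      h_t = \omega + \alpha \frac{\varepsilon_{t-1}^2}{g_{t-1}} + \beta h_{t-1}, \qquad
      g_t = 1 + \sum_{l=1}^{r} \delta_l \, G_l(t/T; \gamma_l, c_l)

    Here h_t is a conventional GARCH component describing the conditional variance, while g_t is a deterministic, smoothly time-varying component built from logistic transition functions G_l in rescaled time that captures structural change in the unconditional variance; the Lagrange multiplier tests mentioned above are used to decide whether, and how many, transitions are needed.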

  5. Evidence-based, ethically justified counseling for fetal bilateral renal agenesis

    Science.gov (United States)

    Thomas, Alana N.; McCullough, Laurence B.; Chervenak, Frank A.; Placencia, Frank X.

    2017-01-01

    Background: Not much data are available on the natural history of bilateral renal agenesis, as the medical community does not typically offer aggressive obstetric or neonatal care as bilateral renal agenesis has been accepted as a lethal condition. Aim: To provide an evidence-based, ethically justified approach to counseling pregnant women about the obstetric management of bilateral renal agenesis. Study design: A systematic literature search was performed using multiple databases. We deploy an ethical analysis of the results of the literature search on the basis of the professional responsibility model of obstetric ethics. Results: Eighteen articles met the inclusion criteria for review. With the exception of a single case study using serial amnioinfusion, there has been no other case of survival following dialysis and transplantation documented. Liveborn babies die during the neonatal period. Counseling pregnant women about management of pregnancies complicated by bilateral renal agenesis should be guided by beneficence-based judgment informed by evidence about outcomes. Conclusions: Based on the ethical analysis of the results from this review, without experimental obstetric intervention, neonatal mortality rates will continue to be 100%. Serial amnioinfusion therefore should not be offered as treatment, but only as approved innovation or research. PMID:28222038

  6. Army Justified Initial Production Plan for the Paladin Integrated Management Program but Has Not Resolved Two Vehicle Performance Deficiencies (Redacted)

    Science.gov (United States)

    2016-08-05


  7. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
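    To convey the two-step idea in the simplest setting (uncensored data and a Clayton copula; purely illustrative and far simpler than the paper's censored-data estimator), the marginals are first estimated nonparametrically and the copula parameter is then estimated by maximizing the pseudo-likelihood evaluated at those estimates.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import rankdata

      # Clayton copula log-likelihood evaluated at pseudo-observations (u, v).
      def clayton_loglik(theta, u, v):
          return np.sum(np.log(1.0 + theta)
                        - (theta + 1.0) * (np.log(u) + np.log(v))
                        - (2.0 + 1.0 / theta) * np.log(u**(-theta) + v**(-theta) - 1.0))

      def two_step_clayton(x, y):
          n = len(x)
          u = rankdata(x) / (n + 1.0)       # step 1: nonparametric marginal estimates
          v = rankdata(y) / (n + 1.0)
          res = minimize_scalar(lambda t: -clayton_loglik(t, u, v),
                                bounds=(1e-3, 20.0), method="bounded")
          return res.x                       # step 2: estimated copula parameter

    When the copula family is misspecified, the second step converges to the pseudo-true parameter value closest to the truth in Kullback-Leibler terms, which is the sense in which the abstract's consistency statement is framed.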

  8. Justifying the Use of Internet Sources in School Assignments on Controversial Issues

    Science.gov (United States)

    Mikkonen, Teemu

    2018-01-01

    Introduction: This study concerns students' criteria in the evaluation of Internet sources for a school assignment requiring reflections on a controversial issue. The findings are elaborated by analysing students' discursive accounts in justifying the use or non-use of sources. Method: The interview data was collected in a Finnish upper secondary…

  9. "Teach Your Children Well": Arguing in Favor of Pedagogically Justifiable Hospitality Education

    Science.gov (United States)

    Potgieter, Ferdinand J.

    2016-01-01

    This paper is a sequel to the paper which I delivered at last year's BCES conference in Sofia. Making use of hermeneutic phenomenology and constructive interpretivism as methodological apparatus, I challenge the pedagogic justifiability of the fashionable notion of religious tolerance. I suggest that we need, instead, to reflect "de…

  10. Specification and misspecification of theoretical foundations and logic models for health communication campaigns.

    Science.gov (United States)

    Slater, Michael D

    2006-01-01

    While increasingly widespread use of behavior change theory is an advance for communication campaigns and their evaluation, such theories provide a necessary but not sufficient condition for theory-based communication interventions. Such interventions and their evaluations need to incorporate theoretical thinking about plausible mechanisms of message effect on health-related attitudes and behavior. Otherwise, strategic errors in message design and dissemination, and misspecified campaign logic models, insensitive to campaign effects, are likely to result. Implications of the elaboration likelihood model, attitude accessibility, attitude to the ad theory, exemplification, and framing are explored, and implications for campaign strategy and evaluation designs are briefly discussed. Initial propositions are advanced regarding a theory of campaign affect generalization derived from attitude to ad theory, and regarding a theory of reframing targeted health behaviors in those difficult contexts in which intended audiences are resistant to the advocated behavior or message.

  11. Cost-justifying usability an update for the internet age

    CERN Document Server

    Bias, Randolph G; Bias, Randolph G

    2005-01-01

    You just know that an improvement of the user interface will reap rewards, but how do you justify the expense and the labor and the time (guarantee a robust ROI!) ahead of time? How do you decide how much of an investment should be funded? And what is the best way to sell usability to others? In this completely revised and new edition, Randolph G. Bias (University of Texas at Austin, with 25 years' experience as a usability practitioner and manager) and Deborah J. Mayhew (internationally recognized usability consultant and author of two other seminal books including The Usability Enginee

  12. Laboratory experiments cannot be utilized to justify the action of early streamer emission terminals

    International Nuclear Information System (INIS)

    Becerra, Marley; Cooray, Vernon

    2008-01-01

    The early emission of streamers in laboratory long air gaps under switching impulses has been observed to reduce the time of initiation of leader positive discharges. This fact has been arbitrarily extrapolated by the manufacturers of early streamer emission devices to the case of upward connecting leaders initiated under natural lightning conditions, in support of those non-conventional terminals that claim to perform better than Franklin lightning rods. In order to discuss the physical basis and validity of these claims, a self-consistent model based on the physics of leader discharges is used to simulate the performance of lightning rods in the laboratory and under natural lightning conditions. It is theoretically shown that the initiation of early streamers can indeed lead to the early initiation of self-propagating positive leaders in laboratory long air gaps under switching voltages. However, this is not the case for positive connecting leaders initiated from the same lightning rod under the influence of the electric field produced by a downward moving stepped leader. The time evolution of the development of positive leaders under natural conditions is different from the case in the laboratory, where the leader inception condition is closely dependent upon the initiation of the first streamer burst. Our study shows that the claimed similarity between the performance of lightning rods under switching electric fields applied in the laboratory and under the electric field produced by a descending stepped leader is not justified. Thus, the use of existing laboratory results to validate the performance of the early streamer lightning rods under natural conditions is not justified

  13. Justifying gender discrimination in the workplace: The mediating role of motherhood myths

    Science.gov (United States)

    2018-01-01

    The issue of gender equality in employment has given rise to numerous policies in advanced industrial countries, all aimed at tackling gender discrimination regarding recruitment, salary and promotion. Yet gender inequalities in the workplace persist. The purpose of this research is to document the psychosocial process involved in the persistence of gender discrimination against working women. Drawing on the literature on the justification of discrimination, we hypothesized that the myths according to which women’s work threatens children and family life mediates the relationship between sexism and opposition to a mother’s career. We tested this hypothesis using the Family and Changing Gender Roles module of the International Social Survey Programme. The dataset contained data collected in 1994 and 2012 from 51632 respondents from 18 countries. Structural equation modellings confirmed the hypothesised mediation. Overall, the findings shed light on how motherhood myths justify the gender structure in countries promoting gender equality. PMID:29315326

  14. A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research.

    OpenAIRE

    Jarvis, Cheryl Burke; MacKenzie, Scott B; Podsakoff, Philip M

    2003-01-01

    A review of the literature suggests that few studies use formative indicator measurement models, even though they should. Therefore, the purpose of this research is to (a) discuss the distinction between formative and reflective measurement models, (b) develop a set of conceptual criteria that can be used to determine whether a construct should be modeled as having formative or reflective indicators, (c) review the marketing literature to obtain an estimate of the extent of measurement model ...

  15. Belief in School Meritocracy as a System-justifying Tool for Low Status Students

    Directory of Open Access Journals (Sweden)

    Virginie eWiederkehr

    2015-07-01

    Full Text Available The belief that, in school, success only depends on will and hard work is widespread in Western societies despite evidence showing that several factors other than merit explain school success, including group belonging (e.g., social class, gender). In the present paper, we argue that because merit is the only track for low status students to reach upward mobility, Belief in School Meritocracy (BSM) is a particularly useful system-justifying tool to help them perceive their place in society as being deserved. Consequently, for low status students (but not high status students), this belief should be related to more general system-justifying beliefs (Study 1). Moreover, low status students should be particularly prone to endorsing this belief when their place within a system on which they strongly depend to acquire status is challenged (Study 2). In Study 1, high status (boys and high SES) were compared to low status (girls and low SES) high school students. Results indicated that BSM was related to system-justifying beliefs only for low SES students and for girls, but not for high SES students or for boys. In Study 2, university students were exposed (or not) to information about an important selection process that occurs at the university, depending on the condition. Their subjective status was assessed. Although such a confrontation reduced BSM for high subjective SES students, it tended to enhance it for low subjective SES students. Results are discussed in terms of system-justification motives and the palliative function meritocratic ideology may play for low status students.

  16. Belief in school meritocracy as a system-justifying tool for low status students.

    Science.gov (United States)

    Wiederkehr, Virginie; Bonnot, Virginie; Krauth-Gruber, Silvia; Darnon, Céline

    2015-01-01

    The belief that, in school, success only depends on will and hard work is widespread in Western societies despite evidence showing that several factors other than merit explain school success, including group belonging (e.g., social class, gender). In the present paper, we argue that because merit is the only track for low status students to reach upward mobility, Belief in School Meritocracy (BSM) is a particularly useful system-justifying tool to help them perceive their place in society as being deserved. Consequently, for low status students (but not high status students), this belief should be related to more general system-justifying beliefs (Study 1). Moreover, low status students should be particularly prone to endorsing this belief when their place within a system on which they strongly depend to acquire status is challenged (Study 2). In Study 1, high status (boys and high SES) were compared to low status (girls and low SES) high school students. Results indicated that BSM was related to system-justifying beliefs only for low SES students and for girls, but not for high SES students or for boys. In Study 2, university students were exposed (or not) to information about an important selection process that occurs at the university, depending on the condition. Their subjective status was assessed. Although such a confrontation reduced BSM for high subjective SES students, it tended to enhance it for low subjective SES students. Results are discussed in terms of system justification motives and the palliative function meritocratic ideology may play for low status students.

  17. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
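    A bare-bones sketch of the within-cluster resampling idea (illustrative only; the column names are placeholders and the paper works with generalized linear mixed-effects models rather than OLS):

      import numpy as np
      import statsmodels.api as sm

      # Within-cluster resampling: repeatedly draw one observation per cluster, fit an
      # ordinary regression to each resampled data set, and average the estimates.
      # Every cluster contributes exactly one observation per resample, which is how the
      # procedure guards against informative cluster sizes for cluster-level covariates.
      def wcr_estimate(df, cluster_col, y_col, x_cols, n_resamples=500, seed=0):
          """`df` is a pandas DataFrame with one row per observation."""
          rng = np.random.default_rng(seed)
          groups = df.groupby(cluster_col)
          coefs = []
          for _ in range(n_resamples):
              one_per_cluster = groups.sample(n=1, random_state=int(rng.integers(1_000_000)))
              X = sm.add_constant(one_per_cluster[x_cols])
              coefs.append(sm.OLS(one_per_cluster[y_col], X).fit().params.values)
          return np.mean(coefs, axis=0)

    As the abstract notes, this kind of averaging is valid for cluster-specific covariates but breaks down for time-dependent covariates, where joint modeling is the safer choice.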

  18. The Bernstein-Von-Mises theorem under misspecification

    NARCIS (Netherlands)

    Kleijn, B.J.K.; van der Vaart, A.W.

    2012-01-01

    We prove that the posterior distribution of a parameter in misspecified LAN parametric models can be approximated by a random normal distribution. We derive from this that Bayesian credible sets are not valid confidence sets if the model is misspecified. We obtain the result under conditions that
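    The content of the theorem can be stated schematically (a sketch of the standard formulation, with notation chosen here for illustration): under misspecification the posterior still concentrates and looks normal, but around the pseudo-true parameter and with a curvature-based covariance that need not match the sampling covariance of the estimator.

      \theta^* = \arg\min_\theta \mathrm{KL}(P_0 \,\|\, P_\theta), \qquad
      \big\| \Pi(\cdot \mid X_1, \dots, X_n) - N\big(\hat\theta_n, (n V_{\theta^*})^{-1}\big) \big\|_{TV} \to 0

      V_{\theta^*} = -E_0\big[\partial_\theta^2 \log p_\theta(X)\big]_{\theta=\theta^*}, \qquad
      \sqrt{n}\,(\hat\theta_n - \theta^*) \rightsquigarrow N\big(0, V_{\theta^*}^{-1} W_{\theta^*} V_{\theta^*}^{-1}\big), \qquad
      W_{\theta^*} = E_0\big[\partial_\theta \log p_\theta(X)\, \partial_\theta \log p_\theta(X)^\top\big]_{\theta=\theta^*}

    Because the posterior spread is governed by V^{-1} while the frequentist spread of the estimator is the sandwich V^{-1} W V^{-1}, credible sets generally have the wrong coverage when the model is misspecified, which is the negative result highlighted in the abstract.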

  19. On the Estimation of Disease Prevalence by Latent Class Models for Screening Studies Using Two Screening Tests with Categorical Disease Status Verified in Test Positives Only

    Science.gov (United States)

    Chu, Haitao; Zhou, Yijie; Cole, Stephen R.; Ibrahim, Joseph G.

    2010-01-01

    Summary To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss a conditional independent and three homogeneous conditional dependent latent class models and assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous dependent models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure itself is not “testable” from the data, and thus the dependence structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging together with its limitations as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. PMID:20191614

  20. Is Investment in Maize Research Balanced and Justified? An Empirical Study

    Directory of Open Access Journals (Sweden)

    Hari Krishna Shrestha

    2016-12-01

    Full Text Available The objective of this study was to investigate whether the investment in maize research was adequate and balanced in the Nepalese context. Resource use in maize research was empirically studied with standard congruency analysis, using Full Time Equivalent (FTE) of researchers as a proxy measure of investment. The number of researchers involved in maize was 61, but it was only 21.25 on an FTE basis, indicating that full-time researchers were very few compared to the cultivated area of maize in the country. Statistical analysis revealed that the investment in maize research was higher in the Tarai and lower in the Hills. The congruency index on an actual production basis was found to be low across the eco-zones and even lower across the geographical regions, indicating that the investment in maize research was mismatched and not justified. When adjusted for the equity factor and the research progress factor, no substantial difference was found in the congruency index. This study recommends that investment in maize research be increased substantially, in a balanced and justified manner, across the eco-zones and the geographical regions. The Hills need special attention for increased investment, as the maize output value is higher in this eco-zone. The eastern and western regions also need increased investment in maize according to their contribution to the output value.

  1. HALL project. Justifying synthesis of the dimensioning inventory model

    International Nuclear Information System (INIS)

    Lagrange, M.H.

    2003-01-01

    This document explains the input data and the hypotheses retained for establishing the dimensioning inventory model (DIM). It first recalls the scenarios considered for spent fuel and reprocessing management, describes the updating of the list of families of high-activity and long-lived (HALL) waste packages, and states the hypotheses used for quantifying them in the inventory model. It also presents the selection criteria for type-packages and the list of such packages, and specifies the regrouping of package families into type-packages together with the related quantitative data. Finally, it details how the radiological and chemical description of the type-packages is prepared. (J.S.)

  2. Justifying molecular images in cell biology textbooks: From constructions to primary data.

    Science.gov (United States)

    Serpente, Norberto

    2016-02-01

    For scientific claims to be reliable and productive they have to be justified. However, on the one hand little is known on what justification precisely means to scientists, and on the other the position held by philosophers of science on what it entails is rather limited; for justifications customarily refer to the written form (textual expressions) of scientific claims, leaving aside images, which, as many cases from the history of science show are relevant to this process. The fact that images can visually express scientific claims independently from text, plus their vast variety and origins, requires an assessment of the way they are currently justified and in turn used as sources to justify scientific claims in the case of particular scientific fields. Similarly, in view of the different nature of images, analysis is required to determine on what side of the philosophical distinction between data and phenomena these different kinds of images fall. This paper historicizes and documents a particular aspect of contemporary life sciences research: the use of the molecular image as vehicle of knowledge production in cell studies, a field that has undergone a significant shift in visual expressions from the early 1980s onwards. Focussing on textbooks as sources that have been overlooked in the historiography of contemporary biomedicine, the aim is to explore (1) whether the shift of cell studies, entailing a superseding of the optical image traditionally conceptualised as primary data, by the molecular image, corresponds with a shift of justificatory practices, and (2) to assess the role of the molecular image as primary data. This paper also explores the dual role of images as teaching resources and as resources for the construction of knowledge in cell studies especially in its relation to discovery and justification. Finally, this paper seeks to stimulate reflection on what kind of archival resources could benefit the work of present and future epistemic

  3. "Men Are Dogs": Is The Stereotype Justified? Data On the Cheating College Male

    Science.gov (United States)

    Knox, David; Vail-Smith, Karen; Zusman, Marty

    2008-01-01

    Analysis of data from 1394 undergraduates at a large southeastern university were used to assess the degree to which the stereotype that "men are dogs" (sexually-focused cheaters) is justified. Results suggest that this stereotype is unjustified since the majority of males: (1) define behaviors from kissing to anal sex as cheating; (2)…

  4. Hedging endowment assurance products under interest rate and mortality risk

    NARCIS (Netherlands)

    Chen, A.; Mahayni, A.

    2007-01-01

    This paper analyzes how model misspecification associated with both interest rate and mortality risk influences the hedging decisions of insurance companies. For this purpose, diverse risk management strategies which are risk-minimizing when model risk is ignored come into consideration. The

  5. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
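
    To make the structure of the sandwich-type estimator concrete, here is a minimal sketch for a plain maximum-likelihood logistic model rather than a CDM; the simulated data, the Newton loop and all variable names are hypothetical, and the point is only to show how the observed information (the "bread") and the empirical cross-product of scores (the "meat") combine as inv(A) B inv(A).

```python
import numpy as np

# Minimal sketch of a sandwich-type covariance estimate for an ML fit.
# Illustrated with a simple logistic regression, not a cognitive diagnosis model.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def score_obs(beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X * (y - p)[:, None]            # n x k matrix of per-observation scores

def neg_hessian(beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (X * (p * (1.0 - p))[:, None])   # observed information

beta = np.zeros(2)                          # crude Newton iterations for the MLE
for _ in range(25):
    beta = beta + np.linalg.solve(neg_hessian(beta), score_obs(beta).sum(axis=0))

A = neg_hessian(beta)                       # "bread": observed information matrix
S = score_obs(beta)
B = S.T @ S                                 # "meat": empirical cross-product matrix
cov_model = np.linalg.inv(A)                                # model-based covariance
cov_sandwich = np.linalg.inv(A) @ B @ np.linalg.inv(A)      # robust to misspecification
print(np.sqrt(np.diag(cov_model)), np.sqrt(np.diag(cov_sandwich)))
```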

  6. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis

    of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov bootstrap versions of the test statistics are proposed to improve on the finite-sample approximations. The finite-sample properties of the estimators are examined in a simulation study.

  7. Portfolio Management with Stochastic Interest Rates and Inflation Ambiguity

    DEFF Research Database (Denmark)

    Munk, Claus; Rubtsov, Alexey Vladimirovich

    2014-01-01

    prices. The investor is ambiguous about the inflation model and prefers a portfolio strategy which is robust to model misspecification. Ambiguity about the inflation dynamics is shown to affect the optimal portfolio in a fundamentally different way than ambiguity about the price dynamics of traded assets

  8. Differentiating intraday seasonalities through wavelet multi-scaling

    NARCIS (Netherlands)

    Gençay, R.; Selçuk, F.; Whitcher, B.

    2001-01-01

    It is well documented that strong intraday seasonalities may induce distortions in the estimation of volatility models. These seasonalities are also a dominant source of the underlying misspecification in various volatility models. Therefore, an obvious route is to filter out the underlying

  9. Statistical tests for equal predictive ability across multiple forecasting methods

    DEFF Research Database (Denmark)

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  10. Context Based Inferences in Research Methodology: The Role of Culture in Justifying Knowledge Claims

    Science.gov (United States)

    Evers, Colin W.; Mason, Mark

    2011-01-01

    Drawing on work in epistemology and the philosophy of science, this paper seeks to provide very general reasons for why a comparative perspective needs to be applied to the inferential procedures of research methodologies where these concern the issue of justifying knowledge claims. In particular, the paper explores the role of culture on a number…

  11. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation to misspecification of the restricted general location model when there is substantial missingness in the outcome variable.
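
    As a rough illustration of what full-conditional-specification cycling looks like in practice (not the general location model itself), the sketch below chains two univariate linear imputation models for two continuous variables. The data, the missingness mechanism and the helper function are hypothetical, and a proper multiple-imputation implementation would additionally draw the regression parameters from their posterior and produce several completed data sets.

```python
import numpy as np

# Toy sketch of full-conditional-specification (chained equations) imputation
# for two continuous variables, each imputed from a regression on the other.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.7, size=n)
miss_x = rng.random(n) < 0.2                   # missingness indicators (MCAR here)
miss_y = rng.random(n) < 0.2

def impute(target, miss, predictor):
    """Refill the missing entries of `target` from a linear fit of its observed part on `predictor`."""
    Z = np.column_stack([np.ones(n), predictor])
    coef, *_ = np.linalg.lstsq(Z[~miss], target[~miss], rcond=None)
    sd = np.std(target[~miss] - Z[~miss] @ coef)
    out = target.copy()
    out[miss] = Z[miss] @ coef + rng.normal(scale=sd, size=miss.sum())
    return out

xi = x.copy(); xi[miss_x] = x[~miss_x].mean()  # crude starting values for missing entries
yi = y.copy(); yi[miss_y] = y[~miss_y].mean()
for _ in range(10):                            # FCS cycles: update each conditional in turn
    yi = impute(yi, miss_y, xi)
    xi = impute(xi, miss_x, yi)
print("correlation after imputation:", round(np.corrcoef(xi, yi)[0, 1], 3))
```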

  12. [Justifying measures to correct functional state of operators varying in personal anxiety].

    Science.gov (United States)

    2012-01-01

    Workers in operating and dispatching occupations are exposed to high nervous and emotional strain, which results in increased personal anxiety, occupational stress and overstrain. This calls for physiologically justified correction of hazardous psycho-physiological states through various prophylactic measures (stays in a schungite room, autogenous training, central electric analgesia, electric acupuncture). Trial relaxation sessions in a schungite room increased the speed of visual signal perception and of attention concentration and shifting in highly anxious individuals. Autogenous training sessions improved memory and had a significant hypotensive effect in highly anxious individuals.

  13. Improved productivity justifies world record underbalanced perforating operation

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.; Bakker, E. R. [NAM B.V. (Netherlands); Hungerford, K.

    1998-12-31

    To achieve vertical connectivity with all the layers, and thus long-term sustained productivity in a highly stratified reservoir, a one-run underbalanced perforating operation was considered necessary. Due to coiled tubing limitations in this deep (5136 m along hole, 3700 m true vertical depth, with a maximum deviation of 89 degrees), high-pressure well, a hydraulic workover unit (HWU) was selected to deploy and retrieve the guns. The operation is considered a world record since this is the longest section (a total gross interval of 1026 m perforated) of guns conveyed, fired underbalanced and deployed out of a live well. It is concluded that the improved productivity more than justified the additional time, effort and expenditure; considering the full life cycle of the well it is readily apparent that the operation was an economic and technical success. Details of the considerations leading to the perforating technique selection, the planning and the execution of the operation, and the validation of the technique in terms of productivity gains, are provided. 13 refs., 7 figs.

  14. Justifying the design and selection of literacy and thinking tools

    Directory of Open Access Journals (Sweden)

    David Whitehead

    2008-10-01

    Full Text Available Criteria for the design and selection of literacy and thinking tools that allow educators to justify what they do are described within a wider framework of learning theory and research into best practice. Based on a meta-analysis of best practice, results from a three year project designed to evaluate the effectiveness of a secondary school literacy initiative in New Zealand, together with recent research from cognitive and neuro-psychologists, it is argued that the design and selection of literacy and thinking tools used in elementary schools should be consistent with (i) teaching focused, (ii) learner focused, (iii) thought linked, (iv) neurologically consistent, (v) subject specific, (vi) text linked, (vii) developmentally appropriate, and (viii) assessment linked criteria.

  15. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
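
    A minimal sketch of the Monte Carlo randomization idea, assuming complete randomization and a simple difference-in-means statistic in place of the residual-based statistics discussed above; the data and the number of re-randomizations are hypothetical.

```python
import numpy as np

# Monte Carlo randomization test: compare the observed statistic with its
# distribution over re-generated treatment assignments (here, re-shuffles that
# respect the observed number of treated subjects under complete randomization).
rng = np.random.default_rng(2)
n = 100
treat = rng.binomial(1, 0.5, size=n)                 # realized randomization sequence
y = 0.4 * treat + rng.normal(size=n)                 # hypothetical outcomes

def statistic(outcome, assignment):
    return outcome[assignment == 1].mean() - outcome[assignment == 0].mean()

obs = statistic(y, treat)
draws = np.array([statistic(y, rng.permutation(treat)) for _ in range(5000)])
p_value = np.mean(np.abs(draws) >= abs(obs))          # two-sided Monte Carlo p-value
print(f"observed difference {obs:.3f}, randomization p-value {p_value:.4f}")
```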

  16. Digital and multimedia forensics justified: An appraisal on professional policy and legislation

    Science.gov (United States)

    Popejoy, Amy Lynnette

    Recent progress in professional policy and legislation at the federal level in the field of forensic science constructs a transformation of new outcomes for future experts. An exploratory and descriptive qualitative methodology was used to critique and examine Digital and Multimedia Science (DMS) as a justified forensic discipline. Chapter I summarizes Recommendations 1, 2, and 10 of the National Academy of Sciences (NAS) Report 2009 regarding disparities and challenges facing the forensic science community. Chapter I also delivers the overall foundation and framework of this thesis, specifically how it relates to DMS. Chapter II expands on Recommendation 1: "The Promotion and Development of Forensic Science," and focuses chronologically on professional policy and legislative advances through 2014. Chapter III addresses Recommendation 2: "The Standardization of Terminology in Reporting and Testimony," and the issues of legal language and terminology, model laboratory reports, and expert testimony concerning DMS case law. Chapter IV analyzes Recommendation 10: "Insufficient Education and Training," identifying legal awareness for the digital and multimedia examiner to understand the role of the expert witness, the attorney, the judge and the admission of forensic science evidence in litigation in our criminal justice system. Finally, Chapter V studies three DME specific laboratories at the Texas state, county, and city level, concentrating on current practice and procedure.

  17. Testing for Stock Market Contagion: A Quantile Regression Approach

    NARCIS (Netherlands)

    S.Y. Park (Sung); W. Wang (Wendun); N. Huang (Naijing)

    2015-01-01

    Regarding the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test

  18. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Full Text Available Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  19. When is deliberate killing of young children justified? Indigenous interpretations of infanticide in Bolivia.

    Science.gov (United States)

    de Hilari, Caroline; Condori, Irma; Dearden, Kirk A

    2009-01-01

    In the Andes, as elsewhere, infanticide is a difficult challenge that remains largely undocumented and misunderstood. From January to March 2004 we used community-based vital event surveillance systems, discussions with health staff, ethnographic interviews, and focus group discussions among Aymara men and women from two geographically distinct sites in the Andes of Bolivia to provide insights into the practice of infanticide. We noted elevated mortality at both sites. In one location, suspected infanticide was especially high among girls. We also observed that community members maintain beliefs that justify infanticide under certain circumstances. Among the Aymara, justification for infanticide was both biological (deformities and twinship) and social (illegitimate birth, family size and poverty). Communities generally did not condemn killing when reasons for doing so were biological, but the taking of life for social reasons was rarely justified. In this cultural context, strategies to address the challenge of infanticide should include education of community members about alternatives to infanticide. At a program level, planners and implementers should target ethnic groups with high levels of infanticide and train health care workers to detect and address multiple warning signs for infanticide (for example, domestic violence and child maltreatment) as well as proxies for infant neglect and abuse such as mother/infant separation and bottle use.

  20. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
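
    The following sketch is an illustration, not the authors' code: it fits a main-terms Poisson working model to simulated two-arm trial data and compares the treatment coefficient with the empirical marginal log rate ratio. The data-generating model is deliberately non-Poisson, so the working model is misspecified; all names and parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Main-terms Poisson working model in a simulated randomized trial with a binary outcome.
rng = np.random.default_rng(3)
n = 5000
treat = rng.binomial(1, 0.5, size=n)                       # randomized assignment
w = rng.normal(size=n)                                     # baseline covariate
p = 1 / (1 + np.exp(-(-1.0 + 0.5 * treat + 0.8 * w)))      # true outcome model: logistic
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, w]))           # intercept, treatment, covariate
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()     # misspecified working model
log_rr_model = fit.params[1]                               # treatment coefficient

# empirical marginal log rate ratio, for comparison (they should agree in large samples)
log_rr_empirical = np.log(y[treat == 1].mean() / y[treat == 0].mean())
print(round(log_rr_model, 3), round(log_rr_empirical, 3))
```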

  1. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  2. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…

  3. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    Science.gov (United States)

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on the parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model that accounts for model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion in the context of mixed effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
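
    As a small, hypothetical illustration of the "model median" idea (ignoring the mixed-effects structure and the repeated-measures design), one can estimate a Box-Cox transformation, compute group means on the transformed scale, and back-transform them to medians on the original scale:

```python
import numpy as np
from scipy import stats

# Model medians via Box-Cox: if the transformed outcome is approximately normal
# with mean mu, the median on the original scale is the inverse transform of mu.
rng = np.random.default_rng(10)
group = np.repeat([0, 1], 200)
y = rng.lognormal(mean=1.0 + 0.3 * group, sigma=0.5)       # skewed positive outcome

z, lam = stats.boxcox(y)                                    # estimate lambda and transform
mu0, mu1 = z[group == 0].mean(), z[group == 1].mean()

def inverse_boxcox(value, lam):
    return np.exp(value) if abs(lam) < 1e-8 else (lam * value + 1.0) ** (1.0 / lam)

median0, median1 = inverse_boxcox(mu0, lam), inverse_boxcox(mu1, lam)
print("model medians:", round(median0, 2), round(median1, 2),
      "| sample medians:", round(np.median(y[group == 0]), 2), round(np.median(y[group == 1]), 2))
```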

  4. Comparing fixed effects and covariance structure estimators for panel data

    DEFF Research Database (Denmark)

    Ejrnæs, Mette; Holm, Anders

    2006-01-01

    In this article, the authors compare the traditional econometric fixed effect estimator with the maximum likelihood estimator implied by covariance structure models for panel data. Their findings are that the maximum likelihood estimator is remarkably robust to certain types of misspecifications...

  5. The Misspecification of the Covariance Structures in Multilevel Models for Single-Case Data: A Monte Carlo Simulation Study

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim

    2016-01-01

    The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…

  6. More Precise Estimation of Lower-Level Interaction Effects in Multilevel Models.

    Science.gov (United States)

    Loeys, Tom; Josephy, Haeike; Dewitte, Marieke

    2018-01-01

    In hierarchical data, the effect of a lower-level predictor on a lower-level outcome may often be confounded by an (un)measured upper-level factor. When such confounding is left unaddressed, the effect of the lower-level predictor is estimated with bias. Separating this effect into a within- and between-component removes such bias in a linear random intercept model under a specific set of assumptions for the confounder. When the effect of the lower-level predictor is additionally moderated by another lower-level predictor, an interaction between both lower-level predictors is included into the model. To address unmeasured upper-level confounding, this interaction term ought to be decomposed into a within- and between-component as well. This can be achieved by first multiplying both predictors and centering that product term next, or vice versa. We show that while both approaches, on average, yield the same estimates of the interaction effect in linear models, the former decomposition is much more precise and robust against misspecification of the effects of cross-level and upper-level terms, compared to the latter.
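
    The two decompositions contrasted in the abstract can be written down in a few lines. The sketch below, with hypothetical column names and simulated data, shows (a) multiplying the two lower-level predictors first and then cluster-mean-centering the product, versus (b) centering each predictor within clusters first and then multiplying.

```python
import numpy as np
import pandas as pd

# Two ways of splitting a lower-level interaction into within- and between-cluster parts.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(50), 10),
    "x1": rng.normal(size=500),
    "x2": rng.normal(size=500),
})
g = df.groupby("cluster")

# (a) multiply first, then center the product within clusters
df["x1x2"] = df["x1"] * df["x2"]
df["x1x2_between"] = g["x1x2"].transform("mean")
df["x1x2_within"] = df["x1x2"] - df["x1x2_between"]

# (b) center each predictor within clusters first, then multiply
df["x1_within"] = df["x1"] - g["x1"].transform("mean")
df["x2_within"] = df["x2"] - g["x2"].transform("mean")
df["x1w_x2w"] = df["x1_within"] * df["x2_within"]

# The two "within" interaction terms are correlated but not identical,
# which is why the choice between the decompositions matters.
print(df[["x1x2_within", "x1w_x2w"]].corr().iloc[0, 1])
```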

  7. Is development of geothermal energy resource in Macedonia justified or not?

    International Nuclear Information System (INIS)

    Popovski, Kiril; Popovska Vasilevska, Sanja

    2007-01-01

    During the 1980s, Macedonia was one of the world leaders in the development of direct application of geothermal energy. Within a period of only 6-7 years, geothermal energy reached a share of 0.7% in the State energy balance. However, the situation has changed during the last 20 years: development of this energy resource has not only stopped, but some of the existing projects have been abandoned, leading to regression. This situation is illogical, given that direct use proved to be technically feasible and economically justified. The paper summarizes the present situation of geothermal projects in Macedonia and outlines possibilities and justifications for improving them and for developing new resources. The final conclusion is that the development of direct application of geothermal energy in Macedonia offers (in comparison with other renewable energy resources) the best energy and economic effects. (Author)

  8. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer within the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, i.e. extra variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise exponential regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
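
    A generic, regression-based check for overdispersion in a Poisson model (in the spirit of, but not identical to, the relative-survival score test used in the paper) can be sketched as follows; the data are simulated and the auxiliary-regression form follows the standard Cameron-Trivedi construction. The robust-standard-error refit at the end shows one of the corrections mentioned above.

```python
import numpy as np
import statsmodels.api as sm

# Regression-based overdispersion check for a Poisson fit, on simulated counts.
rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))           # truly overdispersed counts

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = pois.fittedvalues

# Auxiliary regression: ((y - mu)^2 - y) / mu = alpha * mu + error; test alpha > 0.
z = ((y - mu_hat) ** 2 - y) / mu_hat
aux = sm.OLS(z, mu_hat).fit()                               # no intercept by construction
print("alpha =", round(aux.params[0], 3), " t =", round(aux.tvalues[0], 2))

# One simple correction: keep the Poisson mean model but use robust (sandwich) SEs.
robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("naive SEs:", np.round(pois.bse, 4), " robust SEs:", np.round(robust.bse, 4))
```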

  9. Exploratory Analyses To Improve Model Fit: Errors Due to Misspecification and a Strategy To Reduce Their Occurrence.

    Science.gov (United States)

    Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer

    1999-01-01

    The use of Lagrange multiplier (LM) tests in specification searches and the errors that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages, which involve adding parameters according to LM tests to maximize fit and then deleting parameters not needed…

  10. What justifies the United States ban on federal funding for nonreproductive cloning?

    Science.gov (United States)

    Cunningham, Thomas V

    2013-11-01

    This paper explores how current United States policies for funding nonreproductive cloning are justified and argues against that justification. I show that a common conceptual framework underlies the national prohibition on the use of public funds for cloning research, which I call the simple argument. This argument rests on two premises: that research harming human embryos is unethical and that embryos produced via fertilization are identical to those produced via cloning. In response to the simple argument, I challenge the latter premise. I demonstrate there are important ontological differences between human embryos (produced via fertilization) and clone embryos (produced via cloning). After considering the implications my argument has for the morality of publicly funding cloning for potential therapeutic purposes and potential responses to my position, I conclude that such funding is not only ethically permissible, but also humane national policy.

  11. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  12. La guerre en Irak peut-elle être justifiée comme un cas d’intervention humanitaire?

    Directory of Open Access Journals (Sweden)

    Stéphane Courtois

    2006-05-01

    Full Text Available Most current criticisms of the intervention in Iraq have tackled the two justifications articulated by the members of the coalition: (1) that the United States had to neutralize the threats that Iraq posed to their own security and to political stability in the Middle East, and (2) that the war in Iraq can be justified as a necessary stage in the war against international terrorism. The principal objection against justification (1) is that it was, and remains, unfounded. Against justification (2), many have replied that the intervention in Iraq had no connection, or at best a merely indirect connection, with the fight against terrorism. In a recent text, Fernando Tesón claims that the American intervention in Iraq can nevertheless be morally justified as a case of humanitarian intervention. By "humanitarian intervention" one must understand a coercive action taken by a state or a group of states inside the sphere of jurisdiction of an independent political community, without the permission of the latter, in order to prevent or to end a massive violation of individual rights perpetrated against innocent persons who are not co-nationals within this political community. I argue in this article that the American intervention in Iraq does not satisfy the conditions of a legitimate humanitarian intervention, contrary to what Fernando Tesón claims.

  13. Can conditional health policies be justified? A policy analysis of the new NHS dental contract reforms.

    Science.gov (United States)

    Laverty, Louise; Harris, Rebecca

    2018-06-01

    Conditional policies, which emphasise personal responsibility, are becoming increasingly common in healthcare. Although used widely internationally, they are relatively new within the UK health system, where there have been concerns about whether they can be justified. New NHS dental contracts include the introduction of a conditional component that restricts certain patients from accessing a full range of treatment until they have complied with preventative action. A policy analysis of published documents on the NHS dental contract reforms from 2009 to 2016 was conducted to consider how conditionality is justified and whether its execution is likely to cause distributional effects. Contractualist, paternalistic and mutualist arguments that reflect notions of responsibility and obligation are used as justification within policy. Underlying these arguments is an emphasis on preserving the finite resources of a strained NHS. We argue that the proposed conditional component may differentially affect disadvantaged patients, who do not necessarily have access to the resources needed to meet the behavioural requirements. As such, the conditional component of the NHS dental contract reform has the potential to exacerbate oral health inequalities. Conditional health policies may challenge core NHS principles and, as is the case with any conditional policy, should be carefully considered to ensure they do not exacerbate health inequities. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations.

    Science.gov (United States)

    Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas

    2005-08-01

    The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.
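
    A toy Euler-Maruyama simulation can make the decomposition into system noise and measurement noise concrete; the one-compartment, log-scale elimination model and all parameter values below are hypothetical and unrelated to the degarelix analysis.

```python
import numpy as np

# Euler-Maruyama simulation of an SDE state (system noise) observed with assay error
# (measurement noise), illustrating the two-noise decomposition used in SDE-based NLME models.
rng = np.random.default_rng(6)
dt, n_steps = 0.1, 200
ke, sigma_system, sigma_meas = 0.3, 0.05, 0.1    # elimination rate and the two noise scales

state = np.empty(n_steps)
state[0] = np.log(10.0)                          # log-concentration after a bolus dose
for t in range(1, n_steps):
    drift = -ke * dt                             # first-order elimination on the log scale
    diffusion = sigma_system * np.sqrt(dt) * rng.normal()   # system noise increment
    state[t] = state[t - 1] + drift + diffusion

observations = state + sigma_meas * rng.normal(size=n_steps)  # uncorrelated measurement noise
print(np.round(observations[:5], 3))
```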

  15. How arguments are justified in the media debate on climate change in the USA and France

    OpenAIRE

    Ylä-Anttila, Tuomas; Kukkonen, Anna

    2014-01-01

    This paper examines the differences in the values that are evoked to justify arguments in the media debate on climate change in USA and France from 1997 to 2011. We find that climate change is more often discussed in terms of justice, democracy, and legal regulation in France, while monetary value plays a more important role as a justification for climate policy arguments in the USA. Technological and scientific arguments are more often made in France, and ecological arguments equally in both...

  16. Do the ends justify the means? Nursing and the dilemma of whistleblowing.

    Science.gov (United States)

    Firtko, Angela; Jackson, Debra

    2005-01-01

    Patient advocacy and a desire to rectify misconduct in the clinical setting are frequently cited reasons for whistleblowing in nursing and healthcare. This paper explores current knowledge about whistleblowing in nursing and critiques current definitions of whistleblowing. The authors draw on published perspectives of whistleblowing including the media, to reflect on the role of the media in health related whistleblowing. Whistleblowing represents a dilemma for nurses. It strikes at the heart of professional values and raises questions about the responsibilities nurses have to communities and clients, the profession, and themselves. In its most damaging forms, whistleblowing necessarily involves a breach of ethical standards, particularly confidentiality. Despite the pain that can be associated with whistleblowing, if the ends are improved professional standards, enhanced outcomes, rectification of wrongdoings, and, increased safety for patients and staff in our health services, then the ends definitely justify the means.

  17. What the eye doesn’t see: An analysis of strategies for justifying acts by an appeal for concealing them

    NARCIS (Netherlands)

    Tellings, A.E.J.M.

    2006-01-01

    This article analyzes the moral reasoning implied in a very commonly used expression, namely, “What the eye doesn't see, the heart doesn't grieve over”, or “What you don't know won't hurt you.” It especially deals with situations in which it is used for trying to justify acts that are, in

  18. What justifies a hospital admission at the end of life? A focus group study on perspectives of family physicians and nurses

    NARCIS (Netherlands)

    Reyniers, T.; Houttekieri, D.; Cohen, J.; Pasman, H.R.; Deliens, L.

    2014-01-01

    Background: Despite a majority preferring not to die in hospital and health policies aimed at increasing home death, the proportion of hospital deaths remains high. Gaining insight into professional caregiver perspectives about what justifies them could be helpful in understanding the persistently

  19. Justifying British Advertising in War and Austerity, 1939-51.

    Science.gov (United States)

    Haughton, Philippa

    2017-09-01

    Drawing together institutional papers, the trade- and national-press, and Mass-Observation documents, this article examines the changing ways that the Advertising Association justified commercial advertising from 1939 to 1951. It argues that the ability to repeatedly re-conceptualize the social and economic purposes of advertising was central to the industry's survival and revival during the years of war and austerity. This matters because the survival and revival of commercial advertising helps to explain the composition of the post-war mixed economy and the emergence of a consumer culture that became the 'golden age' of capitalism. While commercial advertising's role in supporting periods of affluence is well documented, much less is known about its relationship with war and austerity. This omission is problematic. Advertising was only able to shape the 1950s and 1960s economy because its corporate structures remained intact during the 1940s, as the industry withstood the challenges of wartime and the difficulties presented under Attlee's government. Recognizing the deliberate attempts of advertising people to promote a role for commercial advertising invites us to reconsider the inevitability of post-war affluence, while offering fresh insight into the debate around consumer education, freedom of choice, and the centrality of advertising and communication in democratic society: issues central to the society Britain was, and hoped to become. © The Author [2017]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Photonic jet etching: Justifying the shape of optical fiber tip

    Science.gov (United States)

    Abdurrochman, Andri; Zelgowski, Julien; Lecler, Sylvain; Mermet, Frédéric; Tumbelaka, Bernard; Fontaine, Joël

    2016-02-01

    A photonic jet (PJ) is a low-diverging and highly concentrated beam on the shadow side of a dielectric particle (cylinder or sphere). The concentration can be more than 200 times higher than that of the incident wave. It is a non-resonant near-field phenomenon that can propagate over a few wavelengths. Many potential applications have been proposed, including PJ etching. Hence, a guided beam is considered to increase control over PJ mobility. While others have used a combination of classical optical fibers and spheres, we focus on a classical optical fiber with a spherical tip to generate the PJ. This fiber-driven PJ has been realized using a Gaussian-mode beam inside the core. It has different adjustable parameters compared to the classical PJ, which will be discussed in relation to the etching demonstrations. The dependencies between the tip parameters and the PJ properties are complex, and the theoretical aspects of this interaction will be presented to justify the shape of the tip and of the optical fiber used in our demonstrations. Methods to fabricate such an optical fiber tip will also be described. Finally, the ability to generate a PJ from the shaped optical fiber will be demonstrated experimentally, and potential applications for material processing will be presented.

  1. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
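
    The core of the Hausman comparison can be sketched generically: two estimates of the same parameter vector are contrasted using the difference of their covariance matrices. The numbers below are placeholders, not diffusion-model output; in the psychometric application they would come from two different fits of the item parameters.

```python
import numpy as np
from scipy import stats

# Generic Hausman-type misspecification statistic for two estimators of the same parameters.
theta_a = np.array([0.52, 1.10, -0.31])          # estimator A (efficient if the model holds)
theta_b = np.array([0.48, 1.18, -0.25])          # estimator B (consistent under misspecification)
cov_a = np.diag([0.010, 0.020, 0.015])
cov_b = np.diag([0.014, 0.026, 0.019])

diff = theta_b - theta_a
V = cov_b - cov_a                                 # under the null, B is less efficient than A
H = diff @ np.linalg.pinv(V) @ diff               # Hausman statistic, ~ chi2(df) under the null
df = np.linalg.matrix_rank(V)
print("H =", round(H, 3), " p =", round(1 - stats.chi2.cdf(H, df), 3))
```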

  2. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  3. Business capital accumulation and the user cost: is there a heterogeneity bias? JRC Working Papers in Economics and Finance, 2017/11

    OpenAIRE

    FATICA SERENA

    2017-01-01

    Empirical models of capital accumulation estimated on aggregate data series are based on the assumption that capital asset types respond in the same way to cost variables. Likewise, aggregate models do not consider potential heterogeneity in investment behavior originating on the demand side for capital, e.g. at the sector level. We show that the underlying assumption of homogeneity may indeed lead to misspecification of standard aggregate investment models. Using data from 23 sectors in 10 O...

  4. How can interventions for inhabitants be justified after a nuclear accident? An approach based on the radiological protection system of the international commission on radiological protection

    International Nuclear Information System (INIS)

    Takahara, Shogo; Homma, Toshimitsu; Yoneda, Minoru; Shimada, Yoko

    2016-01-01

    Management of radiation-induced risks in areas contaminated by a nuclear accident is characterized by three ethical issues: (1) risk trade-off, (2) paternalistic intervention and (3) individualization of responsibilities. To deal with these issues and to clarify the requirements for justifying interventions aimed at reducing radiation-induced risks, we explored the ethical basis of the radiological protection system of the International Commission on Radiological Protection (ICRP). The ICRP's radiological protection system is built on three normative ethical theories: utilitarianism, deontology and virtue ethics. The three ethical issues can be resolved within a decision-making framework that combines these theories. In addition, interventions for inhabitants can be justified in two ways. Firstly, when the dangers are severe and far-reaching, interventions could be justified together with a sufficient explanation of the nature of the harmful effects (or beneficial consequences). Secondly, interventions could be justified if they promote the autonomy of the individuals concerned. (author)

  5. Is the term "fasciculus opticus cerebralis" more justifiable than the term "optic nerve"?

    Science.gov (United States)

    Vojniković, Bojo; Bajek, Snjezana; Bajek, Goran; Strenja-Linić, Ines; Grubesić, Aron

    2013-04-01

    The terminology of the optic nerve was changed three times between 1895 and 1955, when the term "nervus opticus" was introduced in the "Terminologia Anatomica". Following our study, we claim that, from the standpoint of the phylogenetic evolution of binocular vision and of optical embryogenesis, where the opticus is evidently a product of diencephalic structures, the addition of the term "nervus" to opticus is neither adequate nor justified. From the clinical aspect the term "nervus opticus" is also inadequate, both as a "nerve" that, unlike other cranial nerves, has no functional regenerative properties, and from the pedagogical and didactical aspect of educating future physicians. We suggest that the term "Fasciculus Opticus Cerebralis" be used instead, as it much better reflects the origin of the structure as well as its affiliation with the central nervous system.

  6. Modeling units of study from a pedagogical perspective: the pedagogical meta-model behind EML

    NARCIS (Netherlands)

    Koper, Rob

    2003-01-01

    This text is a short summary of the work on pedagogical analysis carried out when EML (Educational Modelling Language) was being developed. Because we address pedagogical meta-models, I must justify the underlying pedagogical models they describe. I have included a (far from

  7. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model, that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up for new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time varying liver extraction based on both C-peptide and insulin measurements.

  8. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
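
    One simulation-based route of the kind described here is a parametric bootstrap of a Pearson-type discrepancy for a fitted NB2 regression; the sketch below is a generic illustration with simulated data and an arbitrary replicate count, not the authors' proposed tests.

```python
import numpy as np
import statsmodels.api as sm

# Parametric-bootstrap goodness-of-fit check for a negative binomial (NB2) regression.
rng = np.random.default_rng(7)
n = 300
x = rng.normal(size=n)
mu = np.exp(1.0 + 0.4 * x)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))        # NB counts with dispersion ~ 0.5

X = sm.add_constant(x)

def discrepancy(counts, fitted_mu):
    """Pearson-type discrepancy between counts and fitted means."""
    return np.sum((counts - fitted_mu) ** 2 / np.maximum(fitted_mu, 1e-8))

obs_fit = sm.NegativeBinomial(y, X).fit(disp=0)
mu_hat = obs_fit.predict(X)
alpha_hat = obs_fit.params[-1]                           # estimated NB2 dispersion
obs_stat = discrepancy(y, mu_hat)

size_param = 1.0 / max(alpha_hat, 1e-6)                  # NB "size" for simulation
sim_stats = []
for _ in range(200):                                     # modest replicate count for speed
    y_sim = rng.negative_binomial(size_param, size_param / (size_param + mu_hat))
    sim_fit = sm.NegativeBinomial(y_sim, X).fit(disp=0)
    sim_stats.append(discrepancy(y_sim, sim_fit.predict(X)))

p_value = np.mean(np.array(sim_stats) >= obs_stat)
print("parametric-bootstrap goodness-of-fit p-value:", round(p_value, 3))
```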

  9. Should she be granted asylum? Examining the justifiability of the persecution criterion and nexus clause in asylum law

    Directory of Open Access Journals (Sweden)

    Noa Wirth Nogradi

    2016-10-01

    Full Text Available The current international asylum regime recognizes only persecuted persons as rightful asylum applicants. The Geneva Convention and Protocol enumerate specific grounds upon which persecution is recognized. Claimants who cannot demonstrate a real risk of persecution based on one of the recognized grounds are unlikely to be granted asylum. This paper aims to relate real-world practices to normative theories, asking whether the Convention’s restricted preference towards persecuted persons is normatively justified. I intend to show that the justifications of the persecution criterion also apply to grounds currently lacking recognition. My main concern will be persecution on the grounds of gender. The first section introduces the dominant standpoints in theories of asylum, which give different answers to the question of who should be granted asylum, based on different normative considerations. Humanitarian theories base their claims on the factual neediness of asylum-seekers, holding that whoever is in grave danger of harm or deprivation should be granted asylum. Political theories base their justifications on conceptions of legitimacy and membership, holding that whoever has been denied membership in their original state should be granted asylum. Under political theories, Matthew Price’s theory will be discussed, which provides a normative justification of the currently recognized persecution criterion. The second section provides a descriptive definition of persecution based on Kuosmanen (2014), and evaluates the normative relevance of the different elements of this definition based on the theories presented previously. The third section is devoted to the examination of the normative justifiability of the nexus clause’s exclusive list of the bases (grounds) upon which persons might be persecuted. The section argues that while the clause does not recognize that persecution might be based on gender, in fact many women experience harms based on

  10. How to justify enforcing a Ulysses contract when Ulysses is competent to refuse.

    Science.gov (United States)

    Davis, John K

    2008-03-01

    Sometimes the mentally ill have sufficient mental capacity to refuse treatment competently, and others have a moral duty to respect their refusal. However, those with episodic mental disorders may wish to precommit themselves to treatment, using Ulysses contracts known as "mental health advance directives." How can health care providers justify enforcing such contracts over an agent's current, competent refusal? I argue that providers respect an agent's autonomy not retrospectively (by reference to his or her past wishes), and not merely synchronically (so that the agent gets what he or she wants right now), but diachronically and prospectively, acting so that the agent can shape his or her circumstances as the agent wishes over time, for the agent will experience the consequences of providers' actions over time. Mental health directives accomplish this, so they are a way of respecting the agent's autonomy even when providers override the agent's current competent refusal.

  11. Forecasting Hong Kong economy using factor augmented vector autoregression

    OpenAIRE

    Pang, Iris Ai Jao

    2010-01-01

    This work applies the FAVAR model to forecast the GDP growth rate, unemployment rate and inflation rate of the Hong Kong economy. There is no existing factor-model forecasting literature on the Hong Kong economy. The objective is to find out whether factor forecasting using a large dataset can improve forecast performance for the Hong Kong economy. To avoid misspecification of the number of factors in the FAVAR, combination forecasts are constructed. It is found that forecasts from the FAVAR model overall o...
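
    A bare-bones sketch of the FAVAR recipe, with simulated placeholder series standing in for the Hong Kong indicators and target variable: extract principal-component factors from a large panel, stack them with the series of interest, and fit a VAR on the augmented vector.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# FAVAR sketch: PCA factors from a large panel, then a VAR on [target, factors].
rng = np.random.default_rng(9)
T, N, n_factors = 200, 50, 2
common = rng.normal(size=(T, n_factors))                           # latent common factors
panel = common @ rng.normal(size=(n_factors, N)) + rng.normal(scale=0.5, size=(T, N))
gdp_growth = 0.5 * common[:, 0] + rng.normal(scale=0.3, size=T)    # hypothetical target series

panel_std = (panel - panel.mean(0)) / panel.std(0)                 # standardize the panel
_, _, vt = np.linalg.svd(panel_std, full_matrices=False)
factors = panel_std @ vt[:n_factors].T                             # principal-component factors

data = np.column_stack([gdp_growth, factors])
fit = VAR(data).fit(2)                                             # factor-augmented VAR, 2 lags
forecast = fit.forecast(data[-fit.k_ar:], steps=4)                 # 4-step-ahead forecasts
print(np.round(forecast[:, 0], 3))                                 # forecasts of the target series
```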

  12. Quadrilatero ferrifero, MG, Brazil. Regional characteristics justify application for global geoparks network

    International Nuclear Information System (INIS)

    Mantesso-Neto, V.; Azevedo, U.; Guimarães, R.; Nascimento, M.; Beato, D.; Castro, P.; Liccardo, A.

    2010-01-01

    Geopark, a concept created in 2000, is neither strictly geological nor a park in the usual sense. Geopark is a holistic concept, aimed at promoting sustainable economic development based on unique geological features (represented by "geosites", outcrops with special value from some point of view), while also having a social objective. The Global Geoparks Network (GGN), working in synergy with UNESCO, has 64 members in 19 countries. This paper presents a brief history and some characteristics of a few European Geoparks, followed by some aspects of the Quadrilátero Ferrífero. As will be seen, this area is rich in geosites and in historical, social and cultural attractions. On the other hand, foreseeing a decline in mineral exploitation by mid-century, it urgently seeks a good plan for regional development. In conclusion, it will be seen that its characteristics fit the Geopark concept and justify the support of the geoscientific community, and that of society in general, for its application for admission to the GGN, recently submitted to UNESCO.

  13. Modelling Embedded Systems by Non-Monotonic Refinement

    NARCIS (Netherlands)

    Mader, Angelika H.; Marincic, J.; Wupper, H.

    2008-01-01

    This paper addresses the process of modelling embedded systems for formal verification. We propose a modelling process built on non-monotonic refinement and a number of guidelines. The outcome of the modelling process is a model, together with a correctness argument that justifies our modelling

  14. On deciding to have a lobotomy: either lobotomies were justified or decisions under risk should not always seek to maximise expected utility.

    Science.gov (United States)

    Cooper, Rachel

    2014-02-01

    In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk are made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced; or we can conclude that the use of formal decision procedures, such as MEU, is problematic.

  15. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature has typically assumed a static risk-return relationship. However, several studies have found anomalies in asset pricing modelling which capture the presence of risk instability. A dynamic model is proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can also be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the ability to control curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in the Carhart 4-factor model.
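
    The time-varying-beta estimation can be illustrated with a B-spline basis in calendar time. The sketch below uses simulated returns and an arbitrary basis dimension; it is not the authors' data or exact estimator.

```python
# Time-varying CAPM beta via a B-spline basis in calendar time:
# r_asset,t = alpha + beta(t) * r_mkt,t + e_t, with beta(t) = sum_j c_j B_j(t).
# The data below are simulated placeholders; in practice use excess returns.
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(1)
T = 240
time = np.arange(T)
r_mkt = rng.normal(0.005, 0.04, T)
true_beta = 0.8 + 0.4 * np.sin(2 * np.pi * time / T)          # slowly varying risk
r_asset = 0.001 + true_beta * r_mkt + rng.normal(0, 0.02, T)

# Cubic B-spline basis in time (df controls the flexibility of beta(t)).
basis = np.asarray(dmatrix("bs(time, df=6, degree=3, include_intercept=True) - 1",
                           {"time": time}))
X = sm.add_constant(basis * r_mkt[:, None])     # each column: B_j(t) * market return
fit = sm.OLS(r_asset, X).fit()
beta_hat = basis @ fit.params[1:]               # recovered beta(t) path
print(np.corrcoef(beta_hat, true_beta)[0, 1])   # how well the path is tracked
```
    The same construction extends to the Fama-French and Carhart specifications by interacting each additional factor with the basis.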

  16. 32 CFR 37.560 - Must I be able to estimate project expenditures precisely in order to justify use of a fixed...

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Must I be able to estimate project expenditures... I be able to estimate project expenditures precisely in order to justify use of a fixed-support TIA... purposes of this illustration, let that minimum recipient cost sharing be 40% of the total project costs...

  17. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural Cox model context.

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Platt, Robert W

    2017-06-15

    Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of a SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
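
    A rough sketch of the weighting step for a single time point (the MSCM application involves time-varying treatments and covariates). Scikit-learn's StackingClassifier is used here as a cross-validated stand-in for the super learner, and the variable names and simulated data are assumptions.

```python
# Sketch of IPW estimation with a cross-validated ensemble ("super learner"-style
# stacking via scikit-learn), followed by a weighted Cox fit with lifelines.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x1 * x2)))        # non-additive truth
treat = rng.binomial(1, p_treat)
time = rng.exponential(1 / np.exp(-0.5 * treat + 0.3 * x1), n)
event = rng.binomial(1, 0.8, n)
df = pd.DataFrame({"x1": x1, "x2": x2, "treat": treat, "time": time, "event": event})

# Ensemble propensity model: candidate learners combined by cross-validation.
learners = [("logit", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=200, min_samples_leaf=20))]
sl = StackingClassifier(estimators=learners,
                        final_estimator=LogisticRegression(max_iter=1000), cv=5)
ps = sl.fit(df[["x1", "x2"]], df["treat"]).predict_proba(df[["x1", "x2"]])[:, 1]

# Stabilized inverse probability of treatment weights.
p_marg = df["treat"].mean()
df["ipw"] = np.where(df["treat"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

# Weighted Cox model (robust variance recommended with estimated weights).
cox = CoxPHFitter()
cox.fit(df[["time", "event", "treat", "ipw"]], duration_col="time",
        event_col="event", weights_col="ipw", robust=True)
print(cox.params_)
```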

  18. Is routine antenatal venereal disease research laboratory test still justified? Nigerian experience

    Directory of Open Access Journals (Sweden)

    Nwosu BO

    2015-01-01

    Full Text Available Betrand O Nwosu,1 George U Eleje,1 Amaka L Obi-Nwosu,2 Ita F Ahiarakwem,3 Comfort N Akujobi,4 Chukwudi C Egwuatu,4 Chukwudumebi O Onyiuke5 1Department of Obstetrics and Gynecology, Nnamdi Azikiwe University, Nnewi Campus, Nnewi, Anambra State, Nigeria; 2Department of Family Medicine, Nnamdi Azikiwe University Teaching Hospital, Nnewi, Nigeria; 3Department of Medical Microbiology, Imo State University Teaching Hospital, Orlu, Imo State, Nigeria; 4Department of Medical Microbiology, Nnamdi Azikiwe University, Nnewi Campus, Nnewi, Anambra State, Nigeria; 5Department of Medical Microbiology, Nnamdi Azikiwe University Teaching Hospital, Nnewi, Anambra State, Nigeria. Objective: To determine the seroreactivity of pregnant women to syphilis in order to justify the need for routine antenatal syphilis screening. Methods: A multicenter retrospective analysis of routine antenatal venereal disease research laboratory (VDRL) test results between 1 September 2010 and 31 August 2012 at three specialist care hospitals in south-east Nigeria was done. A reactive VDRL result is subjected to confirmation using the Treponema pallidum hemagglutination assay test. Analysis was by Epi Info 2008 version 3.5.1 and Stata/IC version 10. Results: Adequate records were available regarding 2,156 patients and were thus reviewed. The mean age of the women was 27.4 years (±3.34), and mean gestational age was 26.4 weeks (±6.36). Only 15 cases (0.70%) were seropositive to VDRL. Confirmatory T. pallidum hemagglutination assay was positive in 4 of the 15 cases, giving an overall prevalence of 0.19% and a false-positive rate of 73.3%. There was no significant difference in the prevalence of syphilis in relation to maternal age and parity (P>0.05). Conclusion: While the prevalence of syphilis is extremely low in the antenatal care population at the three specialist care hospitals in south-east Nigeria, the false-positive rate is high and prevalence did not significantly vary with maternal age or

  19. PLS and multicollinearity under conditions common in satisfaction studies

    DEFF Research Database (Denmark)

    Nielsen, Rikke; Kristensen, Kai; Eskildsen, Jacob Kjær

    A number of studies have investigated the performance of the PLS path modelling algorithm in the presence of common empirical problems, such as model misspecification, skewness of manifest variables, missing values, and multicollinearity, and they have shown PLS to be quite robust (see e.g. Cassel et al., 1999; Kristensen & Eskildsen, 2005). However, most of the studies, including our own, have focused on somewhat simple models with very simple correlation structures. This paper extends the existing knowledge by investigating the effect of varying degrees of multicollinearity on the PLS model...

  20. Is radiography justified for the evaluation of patients presenting with cervical spine trauma?

    Energy Technology Data Exchange (ETDEWEB)

    Theocharopoulos, Nicholas; Chatzakis, Georgios; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Iraklion, 71003 Crete (Greece) and Department of Natural Sciences, Technological Education Institute of Crete, P.O. Box 140, Iraklion 71004 Crete (Greece); Department of Radiology, Faculty of Medicine, University of Crete, P.O. Box 2208, Iraklion, 71003 Crete (Greece); Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Iraklion, 71003 Crete (Greece)

    2009-10-15

    radiogenic lethal cancer incidents. According to the decision model calculations, the use of CT is more favorable than the use of radiography alone or radiography with CT by a factor of 13, for low-risk 20 yr old patients, to a factor of 23, for high-risk patients younger than 80 yr old. The radiography/CT imaging strategy slightly outperforms plain radiography for high- and moderate-risk patients. Regardless of the patient age, sex, and fracture risk, the higher diagnostic accuracy obtained by the CT examination counterbalances the increase in dose compared to plain radiography or radiography followed by CT only for positive radiographs, and renders CT utilization justified and the radiographic screening redundant.

  1. Adjustment or updating of models

    Indian Academy of Sciences (India)

    25, Part 3, June 2000, pp. 235–245 ... While the model is defined in terms of these spatial parameters, ... discussed in terms of 'model order' with concern focused on whether or not the ... In other words, it is not easy to justify what the required ...

  2. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  3. Effect of misspecification of gene frequency on the two-point LOD score.

    Science.gov (United States)

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models, were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.

  4. Detection of Q-Matrix Misspecification Using Two Criteria for Validation of Cognitive Structures under the Least Squares Distance Model

    Science.gov (United States)

    Romero, Sonia J.; Ordoñez, Xavier G.; Ponsoda, Vincente; Revuelta, Javier

    2014-01-01

    Cognitive Diagnostic Models (CDMs) aim to provide information about the degree to which individuals have mastered specific attributes that underlie the success of these individuals on test items. The Q-matrix is a key element in the application of CDMs, because it contains the item-attribute links representing the cognitive structure proposed for solving…

  5. Can the benefits of physical seabed restoration justify the costs? An assessment of a disused aggregate extraction site off the Thames Estuary, UK.

    Science.gov (United States)

    Cooper, Keith; Burdon, Daryl; Atkins, Jonathan P; Weiss, Laura; Somerfield, Paul; Elliott, Michael; Turner, Kerry; Ware, Suzanne; Vivian, Chris

    2013-10-15

    Physical and biological seabed impacts can persist long after the cessation of marine aggregate dredging. Whilst small-scale experimental studies have shown that it may be possible to mitigate such impacts, it is unclear whether the costs of restoration are justified on an industrial scale. Here we explore this question using a case study off the Thames Estuary, UK. By understanding the nature and scale of persistent impacts, we identify possible techniques to restore the physical properties of the seabed, and the costs and the likelihood of success. An analysis of the ecosystem services and goods/benefits produced by the site is used to determine whether intervention is justified. Whilst a comparison of costs and benefits at this site suggests restoration would not be warranted, the analysis is site-specific. We emphasise the need to better define what is, and is not, an acceptable seabed condition post-dredging. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Guess LOD approach: sufficient conditions for robustness.

    Science.gov (United States)

    Williamson, J A; Amos, C I

    1995-01-01

    Analysis of genetic linkage between a disease and a marker locus requires specifying a genetic model describing both the inheritance pattern and the gene frequencies of the marker and trait loci. Misspecification of the genetic model is likely for etiologically complex diseases. In previous work we have shown through analytic studies that misspecifying the genetic model for disease inheritance does not lead to excess false-positive evidence for genetic linkage provided the genetic marker alleles of all pedigree members are known, or can be inferred without bias from the data. Here, under various selection or ascertainment schemes we extend these previous results to situations in which the genetic model for the marker locus may be incorrect. We provide sufficient conditions for the asymptotic unbiased estimation of the recombination fraction under the null hypothesis of no linkage, and also conditions for the limiting distribution of the likelihood ratio test for no linkage to be chi-squared. Through simulation studies we document some situations under which asymptotic bias can result when the genetic model is misspecified. Among those situations under which an excess of false-positive evidence for genetic linkage can be generated, the most common is failure to provide accurate estimates of the marker allele frequencies. We show that in most cases false-positive evidence for genetic linkage is unlikely to result solely from the misspecification of the genetic model for disease or trait inheritance.

  7. Optimal inflation for the U.S.

    OpenAIRE

    Roberto M. Billi

    2007-01-01

    What is the correctly measured inflation rate that monetary policy should aim for in the long-run? This paper characterizes the optimal inflation rate for the U.S. economy in a New Keynesian sticky-price model with an occasionally binding zero lower bound on the nominal interest rate. Real-rate and mark-up shocks jointly determine the optimal inflation rate to be positive but not large. Even allowing for the possibility of extreme model misspecification, the optimal inflation rate is robustly...

  8. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  9. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  10. How to Justify Purchase of an iPad: Users of the Latest Launch

    Directory of Open Access Journals (Sweden)

    Emílio José Montero Arruda Filho

    2014-09-01

    Full Text Available Contemporary technology innovation is increasingly based on convergence and the multiple uses of products. This change is detailed in the literature about new product development, as well as that on systems integration. This article focuses on the factors that determine the justification for using advanced technology products in which the perceived value of the product is not based on its functionality, as much as on its hedonistic or social value as an “all-in-one” product. In this study, consumer behaviors toward the Apple iPad are analyzed using netnographic evidence taken from internet postings by the consumers themselves. Since Apple initially marketed the iPad as a revolutionary product, with integrated services and features, our analysis concentrates on how consumers perceived these new, innovative features, in an effort to justify their purchase of the product. Our findings indicate that consumers’ justifications are based not only on the iPad’s functionality, but also its hedonic traits, and its similarity to the previously released innovative product, the iPhone.

  11. Evaluating remedial alternatives for an acid mine drainage stream: A model post audit

    Science.gov (United States)

    Runkel, Robert L.; Kimball, Briant A.; Walton-Day, Katherine; Verplanck, Philip L.; Broshears, Robert E.

    2012-01-01

    A post audit for a reactive transport model used to evaluate acid mine drainage treatment systems is presented herein. The post audit is based on a paired synoptic approach in which hydrogeochemical data are collected at low (existing conditions) and elevated (following treatment) pH. Data obtained under existing, low-pH conditions are used for calibration, and the resultant model is used to predict metal concentrations observed following treatment. Predictions for Al, As, Fe, H+, and Pb accurately reproduce the observed reduction in dissolved concentrations afforded by the treatment system, and the information provided in regard to standard attainment is also accurate (predictions correctly indicate attainment or nonattainment of water quality standards for 19 of 25 cases). Errors associated with Cd, Cu, and Zn are attributed to misspecification of sorbent mass (precipitated Fe). In addition to these specific results, the post audit provides insight in regard to calibration and sensitivity analysis that is contrary to conventional wisdom. Steps taken during the calibration process to improve simulations of As sorption were ultimately detrimental to the predictive results, for example, and the sensitivity analysis failed to bracket observed metal concentrations.

  12. Evaluating remedial alternatives for an acid mine drainage stream: a model post audit.

    Science.gov (United States)

    Runkel, Robert L; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L; Broshears, Robert E

    2012-01-03

    A post audit for a reactive transport model used to evaluate acid mine drainage treatment systems is presented herein. The post audit is based on a paired synoptic approach in which hydrogeochemical data are collected at low (existing conditions) and elevated (following treatment) pH. Data obtained under existing, low-pH conditions are used for calibration, and the resultant model is used to predict metal concentrations observed following treatment. Predictions for Al, As, Fe, H(+), and Pb accurately reproduce the observed reduction in dissolved concentrations afforded by the treatment system, and the information provided in regard to standard attainment is also accurate (predictions correctly indicate attainment or nonattainment of water quality standards for 19 of 25 cases). Errors associated with Cd, Cu, and Zn are attributed to misspecification of sorbent mass (precipitated Fe). In addition to these specific results, the post audit provides insight in regard to calibration and sensitivity analysis that is contrary to conventional wisdom. Steps taken during the calibration process to improve simulations of As sorption were ultimately detrimental to the predictive results, for example, and the sensitivity analysis failed to bracket observed metal concentrations.

  13. Uncertainties in repository modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  14. Uncertainties in repository modeling

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1996-01-01

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  15. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    is successfully justified by comparing predicted results with experimental data obtained in the HETEK-project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20°C and a relative humidity of RH = 100%. The model is also justified by comparing predicted creep, shrinkage, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applies in general with respect to age at loading. From a stress analysis point of view the most important finding in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  16. Some Considerations on the Partial Credit Model

    Science.gov (United States)

    Verhelst, N. D.; Verstralen, H. H. F. M.

    2008-01-01

    The Partial Credit Model (PCM) is sometimes interpreted as a model for stepwise solution of polytomously scored items, where the item parameters are interpreted as difficulties of the steps. It is argued that this interpretation is not justified. A model for stepwise solution is discussed. It is shown that the PCM is suited to model sums of binary…

  17. Robust Consumption-Investment Problem on Infinite Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Zawisza, Dariusz, E-mail: dariusz.zawisza@im.uj.edu.pl [Jagiellonian University in Krakow, Institute of Mathematics, Faculty of Mathematics and Computer Science (Poland)

    2015-12-15

    In our paper we consider an infinite horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and finally characterize the saddle point and the value function of that game using an ODE of semilinear type, for which we provide a proof of an existence and uniqueness theorem for its solution. Such an equation is of interest in its own right, since it generalizes many other equations arising in various infinite horizon optimization problems.

  18. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  19. Angular overlap model in actinides

    International Nuclear Information System (INIS)

    Gajek, Z.; Mulak, J.

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed

  20. Angular overlap model in actinides

    Energy Technology Data Exchange (ETDEWEB)

    Gajek, Z.; Mulak, J. (Polska Akademia Nauk, Wroclaw (PL). Inst. Niskich Temperatur i Badan Strukturalnych)

    1991-01-01

    Quantitative foundations of the Angular Overlap Model in actinides based on ab initio calculations of the crystal field effect in the uranium (III) (IV) and (V) ions in various crystals are presented. The calculations justify some common simplifications of the model and fix up the relations between the AOM parameters. Traps and limitations of the AOM phenomenology are discussed.

  1. Supplemental Material, PWQ42_2_747845_Choma_and_Prusaczyk - The Effects of System Justifying Beliefs on Skin-Tone Surveillance, Skin-Color Dissatisfaction, and Skin-Bleaching Behavior

    OpenAIRE

    Choma, Becky L.; Prusaczyk, Elvira

    2018-01-01

    Supplemental Material, PWQ42_2_747845_Choma_and_Prusaczyk for The Effects of System Justifying Beliefs on Skin-Tone Surveillance, Skin-Color Dissatisfaction, and Skin-Bleaching Behavior by Becky L. Choma, and Elvira Prusaczyk in Psychology of Women Quarterly

  2. Cost Modeling for Space Telescope

    Science.gov (United States)

    Stahl, H. Philip

    2011-01-01

    Parametric cost models are an important tool for planning missions, comparing concepts and justifying technology investments. This paper presents on-going efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.

  3. Scientific and technical conference Thermophysical experimental and calculating and theoretical studies to justify characteristics and safety of fast reactors. Thermophysics-2012. Book of abstracts

    International Nuclear Information System (INIS)

    Kalyakin, S.G.; Kukharchuk, O.F.; Sorokin, A.P.

    2012-01-01

    The collection includes abstracts of reports from the scientific and technical conference Thermophysics-2012, which took place on October 24-26, 2012 in Obninsk. The abstracts address the following questions: experimental and computational-theoretical studies of the thermal hydraulics of liquid-metal cooled fast reactors to justify their characteristics and safety; physico-chemical processes in systems with liquid-metal coolants (LMC); physico-chemical characteristics and thermophysical properties of LMC; development of models, computational methods and calculation codes for simulating hydrodynamics and heat and mass transfer, including impurity mass transfer, in systems with LMC; methods and means for monitoring the composition and condition of LMC in fast reactor circuits with respect to impurities and for purification from them; apparatus, equipment and technological processes for work with LMC, taking ecology into account, including fast reactor decommissioning; and measuring techniques, sensors and devices for experimental studies of heat and mass transfer in systems with LMC [ru]

  4. Tracing the Rationale Behind UML Model Change Through Argumentation

    Science.gov (United States)

    Jureta, Ivan J.; Faulkner, Stéphane

    Neglecting traceability—i.e., the ability to describe and follow the life of a requirement—is known to entail misunderstanding and miscommunication, leading to the engineering of poor quality systems. Following the simple principles that (a) changes to UML model instances ought to be justified to the stakeholders, (b) justification should proceed in a structured manner to ensure rigor in discussions, critique, and revisions of model instances, and (c) the concept of argument instantiated in a justification process ought to be well defined and understood, the present paper introduces the UML Traceability through Argumentation Method (UML-TAM) to enable the traceability of design rationale in UML while allowing the appropriateness of model changes to be checked by analysis of the structure of the arguments provided to justify such changes.

  5. When long memory meets the Kalman filter

    DEFF Research Database (Denmark)

    Grassi, Stefano; Santucci de Magistris, Paolo

    2014-01-01

    The finite sample properties of the state space methods applied to long memory time series are analyzed through Monte Carlo simulations. The state space setup makes it possible to introduce a novel modeling approach in the long memory framework, which directly tackles measurement errors and random level shifts. Missing values and several alternative sources of misspecification are also considered. It emerges that the state space methodology provides a valuable alternative for the estimation of long memory models, under different data generating processes, which are common in financial and economic...
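
    A toy illustration of the general idea, not the authors' estimator: a fractionally integrated series contaminated with measurement noise is approximated by a short autoregression cast in state-space form, where the measurement-error variance is estimated alongside the AR parameters by the Kalman filter.

```python
# Long-memory ARFIMA(0, d, 0) signal observed with additive measurement noise,
# approximated in state space by an AR(3) plus observation error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, d = 2000, 0.3

# Fractional integration as a truncated MA(inf): psi_j = psi_{j-1} * (j - 1 + d) / j.
psi = np.ones(T)
for j in range(1, T):
    psi[j] = psi[j - 1] * (j - 1 + d) / j
eps = rng.standard_normal(2 * T)
signal = np.convolve(eps, psi)[T:2 * T]          # discard burn-in
y = signal + rng.normal(0, 0.5, T)               # observed series with measurement error

# measurement_error=True adds an observation-noise variance to the state space model.
res = sm.tsa.SARIMAX(y, order=(3, 0, 0), measurement_error=True).fit(disp=False)
print(res.params)                                 # AR coefficients, state and obs variances
```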

  6. Evidence for the credibility of health economic models for health policy decision-making

    DEFF Research Database (Denmark)

    Søgaard, Rikke; Lindholt, Jes S.

    2012-01-01

    OBJECTIVE: To investigate whether the credibility of health economic models of screening for abdominal aortic aneurysms for health policy decision-making has improved since 2005 when a systematic review by Campbell et al. concluded that reporting standards were poor and there was divergence between...... benefited from general advances in health economic modelling and some improvements in reporting were noted. However, the low level of agreement between studies in model structures and assumptions, and difficulty in justifying these (convergent validity), remain a threat to the credibility of health economic...... models. Decision-makers should not accept the results of a modelling study if the methods are not fully transparent and justified. Modellers should, whenever relevant, supplement a primary report of results with a technical report detailing and discussing the methodological choices made....

  7. How can health care organisations make and justify decisions about risk reduction? Lessons from a cross-industry review and a health care stakeholder consensus development process

    International Nuclear Information System (INIS)

    Sujan, Mark A.; Habli, Ibrahim; Kelly, Tim P.; Gühnemann, Astrid; Pozzi, Simone; Johnson, Christopher W.

    2017-01-01

    Interventions to reduce risk often have an associated cost. In UK industries decisions about risk reduction are made and justified within a shared regulatory framework that requires that risk be reduced as low as reasonably practicable. In health care no such regulatory framework exists, and the practice of making decisions about risk reduction is varied and lacks transparency. Can health care organisations learn from relevant industry experiences about making and justifying risk reduction decisions? This paper presents lessons from a qualitative study undertaken with 21 participants from five industries about how such decisions are made and justified in UK industry. Recommendations were developed based on a consensus development exercise undertaken with 20 health care stakeholders. The paper argues that there is a need in health care to develop a regulatory framework and an agreed process for managing explicitly the trade-off between risk reduction and cost. The framework should include guidance about a health care specific notion of acceptable levels of risk, guidance about standardised risk reduction interventions, it should include regulatory incentives for health care organisations to reduce risk, and it should encourage the adoption of an approach for documenting explicitly an organisation's risk position. - Highlights: • Empirical description of industry perceptions on making risk reduction decisions. • Health care consensus development identified five recommendations. • Risk concept should be better integrated into safety management. • Education and awareness about risk concept are required. • Health systems need to start a dialogue about acceptable levels of risk.

  8. Justifying continuous sedation until death: a focus group study in nursing homes in Flanders, Belgium.

    Science.gov (United States)

    Rys, Sam; Deschepper, Reginald; Deliens, Luc; Mortier, Freddy; Bilsen, Johan

    2013-01-01

    Continuous Sedation until Death (CSD), the act of reducing or removing the consciousness of an incurably ill patient until death, has become a common practice in nursing homes in Flanders (Belgium). Quantitative research has suggested that CSD is not always properly applied. This qualitative study aims to explore and describe the circumstances under which nursing home clinicians consider CSD to be justified. Six focus groups were conducted including 10 physicians, 24 nurses, and 14 care assistants working in either Catholic or non-Catholic nursing homes of varying size. Refractory suffering, limited life expectancy and respecting patient autonomy are considered essential elements in deciding for CSD. However, multiple factors complicate the care of nursing home residents at the end of life, and often hinder clinicians from putting these elements into practice. Nursing home clinicians may benefit from more information and instruction about managing CSD in the complex care situations which typically occur in nursing homes. Copyright © 2013 Mosby, Inc. All rights reserved.

  9. A Closer Look at the Junior Doctor Crisis in the United Kingdom's National Health Services: Is Emigration Justifiable?

    Science.gov (United States)

    Teo, Wendy Zi Wei

    2018-07-01

    This article attempts to tackle the ethically and morally troubling issue of emigration of physicians from the United Kingdom, and whether it can be justified. Unlike most research that has already been undertaken in this field, which looks at migration from developing countries to developed countries, this article takes an in-depth look at the migration of physicians between developed countries, in particular from the United Kingdom (UK) to other developed countries such as Canada, Australia, New Zealand, and the United States (US). This examination was written in response to a current and critical crisis in the National Health Service (NHS), where impending contract changes may bring about a potential exodus of junior doctors.

  10. Doubly Robust Estimation of Optimal Dynamic Treatment Regimes

    DEFF Research Database (Denmark)

    Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne

    2014-01-01

    We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex

  11. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva, [No Value; de Jong, H; Bernard, O; Mars, NJI

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in

  12. Preliminary Multivariable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of the previously published models is tested. Cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.
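
    Cost estimating relationships of this kind are typically power laws fitted in logs. The sketch below uses simulated placeholder numbers rather than the historical mission data referred to in the abstract.

```python
# Illustrative multivariable CER sketch: regress log(cost) on log(aperture) and
# log(mass) by OLS. The numbers below are simulated placeholders, not mission data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 19
aperture_m = rng.uniform(0.3, 3.0, n)                    # primary mirror diameter (m)
mass_kg = 800 * aperture_m ** 1.8 * rng.lognormal(0, 0.2, n)
cost_musd = 50 * aperture_m ** 1.3 * (mass_kg / 1000) ** 0.5 * rng.lognormal(0, 0.3, n)
df = pd.DataFrame({"cost": cost_musd, "aperture": aperture_m, "mass": mass_kg})

# Power-law CER: cost = a * aperture^b1 * mass^b2, estimated in logs.
fit = smf.ols("np.log(cost) ~ np.log(aperture) + np.log(mass)", data=df).fit()
print(fit.params)        # exponents b1, b2 and log-intercept
print(fit.rsquared)      # fraction of log-cost variance explained
```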

  13. [Cesarean birth: justifying indication or justified concern?].

    Science.gov (United States)

    Muñoz-Enciso, José Manuel; Rosales-Aujang, Enrique; Domínguez-Ponce, Guillermo; Serrano-Díaz, César Leopoldo

    2011-02-01

    Caesarean section is the most common surgery performed in all hospitals of second level of care in the health sector, and is even more frequent in private hospitals in Mexico. Objective: to determine the behavior of caesarean section in different hospitals of the health sector in the city of Aguascalientes and to analyze its indications during the same period. A descriptive, cross-sectional study was conducted in the top four secondary-care hospitals of the health sector of the state of Aguascalientes, which together account for 81% of obstetric care in the state, from 1 September to 31 October 2008. The variables analyzed were: indication for cesarean section and its classification, previous pregnancies, marital status, gestational age, weight and one-minute Apgar score of the newborn, and birth control provided during the event. During the study period, 2,964 pregnancies after 29 weeks were recorded, of which 1,195 were resolved by Caesarean section, an overall rate of 40.3%. We found 45 different indications, which undoubtedly reflect the great diversity of views among the institutional medical staff when scheduling a cesarean section. Although each institution has different resources and a population with different characteristics, treatment protocols should be developed by the staff of each hospital, with the trial of labor as a cornerstone, and a second opinion should be requested before a caesarean section, all in an effort to reduce the frequency of cesarean section.

  14. Declarative versus imperative process modeling languages : the issue of maintainability

    NARCIS (Netherlands)

    Fahland, D.; Mendling, J.; Reijers, H.A.; Weber, B.; Weidlich, M.; Zugal, S.; Rinderle-Ma, S.; Sadiq, S.; Leymann, F.

    2010-01-01

    The rise of interest in declarative languages for process modeling both justifies and demands empirical investigations into their presumed advantages over more traditional, imperative alternatives. Our concern in this paper is with the ease of maintaining business process models, for example due to

  15. A thermal model of the economy

    Science.gov (United States)

    Arroyo Colon, Luis Balbino

    The motivation for this work came from an interest in Economics (particularly since the 2008 economic downturn) and a desire to use the tools of physics in a field that has not been the subject of great exploration. We propose a model of economics in analogy to thermodynamics and introduce the concept of the Value Multiplier as a fundamental addition to any such model. Firstly, we attempt to make analogies between some economic concepts and fundamental concepts of thermal physics. Then we introduce the value multiplier and justify its existence in our system; the value multiplier allows us to account for some intangible, psychological elements of the value of goods and services. We finally bring all the elements together in a qualitative system. In particular, we attempt to make an analogy with the Keynesian Multiplier that justifies the usefulness of fiscal stimulus in severe economic downturns.

  16. Forecasting Performance of Asymmetric GARCH Stock Market Volatility Models

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-12-01

    Full Text Available We investigate the asymmetry between positive and negative returns in their effect on the conditional variance of the stock market index and incorporate these characteristics to form an out-of-sample volatility forecast. Contrary to prior evidence, however, the results in this paper suggest that no asymmetric GARCH model is superior to the basic GARCH(1,1) model. It is our prior knowledge that, for equity returns, it is unlikely that positive and negative shocks have the same impact on the volatility. In order to reflect this intuition, we implement three diagnostic tests for volatility models: the Sign Bias Test, the Negative Size Bias Test, and the Positive Size Bias Test, and the tests against the alternatives of QGARCH and GJR-GARCH. The asymmetry test results indicate that the sign and the size of the unexpected return shock do not influence current volatility differently, which contradicts our presumption that there are asymmetric effects in the stock market volatility. This result is in line with various diagnostic tests which are designed to determine whether the GARCH(1,1) volatility estimates adequately represent the data. The diagnostic tests in section 2 indicate that the GARCH(1,1) model for weekly KOSPI returns is robust to the misspecification test. We also investigate two representative asymmetric GARCH models, the QGARCH and GJR-GARCH models, for out-of-sample forecasting performance. The out-of-sample forecasting ability test reveals that no single model clearly outperforms. It is seen that the GJR-GARCH and QGARCH models give mixed results in forecasting ability on all four criteria across all forecast horizons considered. Also, the predictive accuracy test of Diebold and Mariano based on both absolute and squared prediction errors suggests that the forecasts from the linear and asymmetric GARCH models need not be significantly different from each other.
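
    The symmetric-versus-asymmetric comparison can be sketched with the arch package; the simulated returns below stand in for the weekly KOSPI series, and the comparison is simplified relative to the paper's bias tests and Diebold-Mariano analysis.

```python
# Sketch comparing a symmetric GARCH(1,1) with an asymmetric GJR-GARCH(1,1,1).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
returns = 2.0 * rng.standard_t(df=8, size=1500)    # placeholder weekly returns (%)

garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
gjr = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")   # o=1 adds asymmetry

print(garch.aic, gjr.aic)            # in-sample comparison
print(gjr.params["gamma[1]"])        # asymmetry term; near zero means no leverage effect

# One-step-ahead out-of-sample variance forecasts from each model.
print(garch.forecast(horizon=1).variance.iloc[-1, 0],
      gjr.forecast(horizon=1).variance.iloc[-1, 0])
```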

  17. Ethical analysis of the justifiability of labelling with COPD for smoking cessation.

    Science.gov (United States)

    Kotz, D; Vos, R; Huibers, M J H

    2009-09-01

    Spirometry for early detection of chronic obstructive pulmonary disease (COPD) and smoking cessation is criticised because of the potential negative effects of labelling with disease. To assess the opinions of smokers with mild to moderate COPD on the effectiveness of spirometry for smoking cessation, the justification of early detection of airflow limitation in smokers, and the impact of confrontation with COPD. Qualitative study with data from a randomised controlled trial. General population of Dutch and Belgian Limburg. Semistructured ethical exit interviews were conducted with 205 smokers who were motivated to quit smoking and had no prior diagnosis of COPD but were found to have airflow limitation by means of spirometry. They received either (1) counselling, including labelling with COPD, plus nortriptyline for smoking cessation, (2) counselling excluding labelling with COPD, plus nortriptyline for smoking cessation, or (3) care as usual for smoking cessation by the general practitioner, without labelling with COPD. Of the participants, 177 (86%) agreed or completely agreed that it is justified to measure lung function in heavy smokers. These participants argued that measuring lung function raises consciousness of the negative effects of smoking, helps to prevent disease or increases motivation to stop smoking. Most of the 18 participants who disagreed argued that routinely measuring lung function in smokers would interfere with freedom of choice. Labelling with disease is probably a less important issue in the discussion about the pros and cons of early detection of COPD.

  18. How to define and build an effective cyber threat intelligence capability how to understand, justify and implement a new approach to security

    CERN Document Server

    Dalziel, Henry; Carnall, James

    2014-01-01

    Intelligence-Led Security: How to Understand, Justify and Implement a New Approach to Security is a concise review of the concept of Intelligence-Led Security. Protecting a business, including its information and intellectual property, physical infrastructure, employees, and reputation, has become increasingly difficult. Online threats come from all sides: internal leaks and external adversaries; domestic hacktivists and overseas cybercrime syndicates; targeted threats and mass attacks. And these threats run the gamut from targeted to indiscriminate to entirely accidental. Amo

  19. Modeling of hydrogen interactions with beryllium

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R. [Lockheed Martin Idaho Technologies Co., Idaho Falls, ID (United States)

    1998-01-01

    In this paper, improved mathematical models are developed for hydrogen interactions with beryllium. This includes the saturation effect observed for high-flux implantation of ions from plasmas and retention of tritium produced from neutronic transmutations in beryllium. Use of the models developed is justified by showing how they can replicate experimental data using the TMAP4 tritium transport code. (author)

  20. Discrimination of Semi-Quantitative Models by Experiment Selection: Method Application in Population Biology

    NARCIS (Netherlands)

    Vatcheva, Ivayla; Bernard, Olivier; de Jong, Hidde; Gouze, Jean-Luc; Mars, Nicolaas; Nebel, B.

    2001-01-01

    Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of

  1. Statistical models for brain signals with properties that evolve across trials

    KAUST Repository

    Ombao, Hernando

    2017-12-07

    Most neuroscience cognitive experiments involve repeated presentations of various stimuli across several minutes or a few hours. It has been observed that brain responses, even to the same stimulus, evolve over the course of the experiment. These changes in brain activation and connectivity are believed to be associated with learning and/or habituation. In this paper, we present two general approaches to modeling dynamic brain connectivity using electroencephalograms (EEGs) recorded across replicated trials in an experiment. The first approach is the Markovian regime-switching vector autoregressive model (MS-VAR) which treats EEGs as realizations of an underlying brain process that switches between different states both within a trial and across trials in the entire experiment. The second is the slowly evolutionary locally stationary process (SEv-LSP) which characterizes the observed EEGs as a mixture of oscillatory activities at various frequency bands. The SEv-LSP model captures the dynamic nature of the amplitudes of the band-oscillations and cross-correlations between them. The MS-VAR model is able to capture abrupt changes in the dynamics while the SEv-LSP directly gives interpretable results. Moreover, it is nonparametric and hence does not suffer from model misspecification. For both of these models, time-evolving connectivity metrics in the frequency domain are derived from the model parameters for both functional and effective connectivity. We illustrate these two models for estimating cross-trial connectivity in selective attention using EEG data from an oddball paradigm auditory experiment where the goal is to characterize the evolution of brain responses to target stimuli and to standard tones presented randomly throughout the entire experiment. The results suggest dynamic changes in connectivity patterns over trials with inter-subject variability.

  2. Bayesian inference in an extended SEIR model with nonparametric disease transmission rate: an application to the Ebola epidemic in Sierra Leone.

    Science.gov (United States)

    Frasso, Gianluca; Lambert, Philippe

    2016-10-01

    Summary: The 2014 Ebola outbreak in Sierra Leone is analyzed using a susceptible-exposed-infectious-removed (SEIR) epidemic compartmental model. The discrete time-stochastic model for the epidemic evolution is coupled to a set of ordinary differential equations describing the dynamics of the expected proportions of subjects in each epidemic state. The unknown parameters are estimated in a Bayesian framework by combining data on the number of new (laboratory confirmed) Ebola cases reported by the Ministry of Health and prior distributions for the transition rates elicited using information collected by the WHO during the follow-up of specific Ebola cases. The time-varying disease transmission rate is modeled in a flexible way using penalized B-splines. Our framework represents a valuable stochastic tool for the study of an epidemic dynamic even when only irregularly observed and possibly aggregated data are available. Simulations and the analysis of the 2014 Sierra Leone Ebola data highlight the merits of the proposed methodology. In particular, the flexible modeling of the disease transmission rate makes the estimation of the effective reproduction number robust to the misspecification of the initial epidemic states and to underreporting of the infectious cases. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
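
    The forward model underlying such an analysis can be sketched as a deterministic SEIR system with a smooth time-varying transmission rate. The spline coefficients and epidemiological parameters below are illustrative assumptions; in the paper they are estimated in a Bayesian framework with penalized B-splines.

```python
# SEIR model with a time-varying transmission rate beta(t) built from a cubic
# B-spline; coefficients and rates are placeholders, not estimates from the paper.
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import BSpline

# Cubic B-spline for beta(t) over 300 days (clamped knots, 9 coefficients).
knots = np.concatenate(([0, 0, 0, 0], np.linspace(50, 250, 5), [300, 300, 300, 300]))
coefs = np.array([0.35, 0.30, 0.22, 0.15, 0.12, 0.10, 0.10, 0.10, 0.10])
beta_t = BSpline(knots, coefs, k=3, extrapolate=False)

sigma, gamma, N = 1 / 9.0, 1 / 7.0, 6_000_000    # incubation rate, removal rate, population

def seir(y, t):
    S, E, I, R = y
    beta = float(np.nan_to_num(beta_t(t), nan=coefs[-1]))   # guard outside-knot evaluations
    dS = -beta * S * I / N
    dE = -dS - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

t = np.linspace(0, 300, 301)
y0 = [N - 20, 10, 10, 0]
S, E, I, R = odeint(seir, y0, t).T
print(I.max(), t[I.argmax()])      # peak prevalence and its timing
```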

  3. Business capital accumulation and the user cost: is there a heterogeneity bias?

    OpenAIRE

    FATICA SERENA

    2016-01-01

    Using data from 23 market economy sectors across 10 OECD countries over the period 1984-2007 we show that the homogeneity assumption underlying empirical models for aggregate capital accumulation may lead to misspecification. Thus, we adopt a fully disaggregated approach – by asset types and sectors – to estimate the responsiveness of investment to the tax-adjusted user cost of capital. In this framework, we are able to link unobserved common factors to the nature of the shocks affecting the ...

  4. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  5. Numerical Modeling of Rotary Kiln Productivity Increase

    NARCIS (Netherlands)

    Romero-Valle, M.A.; Pisaroni, M.; Van Puyvelde, D.; Lahaye, D.J.P.; Sadi, R.

    2013-01-01

    Rotary kilns are used in many industrial processes ranging from cement manufacturing to waste incineration. The operating conditions vary widely depending on the process. While there are many models available within the literature and industry, the wide range of operating conditions justifies

  6. Can context justify an ethical double standard for clinical research in developing countries?

    Directory of Open Access Journals (Sweden)

    Landes Megan

    2005-07-01

    Full Text Available Abstract Background The design of clinical research deserves special caution so as to safeguard the rights of participating individuals. While the international community has agreed on ethical standards for the design of research, these frameworks still remain open to interpretation, revision and debate. Recently a breach in the consensus of how to apply these ethical standards to research in developing countries has occurred, notably beginning with the 1994 placebo-controlled trials to reduce maternal to child transmission of HIV-1 in Africa, Asia and the Caribbean. The design of these trials sparked intense debate with the inclusion of a placebo-control group despite the existence of a 'gold standard' and trial supporters grounded their justifications of the trial design on the context of scarcity in resource-poor settings. Discussion These 'contextual' apologetics are arguably an ethical loophole inherent in current bioethical methodology. However, this convenient appropriation of 'contextual' analysis simply fails to acknowledge the underpinnings of feminist ethical analysis upon which it must stand. A more rigorous analysis of the political, social, and economic structures pertaining to the global context of developing countries reveals that the bioethical principles of beneficence and justice fail to be met in this trial design. Conclusion Within this broader, and theoretically necessary, understanding of context, it becomes impossible to justify an ethical double standard for research in developing countries.

  7. A Lacanian Reading of the Two Novels The Scarlet Letter And Private Memoirs And Confessions of A Justified Sinner

    Directory of Open Access Journals (Sweden)

    Marjan Yazdanpanahi

    2016-07-01

    Full Text Available This paper discusses two novels, The Private Memoirs and Confessions of a Justified Sinner and The Scarlet Letter, written by James Hogg and Nathaniel Hawthorne, from the perspective of Jacques Lacan's theories: the mirror stage, the-name-of-the-father and desire. The mirror stage refers to historical value and an essential libidinal relationship with the body-image. The-name-of-the-father is defined as the prohibitive role of the father as the one who lays down the incest taboo in the Oedipus complex. Meanwhile, desire is neither the appetite for satisfaction, nor the demand for love, but the difference that results from the subtraction of the first from the second.

  8. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  9. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.
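
    A small numerical sketch of the PMSF idea (not the IQ-TREE implementation): given mixture-class amino-acid profiles and per-site posterior class weights from a preliminary fit, the site-specific profile is simply the posterior-weighted average. The dimensions and random numbers below are assumptions.

      # Sketch of the posterior mean site frequency (PMSF) idea: each site receives
      # the posterior-weighted average of the mixture-class amino-acid profiles.
      # Dimensions and random inputs are assumed stand-ins for a real mixture fit.
      import numpy as np

      rng = np.random.default_rng(1)
      n_classes, n_sites, n_aa = 4, 10, 20

      # Class frequency profiles (rows sum to 1), e.g. from an empirical profile mixture
      class_profiles = rng.dirichlet(np.ones(n_aa), size=n_classes)

      # Posterior probability of each class at each site, given data and a guide tree
      site_posteriors = rng.dirichlet(np.ones(n_classes), size=n_sites)

      # PMSF profile per site: shape (n_sites, n_aa); each row again sums to 1
      pmsf_profiles = site_posteriors @ class_profiles
      assert np.allclose(pmsf_profiles.sum(axis=1), 1.0)

      # These fixed site profiles then replace the full mixture during tree search.
      print(pmsf_profiles.shape)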

  10. Current Evidence to Justify, and the Methodological Considerations for a Randomised Controlled Trial Testing the Hypothesis that Statins Prevent the Malignant Progression of Barrett's Oesophagus

    Directory of Open Access Journals (Sweden)

    David Thurtle

    2014-12-01

    Full Text Available Barrett’s oesophagus is the predominant risk factor for oesophageal adenocarcinoma, a cancer whose incidence is increasing and which has a poor prognosis. This article reviews the latest experimental and epidemiological evidence justifying the development of a randomised controlled trial investigating the hypothesis that statins prevent the malignant progression of Barrett’s oesophagus, and explores the methodological considerations for such a trial. The experimental evidence suggests anti-carcinogenic properties of statins on oesophageal cancer cell lines, based on the inhibition of the mevalonate pathway and the production of pro-apoptotic proteins. The epidemiological evidence reports inverse associations between statin use and the incidence of oesophageal carcinoma in both general population and Barrett’s oesophagus cohorts. Such a randomised controlled trial would be a large multi-centre trial, probably investigating simvastatin, given the wide clinical experience with this drug, relatively low side-effect profile and low financial cost. As with any clinical trial, high adherence is important, which could be increased with therapy, patient, doctor and system-focussed interventions. We would suggest there is now sufficient evidence to justify a full clinical trial that attempts to prevent this aggressive cancer in a high-risk population.

  11. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.
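
    A minimal sketch of the modelling idea follows: a two-state (parked/driving) Markov chain whose trip-start probability varies with time of day. The diurnal shape and probabilities are assumed for illustration; the paper parameterizes these time-varying probabilities with B-splines fitted to observed vehicle usage.

      # Sketch: two-state inhomogeneous Markov chain (0 = parked, 1 = driving) with a
      # time-of-day dependent probability of starting a trip. The diurnal shape below
      # is assumed; the paper represents such probabilities with fitted B-splines.
      import numpy as np

      rng = np.random.default_rng(2)
      steps_per_day = 96                         # 15-minute resolution

      def p_start(step):                         # probability of starting a trip
          hour = (step % steps_per_day) * 24 / steps_per_day
          return 0.02 + 0.10 * np.exp(-((hour - 8) ** 2) / 2) + 0.10 * np.exp(-((hour - 17) ** 2) / 2)

      def p_end(step):                           # probability of ending an ongoing trip
          return 0.3

      state, states = 0, []
      for step in range(7 * steps_per_day):      # simulate one week
          if state == 0:
              state = int(rng.random() < p_start(step))
          else:
              state = int(rng.random() >= p_end(step))
          states.append(state)

      print("fraction of time driving:", np.mean(states))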

  12. Are more restrictive food cadmium standards justifiable health safety measures or opportunistic barriers to trade? An answer from economics and public health

    International Nuclear Information System (INIS)

    Figueroa B, Eugenio

    2008-01-01

    In the past, Cd regulations have imposed trade restrictions on foodstuffs from some developing countries seeking to access markets in the developed world and in recent years, there has been a trend towards imposing more rigorous standards. This trend seems to respond more to public and private sectors strategies in some developed countries to create disguised barriers to trade and to improve market competitiveness for their industries, than to scientifically justified health precautions (sanitary and phytosanitary measures) and/or technical barriers to trade acceptable under the Uruguay Round Agreement of the WTO. Applying more rigorous Cd standards in some developed countries will not only increase production costs in developing countries but it will also have a large impact on their economies highly dependent on international agricultural markets. In the current literature there are large uncertainties in the cause-effect relationship between current levels of Cd intakes and eventual health effects in human beings; even the risk of Cd to kidney function is under considerable debate. Recent works on the importance of zinc:Cd ratio rather than Cd levels alone to determine Cd risk factors, on the one hand, and on the declining trends of Cd level in foods and soils, on the other, also indicate a lack of scientific evidence justifying more restrictive cadmium standards. This shows that developing countries should fight for changing and making more transparent the current international structures and procedures for setting sanitary and phytosanitary measures and technical barriers to trade

  13. Are more restrictive food cadmium standards justifiable health safety measures or opportunistic barriers to trade? An answer from economics and public health.

    Science.gov (United States)

    Figueroa B, Eugenio

    2008-01-15

    In the past, Cd regulations have imposed trade restrictions on foodstuffs from some developing countries seeking to access markets in the developed world and in recent years, there has been a trend towards imposing more rigorous standards. This trend seems to respond more to public and private sectors strategies in some developed countries to create disguised barriers to trade and to improve market competitiveness for their industries, than to scientifically justified health precautions (sanitary and phytosanitary measures) and/or technical barriers to trade acceptable under the Uruguay Round Agreement of the WTO. Applying more rigorous Cd standards in some developed countries will not only increase production costs in developing countries but it will also have a large impact on their economies highly dependent on international agricultural markets. In the current literature there are large uncertainties in the cause-effect relationship between current levels of Cd intakes and eventual health effects in human beings; even the risk of Cd to kidney function is under considerable debate. Recent works on the importance of zinc:Cd ratio rather than Cd levels alone to determine Cd risk factors, on the one hand, and on the declining trends of Cd level in foods and soils, on the other, also indicate a lack of scientific evidence justifying more restrictive cadmium standards. This shows that developing countries should fight for changing and making more transparent the current international structures and procedures for setting sanitary and phytosanitary measures and technical barriers to trade.

  14. [Hemolytic disease of the newborn has not vanished from Finland--routine protection of RhD negative mothers during pregnancy is justifiable].

    Science.gov (United States)

    Sainio, Susanna; Kuosmanen, Malla

    2012-01-01

    Prophylaxis of RhD negative mothers with anti-D immunoglobulin after childbirth is the most important procedure reducing the immunization of the mother and the risk of severe hemolytic disease of the newborn. In spite of this, anti-D antibodies having relevance to pregnancy are later detected in 1.8% of RhD negative mothers. Half of these cases could be prevented by routine anti-D prophylaxis given to the mothers during weeks 28 to 34 of pregnancy. Convincing evidence of the effectiveness of this measure has accumulated in the last few years, and application of the treatment is justified also in Finland.

  15. On an elastic dissipation model as continuous approximation for discrete media

    Directory of Open Access Journals (Sweden)

    I. V. Andrianov

    2006-01-01

    Full Text Available Construction of an accurate continuous model for discrete media is an important topic in various fields of science. We deal with a 1D differential-difference equation governing the behavior of an n-mass oscillator with linear relaxation. It is known that a string-type approximation is justified for the low part of the frequency spectrum of a continuous model, but for free and forced vibrations a solution of discrete and continuous models can be quite different. A difference operator makes analysis difficult due to its nonlocal form. Approximate equations can be obtained by replacing the difference operators via a local derivative operator. Although application of a model with derivative of more than second order improves the continuous model, a higher order of the approximated differential equation seriously complicates the solution of the continuous problem. It is known that accuracy of the approximation can dramatically increase using Padé approximations. In this paper, one- and two-point Padé approximations suitable for justifying the choice of structural damping models are used.

  16. Are chest radiographs justified in pre-employment examinations. Presentation of legal position and medical evidence based on 1760 cases

    International Nuclear Information System (INIS)

    Ladd, S.C.; Krause, U.; Ladd, M.E.

    2006-01-01

    The legal and medical basis for chest radiographs as part of pre-employment examinations (PEE) at a University Hospital is evaluated. The radiographs are primarily performed to exclude infectious lung disease. A total of 1760 consecutive chest radiographs performed as a routine part of PEEs were reviewed retrospectively. Pathologic findings were categorized as "nonrelevant" or "relevant." No positive finding with respect to tuberculosis or any other infectious disease was found; 94.8% of the chest radiographs were completely normal. Only five findings were regarded as "relevant" for the individual. No employment-relevant diagnosis occurred. The performance of chest radiography as part of a PEE is most often not justified. The practice is expensive, can violate national and European law, and lacks medical justification. (orig.) [de]

  17. Statistical models for brain signals with properties that evolve across trials.

    Science.gov (United States)

    Ombao, Hernando; Fiecas, Mark; Ting, Chee-Ming; Low, Yin Fen

    2017-12-07

    Most neuroscience cognitive experiments involve repeated presentations of various stimuli across several minutes or a few hours. It has been observed that brain responses, even to the same stimulus, evolve over the course of the experiment. These changes in brain activation and connectivity are believed to be associated with learning and/or habituation. In this paper, we present two general approaches to modeling dynamic brain connectivity using electroencephalograms (EEGs) recorded across replicated trials in an experiment. The first approach is the Markovian regime-switching vector autoregressive model (MS-VAR) which treats EEGs as realizations of an underlying brain process that switches between different states both within a trial and across trials in the entire experiment. The second is the slowly evolutionary locally stationary process (SEv-LSP) which characterizes the observed EEGs as a mixture of oscillatory activities at various frequency bands. The SEv-LSP model captures the dynamic nature of the amplitudes of the band-oscillations and cross-correlations between them. The MS-VAR model is able to capture abrupt changes in the dynamics while the SEv-LSP directly gives interpretable results. Moreover, it is nonparametric and hence does not suffer from model misspecification. For both of these models, time-evolving connectivity metrics in the frequency domain are derived from the model parameters for both functional and effective connectivity. We illustrate these two models for estimating cross-trial connectivity in selective attention using EEG data from an oddball paradigm auditory experiment where the goal is to characterize the evolution of brain responses to target stimuli and to standard tones presented randomly throughout the entire experiment. The results suggest dynamic changes in connectivity patterns over trials with inter-subject variability. Copyright © 2017. Published by Elsevier Inc.

  18. Preliminary Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd

    2009-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single and multi-variable space telescope cost models.

  19. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
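
    A toy sketch of the set-up, under assumed effect sizes and variances: simulate a small stepped wedge trial and fit the "standard model" (random cluster intercept, fixed period and intervention effects) with statsmodels. The recommended extension, a random effect for time period, is indicated in a comment.

      # Sketch: simulate a small stepped wedge trial (3 groups, 2 periods) and fit the
      # "standard" mixed model (random cluster intercept, fixed period and intervention
      # effects) with statsmodels. All effect sizes and variances are assumptions.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      rows = []
      for cluster in range(18):
          group = cluster % 3                    # group 0 is treated in both periods
          u = rng.normal(0, 0.5)                 # cluster-level random intercept
          for period in (0, 1):
              treat = int(group == 0 or period == 1)
              for _ in range(20):                # 20 subjects per cluster-period
                  y = 1.0 + 0.3 * period + 0.5 * treat + u + rng.normal(0, 1.0)
                  rows.append({"cluster": cluster, "period": period, "treat": treat, "y": y})
      df = pd.DataFrame(rows)

      fit = smf.mixedlm("y ~ treat + C(period)", data=df, groups=df["cluster"]).fit()
      print(fit.params["treat"])
      # The recommendation discussed above amounts to also allowing a random period
      # effect, e.g. by passing vc_formula={"period": "0 + C(period)"} to mixedlm.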

  20. Is febrile neutropenia prophylaxis with granulocyte-colony stimulating factors economically justified for adjuvant TC chemotherapy in breast cancer?

    Science.gov (United States)

    Skedgel, Chris; Rayson, Daniel; Younis, Tallal

    2016-01-01

    Febrile neutropenia (FN) during adjuvant chemotherapy is associated with morbidity, mortality risk, and substantial cost, and subsequent chemotherapy dose reductions may result in poorer outcomes. Patients at high risk of, or who develop FN, often receive prophylaxis with granulocyte colony-stimulating factors (G-CSF). We investigated whether different prophylaxis strategies with G-CSF offered favorable value-for-money. We developed a decision model to estimate the short- and long-term costs and outcomes of a hypothetical cohort of women with breast cancer receiving adjuvant taxotere + cyclophosphamide (TC) chemotherapy. The short-term phase estimated upfront costs and FN risks with adjuvant TC chemotherapy without G-CSF prophylaxis (i.e., chemotherapy dose reductions) as well as with secondary and primary G-CSF prophylaxis strategies. The long-term phase estimated the expected costs and quality-adjusted life years (QALYs) for patients who completed adjuvant TC chemotherapy with or without one or more episodes of FN. Secondary G-CSF was associated with lower costs and greater QALY gains than a no G-CSF strategy. Primary G-CSF appears likely to be cost-effective relative to secondary G-CSF at FN rates greater than 28%, assuming some loss of chemotherapy efficacy at lower dose intensities. The cost-effectiveness of primary vs. secondary G-CSF was sensitive to FN risk and mortality, and loss of chemotherapy efficacy following FN. Secondary G-CSF is more effective and less costly than a no G-CSF strategy. Primary G-CSF may be justified at higher willingness-to-pay thresholds and/or higher FN risks, but this threshold FN risk appears to be higher than the 20% rate recommended by current clinical guidelines.

  1. Some variations of the Kristallin-I near-field model

    International Nuclear Information System (INIS)

    Smith, P.A.; Curti, E.

    1995-11-01

    The Kristallin-I project is an integrated analysis of the final disposal of vitrified high-level radioactive waste (HLW) in the crystalline basement of Northern Switzerland. It includes an analysis of the radiological consequences of radionuclide release from a repository. This analysis employs a chain of independent models for the near-field, geosphere and biosphere. In constructing these models, processes are incorporated that are believed to be relevant to repository safety, while other processes are neglected. In the present report, a set of simplified, steady-state models of the near-field is developed to investigate the possible effects of specific processes which are neglected in the time-dependent Kristallin-I near-field model. These processes are neglected, either because they are thought unlikely to occur to a significant degree, or because they are likely to make a positive contribution to the performance of the near-field barrier to radionuclide migration, but are insufficiently understood to justify incorporating them in a safety assessment. The aim of this report is to investigate whether the arguments for neglecting these processes in the Kristallin-I near-field model can be justified. (author) figs., tabs., refs

  2. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends towards strengthening the general-educational and worldview functions of computer science call for additional research into the…

  3. Measuring the potential of individual airports for pandemic spread over the world airline network.

    Science.gov (United States)

    Lawyer, Glenn

    2016-02-09

    Massive growth in human mobility has dramatically increased the risk and rate of pandemic spread. Macro-level descriptors of the topology of the World Airline Network (WAN) explain middle and late stage dynamics of pandemic spread mediated by this network, but necessarily regard early stage variation as stochastic. We propose that much of this early stage variation can be explained by appropriately characterizing the local network topology surrounding an outbreak's debut location. Based on a model of the WAN derived from public data, we measure for each airport the expected force of infection (AEF) which a pandemic originating at that airport would generate, assuming an epidemic process which transmits from airport to airport via scheduled commercial flights. We observe, for a subset of world airports, the minimum transmission rate at which a disease becomes pandemically competent at each airport. We also observe, for a larger subset, the time until a pandemically competent outbreak achieves pandemic status given its debut location. Observations are generated using a highly sophisticated metapopulation reaction-diffusion simulator under a disease model known to well replicate the 2009 influenza pandemic. The robustness of the AEF measure to model misspecification is examined by degrading the underlying model WAN. AEF powerfully explains pandemic risk, showing correlation of 0.90 to the transmission level needed to give a disease pandemic competence, and correlation of 0.85 to the delay until an outbreak becomes a pandemic. The AEF is robust to model misspecification. For 97 % of airports, removing 15 % of airports from the model changes their AEF metric by less than 1 %. Appropriately summarizing the size, shape, and diversity of an airport's local neighborhood in the WAN accurately explains much of the macro-level stochasticity in pandemic outcomes.
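
    Below is a simplified, unweighted sketch of an expected-force-style node metric on a toy graph with networkx: it enumerates clusters reachable after two transmissions from a seed node and takes the entropy of the normalised cluster degrees. The AEF in the study is computed on the weighted World Airline Network following Lawyer's definition in full; this toy version only conveys the idea.

      # Simplified, unweighted sketch of an expected-force-style metric: enumerate the
      # clusters reachable after two transmissions from a seed node and take the entropy
      # of the normalised cluster degrees. The AEF itself is computed on the weighted
      # World Airline Network; this toy version is for illustration only.
      import math
      import networkx as nx

      def expected_force(G, seed):
          cluster_degrees = []
          for a in G.neighbors(seed):
              # the second transmission can originate from the seed or from a
              candidates = (set(G.neighbors(seed)) | set(G.neighbors(a))) - {seed, a}
              for b in candidates:
                  cluster = {seed, a, b}
                  d = sum(1 for u in cluster for v in G.neighbors(u) if v not in cluster)
                  cluster_degrees.append(d)
          total = sum(cluster_degrees)
          return -sum((d / total) * math.log(d / total) for d in cluster_degrees if d > 0)

      G = nx.barabasi_albert_graph(200, 2, seed=0)
      print({n: round(expected_force(G, n), 2) for n in list(G.nodes)[:5]})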

  4. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach that considers the treatment effect to be a random variable having some distribution may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
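
    A toy sketch in the spirit of such a two-stage design (not Whitehead et al.'s exact criterion): at an interim look, a Beta-Binomial predictive probability of ending with a "conclusive" posterior is computed for candidate second-stage sample sizes. The prior, interim data, target rate and cut-offs are assumptions.

      # Toy sketch of a two-stage Bayesian design: after an interim look, compute the
      # Beta-Binomial predictive probability that the final posterior will be
      # "conclusive" for candidate stage-2 sample sizes. Priors, thresholds and interim
      # data are assumptions, not the criterion of Whitehead et al. (2008).
      from scipy import stats

      a0, b0 = 1, 1                   # Beta(1, 1) prior on the response probability
      n1, x1 = 20, 12                 # interim data: 12 responses in 20 patients (assumed)
      a1, b1 = a0 + x1, b0 + n1 - x1

      def prob_conclusive(n2, p_target=0.5, posterior_cut=0.95):
          """Predictive probability that, after n2 further patients, the posterior
          probability of a response rate above p_target exceeds posterior_cut."""
          pred = stats.betabinom(n2, a1, b1)          # predictive for stage-2 responses
          total = 0.0
          for x2 in range(n2 + 1):
              post = stats.beta(a1 + x2, b1 + n2 - x2)
              if post.sf(p_target) > posterior_cut:
                  total += pred.pmf(x2)
          return total

      for n2 in (10, 20, 40, 80):
          print(n2, round(prob_conclusive(n2), 3))
      # Choose the smallest n2 whose predictive probability meets the design target.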

  5. Integrated population modeling of black bears in Minnesota: implications for monitoring and management.

    Science.gov (United States)

    Fieberg, John R; Shertzer, Kyle W; Conn, Paul B; Noyce, Karen V; Garshelis, David L

    2010-08-12

    Wildlife populations are difficult to monitor directly because of costs and logistical challenges associated with collecting informative abundance data from live animals. By contrast, data on harvested individuals (e.g., age and sex) are often readily available. Increasingly, integrated population models are used for natural resource management because they synthesize various relevant data into a single analysis. We investigated the performance of integrated population models applied to black bears (Ursus americanus) in Minnesota, USA. Models were constructed using sex-specific age-at-harvest matrices (1980-2008), data on hunting effort and natural food supplies (which affects hunting success), and statewide mark-recapture estimates of abundance (1991, 1997, 2002). We compared this approach to Downing reconstruction, a commonly used population monitoring method that utilizes only age-at-harvest data. We first conducted a large-scale simulation study, in which our integrated models provided more accurate estimates of population trends than did Downing reconstruction. Estimates of trends were robust to various forms of model misspecification, including incorrectly specified cub and yearling survival parameters, age-related reporting biases in harvest data, and unmodeled temporal variability in survival and harvest rates. When applied to actual data on Minnesota black bears, the model predicted that harvest rates were negatively correlated with food availability and positively correlated with hunting effort, consistent with independent telemetry data. With no direct data on fertility, the model also correctly predicted 2-point cycles in cub production. Model-derived estimates of abundance for the most recent years provided a reasonable match to an empirical population estimate obtained after modeling efforts were completed. Integrated population modeling provided a reasonable framework for synthesizing age-at-harvest data, periodic large-scale abundance estimates, and

  6. Mankiw's Puzzle on Consumer Durables: A Misspecification

    OpenAIRE

    Tam Bang Vu

    2005-01-01

    Mankiw (1982) shows that consumer durables expenditures should follow a linear ARMA(1,1) process, but the data analyzed supports an AR(1) process instead; thus, a puzzle. In this paper, we employ a more general utility function than Mankiw's quadratic one. Further, the disturbance and depreciation rate are respecified, respectively, as multiplicative and stochastic. The analytical consequence is a nonlinear ARMA(infinity,1) process, which implies that the linear ARMA(1,1) is a misspecificatio...

  7. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
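
    A minimal sketch of the underlying statistic: a generalized likelihood ratio (GLR) detector for a sustained mean shift in the residuals between a process and its nominal linear model. The window length, noise level, threshold and simulated shift are assumptions and have no relation to the steam-generator model of the report.

      # Toy generalized likelihood ratio (GLR) detector: monitor the residuals between
      # a process and its nominal linear model and flag a sustained mean shift. Window
      # length, noise level, shift size and threshold are assumptions.
      import numpy as np

      rng = np.random.default_rng(4)
      sigma = 1.0
      residuals = np.concatenate([rng.normal(0.0, sigma, 300),    # model still valid
                                  rng.normal(1.5, sigma, 100)])   # operating point moved

      def glr_statistic(r, sigma):
          """Maximum over change times k of the likelihood ratio for a mean jump at k."""
          n, best = len(r), 0.0
          for k in range(1, n):
              m = r[k:].mean()                       # MLE of the post-change mean
              best = max(best, (n - k) * m ** 2 / (2 * sigma ** 2))
          return best

      threshold, window = 10.0, 80                   # assumed decision rule
      for t in range(window, len(residuals)):
          g = glr_statistic(residuals[t - window:t], sigma)
          if g > threshold:
              print("re-linearisation suggested at sample", t, "GLR =", round(g, 1))
              break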

  8. Dear Critics: Addressing Concerns and Justifying the Benefits of Photography as a Research Method

    Directory of Open Access Journals (Sweden)

    Kyle Elizabeth Miller

    2015-08-01

    Full Text Available Photography serves as an important tool for researchers to learn about the contextualized lives of individuals. This article explores the process of integrating photo elicitation interviews (PEI into research involving children and families. Much literature is dedicated to the general debate surrounding the ethics of visual methods in research, with little attention directed at the actual process of gaining study approval and publishing one's findings. There are two main critiques that researchers must face in order to conduct and disseminate studies involving visual images—ethics committees and peer reviewers. In this article, I identify and discuss some of the challenges that emerged across gaining protocol approval from an ethics committee in the United States. Ethical concerns and restrictions related to the use of photography can delay data collection and create barriers to research designs. Similarly, I describe the process of responding to reviewers' concerns as part of the publication process. Peer reviewers' lack of familiarity with the use of photography as a research tool may lead to misunderstandings and inappropriate requests for manuscript changes. While many concerns are sound, the range of benefits stemming from the use of visual data help to justify the time and energy required to defend this type of research. Implications are discussed for researchers using visual methods in their work. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1503274

  9. Conclusion of LOD-score analysis for family data generated under two-locus models.

    Science.gov (United States)

    Dizier, M H; Babron, M C; Clerget-Darpoux, F

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.

  10. Conclusions of LOD-score analysis for family data generated under two-locus models

    Energy Technology Data Exchange (ETDEWEB)

    Dizier, M.H.; Babron, M.C.; Clerget-Darpoux, F. [Unite de Recherches d'Epidemiologie Genetique, Paris (France)

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. 17 refs., 3 tabs.

  11. The Use of Imputed Sibling Genotypes in Sibship-Based Association Analysis: On Modeling Alternatives, Power and Model Misspecification

    NARCIS (Netherlands)

    Minica, C.C.; Dolan, C.V.; Willemsen, G.; Vink, J.M.; Boomsma, D.I.

    2013-01-01

    When phenotypic, but no genotypic data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of

  12. Islamic vs. conventional banks : Business models, efficiency and stability

    NARCIS (Netherlands)

    Beck, T.H.L.; Demirgüc-Kunt, A.; Merrouche, O.

    2013-01-01

    How different are Islamic banks from conventional banks? Does the recent crisis justify a closer look at the Sharia-compliant business model for banking? When comparing conventional and Islamic banks, controlling for time-variant country-fixed effects, we find few significant differences in business

  13. Non-exponential extinction of radiation by fractional calculus modelling

    International Nuclear Information System (INIS)

    Casasanta, G.; Ciani, D.; Garra, R.

    2012-01-01

    Possible deviations from exponential attenuation of radiation in a random medium have been recently studied in several works. These deviations from the classical Beer-Lambert law were justified from a stochastic point of view by Kostinski (2001). In his model he introduced the spatial correlation among the random variables, i.e. a space memory. In this note we introduce a different approach, including a memory formalism in the classical Beer-Lambert law through fractional calculus modelling. We find a generalized Beer-Lambert law in which the exponential memoryless extinction is only a special case of non-exponential extinction solutions described by Mittag-Leffler functions. We also justify this result from a stochastic point of view, using the space fractional Poisson process. Moreover, we discuss some concrete advantages of this approach from an experimental point of view, giving an estimate of the deviation from the exponential extinction law, varying the optical depth. This is also an interesting model to understand the meaning of the fractional derivative as an instrument to transmit randomness of microscopic dynamics to the macroscopic scale.
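
    A short numerical sketch comparing classical exponential extinction with a Mittag-Leffler extinction E_alpha(-tau^alpha), one common fractional-calculus generalisation of the Beer-Lambert law; the memory parameter alpha and the optical depths are assumptions, and the truncated series is only adequate at moderate depths.

      # Sketch: exponential Beer-Lambert extinction vs a Mittag-Leffler extinction
      # E_alpha(-tau**alpha), one common fractional-calculus generalisation. The
      # truncated series is adequate only for moderate optical depths; alpha is assumed.
      import math

      def mittag_leffler(z, alpha, terms=80):
          return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

      alpha = 0.9                      # memory parameter; alpha = 1 recovers exp(-tau)
      for tau in (0.5, 1.0, 2.0, 4.0):
          exp_ext = math.exp(-tau)
          ml_ext = mittag_leffler(-(tau ** alpha), alpha)
          print(f"tau={tau:4.1f}  exponential={exp_ext:.4f}  Mittag-Leffler={ml_ext:.4f}")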

  14. Formal structures for extracting analytically justifiable decisions from ...

    African Journals Online (AJOL)

    This paper identifies the benefits of transforming business process models into Decision Support Systems (DSS). However, the literature reveals that a business process model “should have a formal foundation” as a major requirement for transforming it into a DSS. The paper further ascertains that formal structures refer to ...

  15. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  16. The use of logistic regression in modelling the distributions of bird ...

    African Journals Online (AJOL)

    The method of logistic regression was used to model the observed geographical distribution patterns of bird species in Swaziland in relation to a set of environmental variables. Reporting rates derived from bird atlas data are used as an index of population densities. This is justified in part by the success of the modelling ...

  17. Proposal for product development model focused on ce certification methodology

    Directory of Open Access Journals (Sweden)

    Nathalia Marcia Goulart Pinheiro

    2015-09-01

    Full Text Available This paper presents a critical analysis comparing 21 product development models in order to identify whether these structures meet the demands of Product Certification of the European Community (CE). Furthermore, it presents a product development model, comprising the steps in the models analyzed, including improvements in activities for the referred product certification. The proposed improvements are justified by the growing quest for the internationalization of products and processes within companies.

  18. The EU Seal Products Ban – Why Ineffective Animal Welfare Protection Cannot Justify Trade Restrictions under European and International Trade Law

    Directory of Open Access Journals (Sweden)

    Martin Hennig

    2015-03-01

    Full Text Available In this article, the author questions the legitimacy of the general ban on trade in seal products adopted by the European Union. It is submitted that the EU Seal Regime, which permits the marketing of Greenlandic seal products derived from Inuit hunts, but excludes Canadian and Norwegian seal products from the European market, does not ensure a satisfactory degree of animal welfare protection in order to justify the comprehensive trade restriction in place. It is argued that the current ineffective EU ban on seal products, which according to the WTO Appellate Body cannot be reconciled with the objective of protecting animal welfare, has no legal basis in EU Treaties and should be annulled.

  19. The contribution of Skyrme Hartree-Fock calculations to the understanding of the shell model

    International Nuclear Information System (INIS)

    Zamick, L.

    1984-01-01

    The authors present a detailed comparison of Skyrme Hartree-Fock and the shell model. The H-F calculations are sensitive to the parameters that are chosen. The H-F results justify the use of effective charges in restricted model space calculations by showing that the core contribution can be large. Further, the H-F results roughly justify the use of a constant E2 effective charge, but seem to yield nucleus-dependent E4 effective charges. The H-F calculations can yield results for E6 and higher multipoles, which would be zero in s-d model space calculations. On the other hand, in H-F the authors can easily consider only the lowest rotational band, whereas in the shell model one can calculate the energies and properties of many more states. In the comparison some apparent problems remain, in particular E4 transitions in the upper half of the s-d shell

  20. Kinematic Cosmology & a new ``Steady State'' Model of Continued Creation

    Science.gov (United States)

    Wegener, Mogens

    2006-03-01

    Only a new "steady state" model justifies the observations of fully mature galaxies at ever increasing distances. The basic idea behind the world model presented here, which is a synthesis of the cosmologies of Parmenides and Herakleitos, is that the invariant structure of the infinite contents of a universe in flux may be depicted as a finite hyperbolic pseudo-sphere.

  1. Is a Clean Development Mechanism project economically justified? Case study of an International Carbon Sequestration Project in Iran.

    Science.gov (United States)

    Katircioglu, Salih; Dalir, Sara; Olya, Hossein G

    2016-01-01

    The present study evaluates a carbon sequestration project for the three plant species in arid and semiarid regions of Iran. Results show that Haloxylon performed appropriately in the carbon sequestration process during the 6 years of the International Carbon Sequestration Project (ICSP). In addition to a high degree of carbon dioxide sequestration, Haloxylon shows high compatibility with severe environmental conditions and low maintenance costs. Financial and economic analysis demonstrated that the ICSP was justified from an economic perspective. The financial assessment showed that net present value (NPV) (US$1,098,022.70), internal rate of return (IRR) (21.53%), and payback period (6 years) were in an acceptable range. The results of the economic analysis suggested an NPV of US$4,407,805.15 and an IRR of 50.63%. Therefore, results of this study suggest that there are sufficient incentives for investors to participate in such kind of Clean Development Mechanism (CDM) projects.
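
    A minimal sketch of the financial appraisal quantities mentioned above (NPV, IRR, payback period), computed with the numpy-financial package on hypothetical cash flows; the figures below are not the ICSP values reported in the study.

      # Sketch: NPV, IRR and payback period with the numpy-financial package on
      # hypothetical cash flows (year-0 outlay followed by yearly net benefits).
      # These figures are not the ICSP values reported in the study.
      import numpy_financial as npf

      rate = 0.10                                   # assumed discount rate
      cash_flows = [-1_000_000, 150_000, 200_000, 250_000, 300_000, 350_000, 400_000]

      npv = npf.npv(rate, cash_flows)               # first value is taken at time 0
      irr = npf.irr(cash_flows)
      cumulative, payback = 0.0, None
      for year, cf in enumerate(cash_flows):
          cumulative += cf
          if payback is None and cumulative >= 0:
              payback = year
      print(f"NPV = {npv:,.0f}  IRR = {irr:.1%}  payback year = {payback}")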

  2. Optimising the management of complex dynamic ecosystems. An ecological-economic modelling approach

    NARCIS (Netherlands)

    Hein, L.G.

    2005-01-01

    Keywords: ecological-economic modelling; ecosystem services; resource use; efficient; sustainability; wetlands, rangelands.

  3. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates are skewed, the overall mean and conventional I 2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
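
    The paper estimates its model in a Bayesian framework; the sketch below is only a crude frequentist stand-in for the idea: Box-Cox-transform positive treatment-effect estimates with scipy, carry the within-study variances over by the delta method, and pool with a DerSimonian-Laird random-effects model. All inputs are hypothetical.

      # Crude frequentist sketch of the idea (the paper itself is Bayesian): Box-Cox
      # transform positive treatment-effect estimates, carry the within-study variances
      # over by the delta method, and pool with DerSimonian-Laird. Inputs are hypothetical.
      import numpy as np
      from scipy import stats

      y = np.array([1.2, 0.8, 2.5, 1.9, 3.1, 0.6])        # positive effect estimates
      v = np.array([0.10, 0.08, 0.30, 0.15, 0.40, 0.05])  # within-study variances

      z, lam = stats.boxcox(y)                     # transformed estimates, MLE of lambda
      vz = (y ** (lam - 1)) ** 2 * v               # delta-method variances on the new scale

      w = 1.0 / vz
      q = np.sum(w * (z - np.sum(w * z) / np.sum(w)) ** 2)
      tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
      w_star = 1.0 / (vz + tau2)
      pooled_z = np.sum(w_star * z) / np.sum(w_star)

      # Back-transform the pooled summary to the original scale (median-type summary)
      pooled_y = np.exp(pooled_z) if lam == 0 else (lam * pooled_z + 1) ** (1 / lam)
      print(round(lam, 2), round(float(pooled_y), 2), round(float(tau2), 3))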

  4. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates are skewed, the overall mean and conventional I 2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and

  5. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
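
    A minimal sketch of a multi-variable parametric cost model of the kind discussed: a power-law relation fitted by ordinary least squares on log-transformed data. The synthetic apertures, masses and costs are invented for illustration and are not the NASA telescope dataset.

      # Sketch: a two-variable parametric cost model, cost = a * aperture^b * mass^c,
      # fitted by ordinary least squares on log-transformed synthetic data. The numbers
      # are invented for illustration and are not the NASA telescope dataset.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 22
      aperture = rng.uniform(0.3, 6.5, n)                                   # metres
      mass = 500 * aperture ** 1.4 * rng.lognormal(0, 0.2, n)               # kg (synthetic)
      cost = 100 * aperture ** 1.2 * mass ** 0.3 * rng.lognormal(0, 0.3, n) # $M (synthetic)

      X = np.column_stack([np.ones(n), np.log(aperture), np.log(mass)])
      (log_a, b, c), *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
      print(f"cost ~ {np.exp(log_a):.1f} * aperture^{b:.2f} * mass^{c:.2f}")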

  6. Robust inference in the negative binomial regression model with an application to falls data.

    Science.gov (United States)

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well-known to be sensitive to model misspecifications, taking the form of patients falling much more than expected in such intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
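
    For orientation, the sketch below fits the classical (non-robust) negative binomial regression with statsmodels on synthetic falls-like data; the robust M-estimators described in the abstract additionally bound the Pearson residuals or deviance components, which this baseline does not. All values are assumptions.

      # Sketch: the classical negative binomial regression fit that the robust
      # M-estimators are designed to safeguard, on synthetic falls-like counts.
      # The robust versions additionally bound the Pearson residuals or deviance
      # components, which is not done here; all values are assumptions.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 200
      exercise = rng.integers(0, 2, n)            # 1 = exercise programme
      age = rng.normal(70, 5, n)
      mu = np.exp(1.0 - 0.4 * exercise + 0.02 * (age - 70))
      alpha = 0.8                                  # overdispersion (assumed known here)
      falls = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

      X = sm.add_constant(np.column_stack([exercise, age - 70]))
      fit = sm.GLM(falls, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
      print(fit.params)   # a few extreme counts can pull these estimates strongly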

  7. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...... efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with results of other semi-parametric stochastic frontier models and of traditional parametric stochastic...

  8. Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Overgaard, Rune Viig; Agerso, H.

    2005-01-01

    Purpose. The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. Methods. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise ... degarelix. Conclusions. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained ...
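
    A toy sketch of the filtering idea behind SDE-based PK modelling: a linear one-compartment model with system noise, observed with assay noise and tracked by a (here ordinary, one-dimensional) Kalman filter. The NONMEM work embeds the extended Kalman filter inside population mixed-effects estimation; all parameter values below are assumptions.

      # Toy sketch of the filtering idea behind SDE-based PK models: a linear
      # one-compartment model dA/dt = -k*A plus system noise, observed as a noisy
      # concentration y = A/V + e and tracked with a one-dimensional Kalman filter.
      # The NONMEM work uses the extended Kalman filter inside population estimation;
      # all parameter values here are assumptions.
      import numpy as np

      rng = np.random.default_rng(7)
      k, V = 0.2, 10.0                  # elimination rate (1/h) and volume (L)
      dt, n_obs = 0.5, 24
      q, r = 0.05, 0.04                 # system-noise and measurement-noise variances

      A, A_true, y = 100.0, [], []      # 100 mg dose at time zero
      for _ in range(n_obs):
          A = A * np.exp(-k * dt) + rng.normal(0, np.sqrt(q))
          A_true.append(A)
          y.append(A / V + rng.normal(0, np.sqrt(r)))

      F, H = np.exp(-k * dt), 1.0 / V   # state transition and observation "matrices"
      a_est, p_est, estimates = 100.0, 1.0, []
      for obs in y:
          a_pred, p_pred = F * a_est, F * p_est * F + q     # prediction step
          s = H * p_pred * H + r                            # innovation variance
          gain = p_pred * H / s
          a_est = a_pred + gain * (obs - H * a_pred)        # measurement update
          p_est = (1 - gain * H) * p_pred
          estimates.append(a_est)

      print(np.round(estimates[:5], 1), np.round(A_true[:5], 1))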

  9. A mathematical model of star formation in the Galaxy

    Directory of Open Access Journals (Sweden)

    M.A. Sharaf

    2012-06-01

    Full Text Available This paper is generally concerned with star formation in the Galaxy, especially blue stars. Blue stars are the most luminous, the most massive, and the largest in radius. A simple mathematical model of star formation is established and implemented as a computational algorithm, which enables us to learn more about how such stars form. Some real and artificial examples have been used to justify this model.

  10. Multiscale Modelling and Analysis of Collective Decision Making in Swarm Robotics

    Science.gov (United States)

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable. PMID:25369026

  11. Multiscale modelling and analysis of collective decision making in swarm robotics.

    Science.gov (United States)

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.

  12. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model projected by the LOMCE is rooted in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation placed at the center of the teaching-learning process. The planning is poor, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  13. Estimating the location and spatial extent of a covert anthrax release.

    Directory of Open Access Journals (Sweden)

    Judith Legrand

    2009-01-01

    Full Text Available Rapidly identifying the features of a covert release of an agent such as anthrax could help to inform the planning of public health mitigation strategies. Previous studies have sought to estimate the time and size of a bioterror attack based on the symptomatic onset dates of early cases. We extend the scope of these methods by proposing a method for characterizing the time, strength, and also the location of an aerosolized pathogen release. A back-calculation method is developed allowing the characterization of the release based on the data on the first few observed cases of the subsequent outbreak, meteorological data, population densities, and data on population travel patterns. We evaluate this method on small simulated anthrax outbreaks (about 25-35 cases) and show that it could date and localize a release after a few cases have been observed, although misspecification of the spore dispersion model or the within-host dynamics model, on which the method relies, can bias the estimates. Our method could also provide an estimate of the outbreak's geographical extent and, as a consequence, could help to identify populations at risk and, therefore, requiring prophylactic treatment. Our analysis demonstrates that while estimates based on the first 10 or 15 observed cases were more accurate and less sensitive to model misspecifications than those based on five cases, overall mortality is minimized by targeting prophylactic treatment early on the basis of estimates made using data on the first five cases. The method we propose could provide early estimates of the time, strength, and location of an aerosolized anthrax release and the geographical extent of the subsequent outbreak. In addition, estimates of release features could be used to parameterize more detailed models allowing the simulation of control strategies and intervention logistics.
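
    As a rough illustration of the back-calculation idea only (ignoring the spatial, meteorological, and dose-dependent components used in the paper), one can estimate a common release time by maximum likelihood from the symptom-onset dates of the first few cases, given an assumed incubation-period distribution; the lognormal parameters below are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize_scalar

# assumed incubation-period distribution (placeholder parameters)
incubation = lognorm(s=0.5, scale=10.0)          # median of about 10 days

def neg_log_lik(release_day, onset_days):
    """Negative log-likelihood of a common release time given onset dates."""
    delays = onset_days - release_day
    if np.any(delays <= 0):                      # onsets cannot precede the release
        return np.inf
    return -np.sum(incubation.logpdf(delays))

onsets = np.array([11.0, 12.5, 13.0, 14.0, 15.5])   # onset days of the first cases
res = minimize_scalar(neg_log_lik, bounds=(0.0, onsets.min() - 1e-6),
                      args=(onsets,), method="bounded")
print("estimated release day:", round(res.x, 2))
```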

  14. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article is a review of the first wireless standard based on cognitive radio networks and of the need for wireless networks built on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment. Plots are given to check the performance of the HTTP and FTP protocols in the CR network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  15. The disruption management model.

    Science.gov (United States)

    McAlister, James

    2011-10-01

    Within all organisations, business continuity disruptions present a set of dilemmas that managers may not have dealt with before in their normal daily duties. The disruption management model provides a simple but effective management tool to enable crisis management teams to stay focused on recovery in the midst of a business continuity incident. The model has four chronological primary headlines, which steer the team through a quick-time crisis decision-making process. The procedure facilitates timely, systematic, rationalised and justified decisions, which can withstand post-event scrutiny. The disruption management model has been thoroughly tested within an emergency services environment and is proven to significantly support clear and concise decision making in a business continuity context.

  16. ECONOMETRIC APPROACH OF HETEROSKEDASTICITY ON FINANCIAL TIME SERIES IN A GENERAL FRAMEWORK

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRĂU

    2012-12-01

    Full Text Available The aim of this paper is to provide an overview of diagnostic tests for detecting heteroskedasticity in financial time series. In financial econometrics, heteroskedasticity is generally associated with cross-sectional data but can also be identified when modeling time series data. The presence of heteroskedasticity in financial time series can be caused by specific factors such as model misspecification, an inadequate data transformation, or the presence of outliers. Heteroskedasticity arises when the homoskedasticity assumption is violated. Testing for the presence of heteroskedasticity in financial time series is performed by applying diagnostic tests such as the Breusch-Pagan LM test, White's test, the Glejser LM test, the Harvey-Godfrey LM test, the Park LM test, and the Goldfeld-Quandt test.
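
    Several of these diagnostics are available in statsmodels; the sketch below runs the Breusch-Pagan and White tests on the residuals of an OLS fit to simulated data that is heteroskedastic by construction (variable names and data are illustrative).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
e = rng.normal(size=n) * (0.5 + 0.5 * np.abs(x))   # error variance grows with |x|
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

bp_lm, bp_lm_p, bp_f, bp_f_p = het_breuschpagan(ols.resid, X)
w_lm, w_lm_p, w_f, w_f_p = het_white(ols.resid, X)
print(f"Breusch-Pagan LM p-value: {bp_lm_p:.4f}")   # small p-value: reject homoskedasticity
print(f"White test LM p-value:    {w_lm_p:.4f}")
```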

  17. Model Based Reasoning by Introductory Students When Analyzing Earth Systems and Societal Challenges

    Science.gov (United States)

    Holder, L. N.; Herbert, B. E.

    2014-12-01

    Understanding how students use their conceptual models to reason about societal challenges such as natural hazard risk assessment, environmental policy and management, and energy resources can improve instructional activity design that directly impacts student motivation and literacy. To address this question, we created four laboratory exercises for an introductory physical geology course at Texas A&M University that engage students in authentic scientific practices by using real-world problems and issues that affect societies, based on the theory of situated cognition. Our case-study design allows us to investigate the various ways that students utilize model-based reasoning to identify and propose solutions to societally relevant issues. In each of the four interventions, approximately 60 students in three sections of introductory physical geology were expected to represent and evaluate scientific data, make evidence-based claims about the data trends, use those claims to express conceptual models, and use their models to analyze societal challenges. Throughout each step of the laboratory exercise students were asked to justify their claims, models, and data representations using evidence and through the use of argumentation with peers. Cognitive apprenticeship was the foundation for instruction used to scaffold students so that in the first exercise they are given a partially completed model and in the last exercise students are asked to generate a conceptual model on their own. Student artifacts, including representation of earth systems, representation of scientific data, verbal and written explanations of models and scientific arguments, and written solutions to specific societal issues or environmental problems surrounding earth systems, were analyzed through the use of a rubric that modeled authentic expertise and students were sorted into three categories. Written artifacts were examined to identify student argumentation and

  18. Robust portfolio choice with ambiguity and learning about return predictability

    DEFF Research Database (Denmark)

    Larsen, Linda Sandris; Branger, Nicole; Munk, Claus

    2013-01-01

    We analyze the optimal stock-bond portfolio under both learning and ambiguity aversion. Stock returns are predictable by an observable and an unobservable predictor, and the investor has to learn about the latter. Furthermore, the investor is ambiguity-averse and has a preference for investment...... strategies that are robust to model misspecifications. We derive a closed-form solution for the optimal robust investment strategy. We find that both learning and ambiguity aversion impact the level and structure of the optimal stock investment. Suboptimal strategies resulting either from not learning...... or from not considering ambiguity can lead to economically significant losses....

  19. A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives.

    Science.gov (United States)

    Martinez, Aleix; Du, Shichuan

    2012-05-01

    In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expression of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can

  20. Some variations of the Kristallin-I near-field model

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P A; Curti, E [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1995-11-01

    The Kristallin-I project is an integrated analysis of the final disposal of vitrified high-level radioactive waste (HLW) in the crystalline basement of Northern Switzerland. It includes an analysis of the radiological consequences of radionuclide release from a repository. This analysis employs a chain of independent models for the near-field, geosphere and biosphere. In constructing these models, processes are incorporated that are believed to be relevant to repository safety, while other processes are neglected. In the present report, a set of simplified, steady-state models of the near-field is developed to investigate the possible effects of specific processes which are neglected in the time-dependent Kristallin-I near-field model. These processes are neglected, either because (i) they are thought unlikely to occur to a significant degree, or because (ii) they are likely to make a positive contribution to the performance of the near-field barrier to radionuclide migration, but are insufficiently understood to justify incorporating them in a safety assessment. The aim of this report is to investigate whether the arguments for neglecting these processes in the Kristallin-I near-field model can be justified. This work addresses the following topics: - radionuclide transport at the bentonite-host rock interface, - canister settlement, -chemical conditions and radionuclide transport at the glass-bentonite interface. (author) figs., tabs., refs.

  1. Some variations of the Kristallin-I near-field model

    International Nuclear Information System (INIS)

    Smith, P.A.; Curti, E.

    1995-11-01

    The Kristallin-I project is an integrated analysis of the final disposal of vitrified high-level radioactive waste (HLW) in the crystalline basement of Northern Switzerland. It includes an analysis of the radiological consequences of radionuclide release from a repository. This analysis employs a chain of independent models for the near-field, geosphere and biosphere. In constructing these models, processes are incorporated that are believed to be relevant to repository safety, while other processes are neglected. In the present report, a set of simplified, steady-state models of the near-field is developed to investigate the possible effects of specific processes which are neglected in the time-dependent Kristallin-I near-field model. These processes are neglected, either because (i) they are thought unlikely to occur to a significant degree, or because (ii) they are likely to make a positive contribution to the performance of the near-field barrier to radionuclide migration, but are insufficiently understood to justify incorporating them in a safety assessment. The aim of this report is to investigate whether the arguments for neglecting these processes in the Kristallin-I near-field model can be justified. This work addresses the following topics: - radionuclide transport at the bentonite-host rock interface, - canister settlement, -chemical conditions and radionuclide transport at the glass-bentonite interface. (author) figs., tabs., refs

  2. For Better or Worse? System-Justifying Beliefs in Sixth-Grade Predict Trajectories of Self-Esteem and Behavior Across Early Adolescence.

    Science.gov (United States)

    Godfrey, Erin B; Santos, Carlos E; Burson, Esther

    2017-06-19

    Scholars call for more attention to how marginalization influences the development of low-income and racial/ethnic minority youth and emphasize the importance of youth's subjective perceptions of contexts. This study examines how beliefs about the fairness of the American system (system justification) in sixth grade influence trajectories of self-esteem and behavior among 257 early adolescents (average age 11.4) from a diverse, low-income, middle school in an urban southwestern city. System justification was associated with higher self-esteem, less delinquent behavior, and better classroom behavior in sixth grade but worse trajectories of these outcomes from sixth to eighth grade. These findings provide novel evidence that system-justifying beliefs undermine the well-being of marginalized youth and that early adolescence is a critical developmental period for this process. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  3. Integrated population modeling of black bears in Minnesota: implications for monitoring and management.

    Directory of Open Access Journals (Sweden)

    John R Fieberg

    Full Text Available BACKGROUND: Wildlife populations are difficult to monitor directly because of costs and logistical challenges associated with collecting informative abundance data from live animals. By contrast, data on harvested individuals (e.g., age and sex) are often readily available. Increasingly, integrated population models are used for natural resource management because they synthesize various relevant data into a single analysis. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the performance of integrated population models applied to black bears (Ursus americanus) in Minnesota, USA. Models were constructed using sex-specific age-at-harvest matrices (1980-2008), data on hunting effort and natural food supplies (which affects hunting success), and statewide mark-recapture estimates of abundance (1991, 1997, 2002). We compared this approach to Downing reconstruction, a commonly used population monitoring method that utilizes only age-at-harvest data. We first conducted a large-scale simulation study, in which our integrated models provided more accurate estimates of population trends than did Downing reconstruction. Estimates of trends were robust to various forms of model misspecification, including incorrectly specified cub and yearling survival parameters, age-related reporting biases in harvest data, and unmodeled temporal variability in survival and harvest rates. When applied to actual data on Minnesota black bears, the model predicted that harvest rates were negatively correlated with food availability and positively correlated with hunting effort, consistent with independent telemetry data. With no direct data on fertility, the model also correctly predicted 2-point cycles in cub production. Model-derived estimates of abundance for the most recent years provided a reasonable match to an empirical population estimate obtained after modeling efforts were completed. CONCLUSIONS/SIGNIFICANCE: Integrated population modeling provided a reasonable

  4. Neimark-Sacker bifurcation for the discrete-delay Kaldor model

    International Nuclear Information System (INIS)

    Dobrescu, Loretti I.; Opris, Dumitru

    2009-01-01

    We consider a discrete-time, delayed Kaldor nonlinear business cycle model in income and capital. Given an investment function resembling the one discussed by Rodano, we use linear approximation analysis to establish the local stability properties and the local bifurcations in the parameter space. Finally, we give some numerical examples to justify the theoretical results.
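
    A minimal numerical sketch of a discrete-delay Kaldor-type map; the sigmoid investment function, the savings rule, and all parameter values below are assumptions chosen for illustration (not the specification analyzed by the authors), but they show how one would iterate such a model and inspect the trajectory for the persistent oscillations that emerge past a Neimark-Sacker bifurcation.

```python
import numpy as np

# Assumed, illustrative discrete-delay Kaldor-type model:
#   Y[t+1] = Y[t] + alpha * (I(Y[t-d], K[t]) - s * Y[t])
#   K[t+1] = (1 - delta) * K[t] + I(Y[t-d], K[t])
# with an S-shaped (sigmoid) investment function in delayed income.
alpha, s, delta, beta, d = 1.5, 0.25, 0.1, 0.2, 2

def investment(Y, K):
    return 0.5 + np.arctan(Y - 1.0) / np.pi - beta * K

T = 500
Y = np.full(T, 1.05)          # start near the stationary state
K = np.full(T, 1.0)
for t in range(d, T - 1):
    I = investment(Y[t - d], K[t])
    Y[t + 1] = Y[t] + alpha * (I - s * Y[t])
    K[t + 1] = (1 - delta) * K[t] + I

# past a Neimark-Sacker bifurcation the trajectory settles on a closed
# invariant curve; inspect the tail of the series for persistent oscillations
print(np.round(Y[-10:], 4))
```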

  5. Attitudes justifying domestic violence predict endorsement of corporal punishment and physical and psychological aggression towards children: a study in 25 low- and middle-income countries.

    Science.gov (United States)

    Lansford, Jennifer E; Deater-Deckard, Kirby; Bornstein, Marc H; Putnick, Diane L; Bradley, Robert H

    2014-05-01

    The Convention on the Rights of the Child has prompted countries to protect children from abuse and exploitation. Exposure to domestic violence and corporal punishment are risk factors in children's development. This study investigated how women's attitudes about domestic violence are related to attitudes about corporal punishment and harsh behaviors toward children, and whether country-wide norms regarding domestic violence and corporal punishment are related to psychological aggression and physical violence toward children. Data were drawn from the Multiple Indicator Cluster Survey, a nationally representative and internationally comparable household survey developed by the United Nations Children's Fund. Measures of domestic violence and discipline were completed by 85 999 female caregivers of children between the ages of 2 and 14 years from families in 25 low- and middle-income countries. Mothers who believed that husbands were justified in hitting their wives were more likely to believe that corporal punishment is necessary to rear children. Mothers who believed that husbands were justified in hitting their wives and that corporal punishment is necessary to rear children were more likely to report that their child had experienced psychological aggression and physical violence. Countrywide norms regarding the acceptability of husbands hitting wives and advisability of corporal punishment moderated the links between mothers' attitudes and their behaviors toward children. Pediatricians can address parents' psychological aggression and physical violence toward children by discussing parents' attitudes and behaviors within a framework that incorporates social norms regarding the acceptability of domestic violence and corporal punishment. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Cost effectiveness of recycling: A systems model

    Energy Technology Data Exchange (ETDEWEB)

    Tonjes, David J., E-mail: david.tonjes@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States); Waste Reduction and Management Institute, School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY 11794-5000 (United States); Center for Bioenergy Research and Development, Advanced Energy Research and Technology Center, Stony Brook University, 1000 Innovation Rd., Stony Brook, NY 11794-6044 (United States); Mallikarjun, Sreekanth, E-mail: sreekanth.mallikarjun@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States)

    2013-11-15

    Highlights: • Curbside collection of recyclables reduces overall system costs over a range of conditions. • When avoided costs for recyclables are large, even high collection costs are supported. • When avoided costs for recyclables are not great, there are reduced opportunities for savings. • For common waste compositions, maximizing curbside recyclables collection always saves money. - Abstract: Financial analytical models of waste management systems have often found that recycling costs exceed direct benefits, and in order to economically justify recycling activities, externalities such as household expenses or environmental impacts must be invoked. Certain more empirically based studies have also found that recycling is more expensive than disposal. Other work, both through models and surveys, have found differently. Here we present an empirical systems model, largely drawn from a suburban Long Island municipality. The model accounts for changes in distribution of effort as recycling tonnages displace disposal tonnages, and the seven different cases examined all show that curbside collection programs that manage up to between 31% and 37% of the waste stream should result in overall system savings. These savings accrue partially because of assumed cost differences in tip fees for recyclables and disposed wastes, and also because recycling can result in a more efficient, cost-effective collection program. These results imply that increases in recycling are justifiable due to cost-savings alone, not on more difficult to measure factors that may not impact program budgets.

  7. Endogeneity Corrected Stochastic Production Frontier and Technical Efficiency

    NARCIS (Netherlands)

    Shee, A.; Stefanou, S.E.

    2015-01-01

    A major econometric issue in estimating production parameters and technical efficiency is the possibility that some forces influencing production are only observed by the firm and not by the econometrician. Not only can this misspecification lead to a biased inference on the output elasticity of

  8. Does uncertainty justify intensity emission caps?

    International Nuclear Information System (INIS)

    Quirion, Philippe

    2005-01-01

    Environmental policies often set 'relative' or 'intensity' emission caps, i.e. emission limits proportional to the polluting firm's output. One of the arguments put forth in favour of relative caps is based on the uncertainty on business-as-usual output: if the firm's production level is higher than expected, so will be business-as-usual emissions, hence reaching a given level of emissions will be more costly than expected. As a consequence, it is argued, a higher emission level should be allowed if the production level is more important than expected. We assess this argument with a stochastic analytical model featuring two random variables: the business-as-usual emission level, proportional to output, and the slope of the marginal abatement cost curve. We compare the relative cap to an absolute cap and to a price instrument, in terms of welfare impact. It turns out that in most plausible cases, either a price instrument or an absolute cap yields a higher expected welfare than a relative cap. Quantitatively, the difference in expected welfare is typically very small between the absolute and the relative cap but may be significant between the relative cap and the price instrument. (author)

  9. Comment on ''Spectroscopy of samarium isotopes in the sdg interacting boson model''

    International Nuclear Information System (INIS)

    Kuyucak, S.; Lac, V.

    1993-01-01

    We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper

  10. Computed tomography is not justified in every pediatric blunt trauma patient with a suspicious mechanism of injury.

    Science.gov (United States)

    Hershkovitz, Yehuda; Zoarets, Itai; Stepansky, Albert; Kozer, Eran; Shapira, Zahar; Klin, Baruch; Halevy, Ariel; Jeroukhimov, Igor

    2014-07-01

    Computed tomography (CT) has become an important tool for the diagnosis of intra-abdominal and chest injuries in patients with blunt trauma. The role of CT in conscious asymptomatic patients with a suspicious mechanism of injury remains controversial. This controversy intensifies in the management of pediatric blunt trauma patients, who are much more susceptible to radiation exposure. The objective of this study was to evaluate the role of abdominal and chest CT imaging in asymptomatic pediatric patients with a suspicious mechanism of injury. Forty-two pediatric patients up to 15 years old were prospectively enrolled. All patients presented with a suspicious mechanism of blunt trauma and multisystem injury. They were neurologically intact and had no signs of injury to the abdomen or chest. Patients underwent CT imaging of the chest and abdomen as part of the initial evaluation. Thirty-one patients (74%) had a normal CT scan. Two patients of 11 with an abnormal CT scan required a change in management and were referred for observation in the Intensive Care Unit. None of the patients required surgical intervention. The routine use of CT in asymptomatic pediatric patients with a suspicious mechanism of blunt trauma injury is not justified. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Messing Up Texas?: A Re-Analysis of the Effects of Executions on Homicides.

    Directory of Open Access Journals (Sweden)

    Patrick T Brandt

    Full Text Available Executions in Texas from 1994-2005 do not deter homicides, contrary to the results of Land et al. (2009). We find that using different models--based on pre-tests for unit roots that correct for earlier model misspecifications--one cannot reject the null hypothesis that executions do not lead to a change in homicides in Texas over this period. Using additional control variables, we show that variables such as the number of prisoners in Texas may drive the main drop in homicides over this period. Such conclusions, however, are highly sensitive to model specification decisions, calling into question the assumptions about fixed parameters and constant structural relationships. This means that using dynamic regressions to account for policy changes that may affect homicides needs to be done with significant care and attention.

  12. Messing Up Texas?: A Re-Analysis of the Effects of Executions on Homicides.

    Science.gov (United States)

    Brandt, Patrick T; Kovandzic, Tomislav V

    2015-01-01

    Executions in Texas from 1994-2005 do not deter homicides, contrary to the results of Land et al. (2009). We find that using different models--based on pre-tests for unit roots that correct for earlier model misspecifications--one cannot reject the null hypothesis that executions do not lead to a change in homicides in Texas over this period. Using additional control variables, we show that variables such as the number of prisoners in Texas may drive the main drop in homicides over this period. Such conclusions, however, are highly sensitive to model specification decisions, calling into question the assumptions about fixed parameters and constant structural relationships. This means that using dynamic regressions to account for policy changes that may affect homicides needs to be done with significant care and attention.

  13. Investigation on Self-Organization Processes in DC Generators by Synergetic Modeling

    OpenAIRE

    Ion Voncilă; Mădălin Costin; Răzvan Buhosu

    2014-01-01

    In this paper a new mathematical model is suggested, on the basis of which the self-excitation of DC generators, with either shunt or series excitation, can be justified by self-organization phenomena that appear once threshold values are exceeded (self-excitation in these generators is an avalanche process, a positive feedback, considered at first glance uncontrollable).

  14. Modelling of air-conditioned and heated spaces

    Energy Technology Data Exchange (ETDEWEB)

    Moehl, U

    1987-01-01

    A space represents a complex system involving numerous components, manipulated variables and disturbances, which need to be described if the dynamic behaviour of the space air is to be determined. A justifiable amount of simulation input is achieved by applying suitably adapted models of the individual components. The determination of natural air exchange in heated spaces and of the air flow in air-conditioned spaces are primary sources of uncertainty. (orig.).

  15. The Separate Spheres Model of Gendered Inequality.

    Science.gov (United States)

    Miller, Andrea L; Borgida, Eugene

    2016-01-01

    Research on role congruity theory and descriptive and prescriptive stereotypes has established that when men and women violate gender stereotypes by crossing spheres, with women pursuing career success and men contributing to domestic labor, they face backlash and economic penalties. Less is known, however, about the types of individuals who are most likely to engage in these forms of discrimination and the types of situations in which this is most likely to occur. We propose that psychological research will benefit from supplementing existing research approaches with an individual differences model of support for separate spheres for men and women. This model allows psychologists to examine individual differences in support for separate spheres as they interact with situational and contextual forces. The separate spheres ideology (SSI) has existed as a cultural idea for many years but has not been operationalized or modeled in social psychology. The Separate Spheres Model presents the SSI as a new psychological construct characterized by individual differences and a motivated system-justifying function, operationalizes the ideology with a new scale measure, and models the ideology as a predictor of some important gendered outcomes in society. As a first step toward developing the Separate Spheres Model, we develop a new measure of individuals' endorsement of the SSI and demonstrate its reliability, convergent validity, and incremental predictive validity. We provide support for the novel hypotheses that the SSI predicts attitudes regarding workplace flexibility accommodations, income distribution within families between male and female partners, distribution of labor between work and family spheres, and discriminatory workplace behaviors. Finally, we provide experimental support for the hypothesis that the SSI is a motivated, system-justifying ideology.

  16. The Separate Spheres Model of Gendered Inequality.

    Directory of Open Access Journals (Sweden)

    Andrea L Miller

    Full Text Available Research on role congruity theory and descriptive and prescriptive stereotypes has established that when men and women violate gender stereotypes by crossing spheres, with women pursuing career success and men contributing to domestic labor, they face backlash and economic penalties. Less is known, however, about the types of individuals who are most likely to engage in these forms of discrimination and the types of situations in which this is most likely to occur. We propose that psychological research will benefit from supplementing existing research approaches with an individual differences model of support for separate spheres for men and women. This model allows psychologists to examine individual differences in support for separate spheres as they interact with situational and contextual forces. The separate spheres ideology (SSI) has existed as a cultural idea for many years but has not been operationalized or modeled in social psychology. The Separate Spheres Model presents the SSI as a new psychological construct characterized by individual differences and a motivated system-justifying function, operationalizes the ideology with a new scale measure, and models the ideology as a predictor of some important gendered outcomes in society. As a first step toward developing the Separate Spheres Model, we develop a new measure of individuals' endorsement of the SSI and demonstrate its reliability, convergent validity, and incremental predictive validity. We provide support for the novel hypotheses that the SSI predicts attitudes regarding workplace flexibility accommodations, income distribution within families between male and female partners, distribution of labor between work and family spheres, and discriminatory workplace behaviors. Finally, we provide experimental support for the hypothesis that the SSI is a motivated, system-justifying ideology.

  17. The Separate Spheres Model of Gendered Inequality

    Science.gov (United States)

    Miller, Andrea L.; Borgida, Eugene

    2016-01-01

    Research on role congruity theory and descriptive and prescriptive stereotypes has established that when men and women violate gender stereotypes by crossing spheres, with women pursuing career success and men contributing to domestic labor, they face backlash and economic penalties. Less is known, however, about the types of individuals who are most likely to engage in these forms of discrimination and the types of situations in which this is most likely to occur. We propose that psychological research will benefit from supplementing existing research approaches with an individual differences model of support for separate spheres for men and women. This model allows psychologists to examine individual differences in support for separate spheres as they interact with situational and contextual forces. The separate spheres ideology (SSI) has existed as a cultural idea for many years but has not been operationalized or modeled in social psychology. The Separate Spheres Model presents the SSI as a new psychological construct characterized by individual differences and a motivated system-justifying function, operationalizes the ideology with a new scale measure, and models the ideology as a predictor of some important gendered outcomes in society. As a first step toward developing the Separate Spheres Model, we develop a new measure of individuals’ endorsement of the SSI and demonstrate its reliability, convergent validity, and incremental predictive validity. We provide support for the novel hypotheses that the SSI predicts attitudes regarding workplace flexibility accommodations, income distribution within families between male and female partners, distribution of labor between work and family spheres, and discriminatory workplace behaviors. Finally, we provide experimental support for the hypothesis that the SSI is a motivated, system-justifying ideology. PMID:26800454

  18. Kinetic models in spin chemistry. 1. The hyperfine interaction

    DEFF Research Database (Denmark)

    Mojaza, M.; Pedersen, J. B.

    2012-01-01

    Kinetic models for quantum systems are quite popular due to their simplicity, although they are difficult to justify. We show that the transformation from quantum to kinetic description can be done exactly for the hyperfine interaction of one nuclei with arbitrary spin; more spins are described w...... induced enhancement of the reaction yield. (C) 2012 Elsevier B.V. All rights reserved....

  19. Cointegration and Econometric Analysis of Non-Stationary Data in ...

    African Journals Online (AJOL)

    This is in conformity with the philosophy underlying cointegration theory. Therefore, ignoring cointegration in non-stationary time series variables could lead to misspecification of the underlying process in the determination of corporate income tax in Nigeria. Thus, the study concludes that cointegration is greatly enhanced ...

  20. Developing a Model for Assigning Senior Officers in the Brazilian Air Force

    Science.gov (United States)

    2015-03-01

    Bandura, Albert. 1977. Social Learning Theory. Englewood Cliffs, NJ: Prentice-Hall. Black, Gene. 2014. “Surface Warfare Officer Community Brief...1982, 565) use social learning theory to justify the model. According to this theory, human behavior can be explained in terms of “continuous...

  1. Semi-analytical model for a slab one-dimensional photonic crystal

    Science.gov (United States)

    Libman, M.; Kondratyev, N. M.; Gorodetsky, M. L.

    2018-02-01

    In our work we justify the applicability of a dielectric mirror model to the description of a real photonic crystal. We demonstrate that a simple one-dimensional model of a multilayer mirror can be employed for modeling of a slab waveguide with periodically changing width. It is shown that this width change can be recalculated to the effective refraction index modulation. The applicability of transfer matrix method of reflection properties calculation was demonstrated. Finally, our 1-D model was employed to analyze reflection properties of a 2-D structure - a slab photonic crystal with a number of elliptic holes.
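
    For reference, the transfer matrix calculation mentioned above can be sketched as follows for a multilayer mirror at normal incidence; the layer indices, thicknesses, and design wavelength are generic illustrative values, not those of the slab structure studied in the paper.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength          # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.5):
    """Reflectance of a multilayer stack between incident medium and substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    r = ((n_in * m11 + n_in * n_sub * m12 - m21 - n_sub * m22) /
         (n_in * m11 + n_in * n_sub * m12 + m21 + n_sub * m22))
    return abs(r) ** 2

# quarter-wave stack of alternating high/low index layers at 1550 nm
lam0 = 1550e-9
nH, nL, pairs = 2.1, 1.45, 8
n_layers = [nH, nL] * pairs
d_layers = [lam0 / (4 * nH), lam0 / (4 * nL)] * pairs
print(f"R at the design wavelength: {reflectance(n_layers, d_layers, lam0):.4f}")
```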

  2. Impact of copula directional specification on multi-trial evaluation of surrogate endpoints

    Science.gov (United States)

    Renfro, Lindsay A.; Shang, Hongwei; Sargent, Daniel J.

    2014-01-01

    Evaluation of surrogate endpoints using patient-level data from multiple trials is the gold standard, where multi-trial copula models are used to quantify both patient-level and trial-level surrogacy. While limited consideration has been given in the literature to copula choice (e.g., Clayton), no prior consideration has been given to direction of implementation (via survival versus distribution functions). We demonstrate that even with the “correct” copula family, directional misspecification leads to biased estimates of patient-level and trial-level surrogacy. We illustrate with a simulation study and a re-analysis of disease-free survival as a surrogate for overall survival in early stage colon cancer. PMID:24905465
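
    The directional issue can be made concrete with a small numeric sketch: applying a Clayton copula to the survival functions versus to the distribution functions of the same margins yields different joint models (the tail dependence flips), so the two directions are not interchangeable; the exponential margins and copula parameter below are arbitrary illustrative choices, not the trial data analyzed in the paper.

```python
import numpy as np
from scipy.stats import expon

def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

margin = expon(scale=1.0)          # identical exponential margins for two endpoints
t1 = t2 = 2.0                      # a point in the joint upper tail

# "distribution" direction: F(t1, t2) = C(F1(t1), F2(t2))
F_joint = clayton(margin.cdf(t1), margin.cdf(t2))
p_dist = 1 - margin.cdf(t1) - margin.cdf(t2) + F_joint   # P(T1 > t1, T2 > t2)

# "survival" direction: S(t1, t2) = C(S1(t1), S2(t2))
p_surv = clayton(margin.sf(t1), margin.sf(t2))

print(f"P(T1>2, T2>2), copula on CDFs:      {p_dist:.4f}")
print(f"P(T1>2, T2>2), copula on survivals: {p_surv:.4f}")
```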

  3. Investigation on Self-Organization Processes in DC Generators by Synergetic Modeling

    Directory of Open Access Journals (Sweden)

    Ion Voncilă

    2014-09-01

    Full Text Available In this paper a new mathematical model is suggested, on the basis of which the self-excitation of DC generators, with either shunt or series excitation, can be justified by self-organization phenomena that appear once threshold values are exceeded (self-excitation in these generators is an avalanche process, a positive feedback, considered at first glance uncontrollable).

  4. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    OpenAIRE

    Bakanauskienė Irena; Baronienė Laura

    2017-01-01

    This article is intended to theoretically justify the decision-making process model for cases when active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on an analysis of the scientific literature, a concept of controlled conditions is formulated and, using a rational approach to decision making, an 11-step decision-making process model under controlled intervention is presented. Also, there have been u...

  5. Justified Self-Esteem

    Science.gov (United States)

    Kristjansson, Kristjan

    2007-01-01

    This paper develops a thread of argument from previous contributions to this journal by Richard Smith and Ruth Cigman about the educational salience of self-esteem. It is argued--contra Smith and Cigman--that the social science conception of self-esteem does serve a useful educational function, most importantly in undermining the inflated…

  6. Justifier l’injustifiable

    Directory of Open Access Journals (Sweden)

    Olivier Jouanjan

    2006-04-01

    Full Text Available "Law" also resides in the discourses held about it, in particular the discourses of jurists. An analysis of the discourses of the committed jurists of the Third Reich brings out a general scheme of justification, a generative grammatical principle of these discourses that can be described as "substantive decisionism". Legal positivism, because it was abstract and "Jewish", was designated as the principal enemy of Nazi "legal" science, a "science" that could conceive of itself only as political. By analysing the ideological-legal construction of the total State, the destruction of the notion of subjective rights, and the replacement of the concept of legal personality by a "concrete" notion of "being-a-member-of-the-community", and then by showing how these discourses operated in practice, the present contribution brings out the twofold logic of incorporation and incarnation at work in the Nazi science of law, a "science" whose "theory" Carl Schmitt provided in 1934 through "concrete-order thinking".

  7. EM Adaptive LASSO – A Multilocus Modeling Strategy for Detecting SNPs Associated With Zero-inflated Count Phenotypes

    Directory of Open Access Journals (Sweden)

    Himel eMallick

    2016-03-01

    Full Text Available Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely
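
    To illustrate only the EM component (without covariates, the adaptive LASSO penalty, or the coordinate-descent step described above), here is a minimal EM algorithm for estimating the mixing proportion and the Poisson mean of a zero-inflated Poisson sample; the data-generating values are arbitrary.

```python
import numpy as np

def zip_em(y, n_iter=200, tol=1e-8):
    """EM estimation of (pi, lambda) for a zero-inflated Poisson sample."""
    pi, lam = 0.5, max(y.mean(), 1e-6)            # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each zero is a structural zero
        post_zero = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the mixing weight and the Poisson mean
        pi_new = post_zero.mean()
        lam_new = np.sum((1 - post_zero) * y) / np.sum(1 - post_zero)
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

rng = np.random.default_rng(3)
n = 2000
true_pi, true_lam = 0.3, 2.5
structural_zero = rng.random(n) < true_pi
y = np.where(structural_zero, 0, rng.poisson(true_lam, size=n))

print(zip_em(y))    # estimates should be close to (0.3, 2.5)
```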

  8. A Framework for Developing the Structure of Public Health Economic Models.

    Science.gov (United States)

    Squires, Hazel; Chilcott, James; Akehurst, Ronald; Burr, Jennifer; Kelly, Michael P

    2016-01-01

    A conceptual modeling framework is a methodology that assists modelers through the process of developing a model structure. Public health interventions tend to operate in dynamically complex systems. Modeling public health interventions requires broader considerations than clinical ones. Inappropriately simple models may lead to poor validity and credibility, resulting in suboptimal allocation of resources. This article presents the first conceptual modeling framework for public health economic evaluation. The framework presented here was informed by literature reviews of the key challenges in public health economic modeling and existing conceptual modeling frameworks; qualitative research to understand the experiences of modelers when developing public health economic models; and piloting a draft version of the framework. The conceptual modeling framework comprises four key principles of good practice and a proposed methodology. The key principles are that 1) a systems approach to modeling should be taken; 2) a documented understanding of the problem is imperative before and alongside developing and justifying the model structure; 3) strong communication with stakeholders and members of the team throughout model development is essential; and 4) a systematic consideration of the determinants of health is central to identifying the key impacts of public health interventions. The methodology consists of four phases: phase A, aligning the framework with the decision-making process; phase B, identifying relevant stakeholders; phase C, understanding the problem; and phase D, developing and justifying the model structure. Key areas for further research involve evaluation of the framework in diverse case studies and the development of methods for modeling individual and social behavior. This approach could improve the quality of Public Health economic models, supporting efficient allocation of scarce resources. Copyright © 2016 International Society for Pharmacoeconomics

  9. Had We But World Enough, and Time... But We Don't!: Justifying the Thermodynamic and Infinite-Time Limits in Statistical Mechanics

    Science.gov (United States)

    Palacios, Patricia

    2018-05-01

    In this paper, I compare the use of the thermodynamic limit in the theory of phase transitions with the infinite-time limit in the explanation of equilibrium statistical mechanics. In the case of phase transitions, I will argue that the thermodynamic limit can be justified pragmatically since the limit behavior (i) also arises before we get to the limit and (ii) for values of N that are physically significant. However, I will contend that the justification of the infinite-time limit is less straightforward. In fact, I will point out that even in cases where one can recover the limit behavior for finite t, i.e. before we get to the limit, one cannot recover this behavior for realistic time scales. I will claim that this leads us to reconsider the role that the rate of convergence plays in the justification of infinite limits and calls for a revision of the so-called Butterfield's principle.

  10. Had We But World Enough, and Time... But We Don't!: Justifying the Thermodynamic and Infinite-Time Limits in Statistical Mechanics

    Science.gov (United States)

    Palacios, Patricia

    2018-04-01

    In this paper, I compare the use of the thermodynamic limit in the theory of phase transitions with the infinite-time limit in the explanation of equilibrium statistical mechanics. In the case of phase transitions, I will argue that the thermodynamic limit can be justified pragmatically since the limit behavior (i) also arises before we get to the limit and (ii) for values of N that are physically significant. However, I will contend that the justification of the infinite-time limit is less straightforward. In fact, I will point out that even in cases where one can recover the limit behavior for finite t, i.e. before we get to the limit, one cannot recover this behavior for realistic time scales. I will claim that this leads us to reconsider the role that the rate of convergence plays in the justification of infinite limits and calls for a revision of the so-called Butterfield's principle.

  11. A computerized model for integrating the physical environmental factors into metropolitan landscape planning

    Science.gov (United States)

    Julius Gy Fabos; Kimball H. Ferris

    1977-01-01

    This paper justifies and illustrates (in simplified form) a landscape planning approach to the environmental management of the metropolitan landscape. The model utilizes a computerized assessment and mapping system, which exhibits a recent advancement in computer technology that allows for greater accuracy and the weighting of different values when mapping at the...

  12. Diagnosing and dealing with multicollinearity.

    Science.gov (United States)

    Schroeder, M A

    1990-04-01

    The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem, and then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and sources of error associated with alternative, biased, least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
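
    As a practical illustration of the first step (becoming aware of the problem), variance inflation factors are a common diagnostic; the sketch below uses statsmodels on two deliberately collinear predictors (all names and data are illustrative).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)    # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

# a VIF above roughly 10 is a common rule-of-thumb signal of problematic collinearity
for i, name in enumerate(X.columns):
    if name == "const":
        continue                            # the intercept's VIF is not meaningful
    print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.1f}")
```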

  13. Rethinking Recruitment in Policing in Australia: Can the Continued Use of Masculinised Recruitment Tests and Pass Standards that Limit the Number of Women be Justified?

    Directory of Open Access Journals (Sweden)

    Susan Robinson

    2015-06-01

    Full Text Available Over the past couple of decades, Australian police organisations have sought to increase the numbers of women in sworn policing roles by strictly adhering to equal treatment of men and women in the recruitment process. Unfortunately, this blind adherence to equal treatment in the recruitment processes may inadvertently disadvantage and limit women. In particular, the emphasis on masculine attributes in recruitment, as opposed to the ‘soft’ attributes of communication and conflict resolution skills, and the setting of the minimum pass standards according to average male performance, disproportionately disadvantages women and serves to unnecessarily limit the number of women in policing. This paper reviews studies undertaken by physiotherapists and a range of occupational experts to discuss the relevance of physical fitness and agility tests and the pass standards that are applied to these in policing. It is suggested that masculinised recruitment tests that pose an unnecessary barrier to women cannot be justified unless directly linked to the job that is to be undertaken. Utilising a policy development and review model, an analysis of the problem posed by physical testing that is unadjusted for gender is applied. As a result, it is recommended that police organisations objectively review recruitment processes and requirements to identify and eliminate unnecessary barriers to women’s entry to policing. It is also recommended that where fitness and agility tests are deemed essential to the job, the pass level be adjusted for gender.

  14. Photovoltaic array performance simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.
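
    As an example of the kind of simple performance relation such simulation programs build on (not the Sandia program described above), a PVWatts-style DC power estimate from plane-of-array irradiance and cell temperature; the rating and temperature coefficient are generic illustrative values.

```python
def pv_dc_power(poa_irradiance, cell_temp_c,
                p_dc_stc=5000.0, gamma=-0.004, t_ref=25.0):
    """PVWatts-style DC power estimate (illustrative coefficients).

    poa_irradiance : plane-of-array irradiance in W/m^2
    cell_temp_c    : cell temperature in degrees Celsius
    p_dc_stc       : DC rating at standard test conditions, in W
    gamma          : power temperature coefficient, per degree Celsius
    """
    return p_dc_stc * (poa_irradiance / 1000.0) * (1.0 + gamma * (cell_temp_c - t_ref))

print(pv_dc_power(800.0, 45.0))   # about 3680 W for these illustrative inputs
```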

  15. So many genes, so little time: A practical approach to divergence-time estimation in the genomic era.

    Science.gov (United States)

    Smith, Stephen A; Brown, Joseph W; Walker, Joseph F

    2018-01-01

    Phylogenomic datasets have been successfully used to address questions involving evolutionary relationships, patterns of genome structure, signatures of selection, and gene and genome duplications. However, despite the recent explosion in genomic and transcriptomic data, the utility of these data sources for efficient divergence-time inference remains unexamined. Phylogenomic datasets pose two distinct problems for divergence-time estimation: (i) the volume of data makes inference of the entire dataset intractable, and (ii) the extent of underlying topological and rate heterogeneity across genes makes model mis-specification a real concern. "Gene shopping", wherein a phylogenomic dataset is winnowed to a set of genes with desirable properties, represents an alternative approach that holds promise in alleviating these issues. We implemented an approach for phylogenomic datasets (available in SortaDate) that filters genes by three criteria: (i) clock-likeness, (ii) reasonable tree length (i.e., discernible information content), and (iii) least topological conflict with a focal species tree (presumed to have already been inferred). Such a winnowing procedure ensures that errors associated with model (both clock and topology) mis-specification are minimized, therefore reducing error in divergence-time estimation. We demonstrated the efficacy of this approach through simulation and applied it to published animal (Aves, Diplopoda, and Hymenoptera) and plant (carnivorous Caryophyllales, broad Caryophyllales, and Vitales) phylogenomic datasets. By quantifying rate heterogeneity across both genes and lineages we found that every empirical dataset examined included genes with clock-like, or nearly clock-like, behavior. Moreover, many datasets had genes that were clock-like, exhibited reasonable evolutionary rates, and were mostly compatible with the species tree. We identified overlap in age estimates when analyzing these filtered genes under strict clock and uncorrelated

  16. Preconditions of forming of loyalty management model in pharmaceutical institution

    Directory of Open Access Journals (Sweden)

    O. O. Molodozhonova

    2013-04-01

    Full Text Available The first stage of the mechanism for implementing a two-level model of efficient loyalty management is justified. It is based on the fundamental value systems underlying the formation of consumer commitment and the institutional commitment of pharmaceutical professionals. This stage involves recruitment, selection and an adaptation period for pharmaceutical professionals, together with the preliminary use of axiological questioning of consumers of pharmaceutical goods.

  17. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
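    A rough numerical illustration of the mixture idea, assuming a four-parameter logistic curve as the parametric candidate and a Nadaraya-Watson kernel smoother as the nonparametric one; the data are simulated, and the goodness-of-fit weight used here is a crude stand-in for the weighting scheme developed in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      dose = np.linspace(0.0, 10.0, 60)
      resp = 1.0 / (1.0 + np.exp(-(dose - 5.0))) + rng.normal(0.0, 0.05, dose.size)

      def logistic4(d, bottom, top, ed50, slope):
          return bottom + (top - bottom) / (1.0 + np.exp(-slope * (d - ed50)))

      popt, _ = curve_fit(logistic4, dose, resp, p0=[0.0, 1.0, 5.0, 1.0])
      f_par = logistic4(dose, *popt)                     # parametric curve estimate

      def kernel_smooth(x, y, grid, bandwidth=0.8):      # nonparametric estimate
          w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
          return (w @ y) / w.sum(axis=1)

      f_np = kernel_smooth(dose, resp, dose)

      # Mixture: the better-fitting curve (smaller residual sum of squares) gets more weight.
      rss_par = np.sum((resp - f_par) ** 2)
      rss_np = np.sum((resp - f_np) ** 2)
      w_par = (1.0 / rss_par) / (1.0 / rss_par + 1.0 / rss_np)
      f_semi = w_par * f_par + (1.0 - w_par) * f_np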

  18. The algebraic collective model

    International Nuclear Information System (INIS)

    Rowe, D.J.; Turner, P.S.

    2005-01-01

    A recently proposed computationally tractable version of the Bohr collective model is developed to the extent that we are now justified in describing it as an algebraic collective model. The model has an SU(1,1)xSO(5) algebraic structure and a continuous set of exactly solvable limits. Moreover, it provides bases for mixed symmetry collective model calculations. However, unlike the standard realization of SU(1,1), used for computing beta wave functions and their matrix elements in a spherical basis, the algebraic collective model makes use of an SU(1,1) algebra that generates wave functions appropriate for deformed nuclei with intrinsic quadrupole moments ranging from zero to any large value. A previous paper focused on the SO(5) wave functions, as SO(5) (hyper-)spherical harmonics, and computation of their matrix elements. This paper gives analytical expressions for the beta matrix elements needed in applications of the model and illustrative results to show the remarkable gain in efficiency that is achieved by using such a basis in collective model calculations for deformed nuclei

  19. A Monte Carlo study comparing PIV, ULS and DWLS in the estimation of dichotomous confirmatory factor analysis.

    Science.gov (United States)

    Nestler, Steffen

    2013-02-01

    We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS. © 2012 The British Psychological Society.
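    The factorial design maps directly onto a nested simulation loop. A bare-bones skeleton is shown below; the data-generation and estimation step is a placeholder standing in for the PIV, ULS and DWLS fits of the dichotomous factor model, so only the bookkeeping of the 144 conditions is meant to be taken literally.

      import itertools
      import numpy as np

      rng = np.random.default_rng(1)
      models        = ["one_factor", "two_factor"]
      sample_sizes  = [100, 250, 500, 1000]
      loadings      = {"low": 0.4, "moderate": 0.6, "strong": 0.8}
      normality     = ["normal", "moderate_nonnormal", "extreme_nonnormal"]
      specification = ["correct", "misspecified"]
      n_reps = 1000

      def simulate_and_estimate(model, n, loading, dist, spec, rng):
          # Placeholder estimator: returns a noisy "loading estimate".
          return loading + rng.normal(0.0, 1.0 / np.sqrt(n))

      results = {}
      for cond in itertools.product(models, sample_sizes, loadings, normality, specification):
          model, n, load_label, dist, spec = cond
          true_loading = loadings[load_label]
          est = np.array([simulate_and_estimate(model, n, true_loading, dist, spec, rng)
                          for _ in range(n_reps)])
          results[cond] = {"bias": est.mean() - true_loading,
                           "rmse": np.sqrt(np.mean((est - true_loading) ** 2))}
      print(len(results))  # 2 * 4 * 3 * 3 * 2 = 144 conditions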

  20. Prediction of nitrogen and phosphorus leaching to groundwater and surface waters; process descriptions of the animo4.0 model

    NARCIS (Netherlands)

    Groenendijk, P.; Renaud, L.V.; Roelsma, J.

    2005-01-01

    The fertilization reduction policy intended to pursue environmental objectives and the regional water management strategies to meet Water Framework Directive objectives justify a thorough evaluation of the effectiveness of measures and a reconnaissance of adverse impacts. The model aims at the evaluation and

  1. Education, Occupation and Career Expectations: Determinants of the Gender Pay Gap for UK Graduates. CEE DP 69

    Science.gov (United States)

    Chevalier, Arnaud

    2006-01-01

    A large proportion of the gender wage gap is usually left unexplained. In this paper, we investigate whether the unexplained component is due to misspecification. Using a sample of recent UK graduates, we introduce variables on career expectations and character traits, variables that are typically not observed. The evidence indicates that women…

  2. A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth

    Science.gov (United States)

    Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.

    2018-04-01

    A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).

  3. Modeling cerebral blood flow during posture change from sitting to standing

    DEFF Research Database (Denmark)

    Ottesen, Johnny T.; Olufsen, M.; Tran, H.T.

    2004-01-01

    Hypertension, decreased cerebral blood flow, and diminished cerebral blood flow velocity regulation are among the first signs indicating the presence of cerebral vascular disease. In this paper, we will present a mathematical model that can predict blood flow and pressure during posture change from sitting to standing … extremities, the brain, and the heart. We use physiologically based control mechanisms to describe the regulation of cerebral blood flow velocity and arterial pressure in response to orthostatic hypotension resulting from postural change. To justify the fidelity of our mathematical model and control …

  4. Utility Function and Optimum Consumption in the models with Habit Formation and Catching up with the Joneses

    OpenAIRE

    Naryshkin, Roman; Davison, Matt

    2009-01-01

    This paper analyzes popular time-nonseparable utility functions that describe "habit formation" consumer preferences comparing current consumption with the time averaged past consumption of the same individual and "catching up with the Joneses" (CuJ) models comparing individual consumption with a cross-sectional average consumption level. Few of these models give reasonable optimum consumption time series. We introduce theoretically justified utility specifications leading to a plausible cons...

  5. Liquid-drop model applied to heavy ions irradiation

    International Nuclear Information System (INIS)

    De Cicco, Hernan; Alurralde, Martin A.; Saint-Martin, Maria L. G.; Bernaola, Omar A.

    1999-01-01

    The liquid-drop model, previously applied in the study of radiation damage in metals, is used in an energy range not covered by molecular dynamics in order to understand experimental data on particle tracks in an organic material (Makrofol E), which cannot be accurately described by existing theoretical methods. The nuclear and electronic energy depositions are considered for each ion, and the evolution of the thermal explosion is evaluated. The experimental observation of particle tracks in a region previously considered 'prohibited' is justified. Although the model has free parameters and some discrepancies with the experimental diameter values exist, the agreement obtained is far superior to that of other existing models. (author)

  6. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
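    The three ingredients named here - simulating a stochastic model, checking a behavioural specification, and searching parameter space - can be caricatured in a few lines. Everything below (the toy inflammation model, the property, the random search) is invented for illustration and is not the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate(decay, dose, rng, n_steps=50):
          """Toy stochastic model: an inflammation level driven by an initial dose, with noise."""
          x = np.empty(n_steps)
          x[0] = dose
          for t in range(1, n_steps):
              x[t] = max(0.0, (1.0 - decay) * x[t - 1] + rng.normal(0.0, 0.05))
          return x

      def satisfies(trajectory):
          """Behavioural specification: the response must peak early and then resolve."""
          return trajectory.max() > 0.8 and trajectory[-10:].max() < 0.1

      def satisfaction_probability(params, rng, n_runs=200):
          return np.mean([satisfies(simulate(params["decay"], params["dose"], rng))
                          for _ in range(n_runs)])

      # Stochastic search for parameters meeting the specification with high probability.
      best, best_prob = None, 0.0
      for _ in range(100):
          cand = {"decay": rng.uniform(0.05, 0.5), "dose": rng.uniform(0.5, 1.5)}
          prob = satisfaction_probability(cand, rng)
          if prob > best_prob:
              best, best_prob = cand, prob
      print(best, best_prob)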

  7. Homogenised constitutive model dedicated to reinforced concrete plates subjected to seismic solicitations

    International Nuclear Information System (INIS)

    Combescure, Christelle

    2013-01-01

    Safety reassessments are periodically performed on EDF nuclear power plants, and the recent seismic reassessments led to the necessity of taking into account the non-linear behaviour of materials when modeling and simulating industrial structures of these power plants under seismic solicitations. A large proportion of these infrastructures is composed of reinforced concrete buildings, including reinforced concrete slabs and walls, yet the literature on plate modeling dedicated to seismic applications for this material is sparse. As for the few existing models dedicated to these specific applications, they present either a lack of energy dissipation in the material behaviour, or no micromechanical approach that justifies the parameters needed to properly describe the model. In order to provide a constitutive model which better represents reinforced concrete plate behaviour under seismic loadings and whose parameters are easier to identify for the civil engineer, a constitutive model dedicated to reinforced concrete plates under seismic solicitations is proposed: the DHRC (Dissipative Homogenised Reinforced Concrete) model. Justified by a periodic homogenisation approach, this model includes two dissipative phenomena: damage of the concrete matrix and internal sliding at the interface between steel rebar and the surrounding concrete. An original coupling term between damage and sliding, resulting from the homogenisation process, induces a better representation of energy dissipation during material degradation. The model parameters are identified from the geometric characteristics of the plate and a restricted number of material characteristics, allowing a very simple use of the model. Numerical validations of the DHRC model are presented, showing good agreement with experimental behaviour. A one-dimensional simplification of the DHRC model is proposed, allowing the representation of reinforced concrete bars and simplified models of rods and wire mesh

  8. Plume and Dose Modeling Performed to Assess Waste Management Enhancements Associated with Envirocare's Decision to Purchase an Engineered Rail Rollover Facility Enclosure

    International Nuclear Information System (INIS)

    Rogers, T.; Clayman, B.

    2003-01-01

    This paper describes the modeling performed on a proposed enclosure for the existing railcar rollover facility located in Clive, Utah at a radioactive waste disposal site owned and operated by Envirocare of Utah, Inc. (Envirocare). The dose and plume modeling information was used as a tool to justify the decision to make the capital purchase and realize the modeled performance enhancements

  9. Animal models of pediatric chronic kidney disease. Is adenine intake an appropriate model?

    Directory of Open Access Journals (Sweden)

    Débora Claramunt

    2015-11-01

    Full Text Available Pediatric chronic kidney disease (CKD) has peculiar features. In particular, growth impairment is a major clinical manifestation of CKD debuting in pediatric age, because it presents in a large proportion of infants and children with CKD and has a profound impact on the self-esteem and social integration of the stunted patients. Several factors associated with CKD may lead to growth retardation by interfering with the normal physiology of the growth plate, the organ where longitudinal growth takes place. The study of the growth plate is hardly possible in humans, which justifies the use of animal models. Young rats made uremic by 5/6 nephrectomy have been widely used as a model to investigate growth retardation in CKD. This article examines the characteristics of this model and analyzes the utilization of CKD induced by a high-adenine diet as an alternative research protocol.

  10. Are multi-paddock grazing systems economically justifiable? | M.T. ...

    African Journals Online (AJOL)

    The financial implications of few- and multi-paddock systems were modelled by a discounted cash flow analysis with the (discounted) present value as the dependent variable, and number of paddocks, farm run-down time, time horizon and discount rate as the independent variables. Present values were higher for few- ...
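    The modelling approach described amounts to a standard discounted cash flow comparison. A minimal sketch follows, with entirely invented annual net cash flows (the actual figures in the study come from the grazing-system budgets); only the mechanics of discounting are the point.

      def present_value(cash_flows, discount_rate):
          """Discount a series of annual net cash flows back to year 0."""
          return sum(cf / (1.0 + discount_rate) ** t
                     for t, cf in enumerate(cash_flows, start=1))

      horizon = 20        # years in the planning horizon
      rate = 0.08         # discount rate

      # Hypothetical cash flows: the multi-paddock system pays more up front
      # (fencing, water points) in the hope of higher returns later.
      few_paddocks   = [120.0] * horizon
      multi_paddocks = [-300.0, -150.0] + [135.0] * (horizon - 2)

      print(round(present_value(few_paddocks, rate), 1))
      print(round(present_value(multi_paddocks, rate), 1))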

  11. A hybrid mammalian cell cycle model

    Directory of Open Access Journals (Sweden)

    Vincent Noël

    2013-08-01

    Full Text Available Hybrid modeling provides an effective solution to cope with multiple time scales dynamics in systems biology. Among the applications of this method, one of the most important is the cell cycle regulation. The machinery of the cell cycle, leading to cell division and proliferation, combines slow growth, spatio-temporal re-organisation of the cell, and rapid changes of regulatory proteins concentrations induced by post-translational modifications. The advancement through the cell cycle comprises a well defined sequence of stages, separated by checkpoint transitions. The combination of continuous and discrete changes justifies hybrid modelling approaches to cell cycle dynamics. We present a piecewise-smooth version of a mammalian cell cycle model, obtained by hybridization from a smooth biochemical model. The approximate hybridization scheme, leading to simplified reaction rates and binary event location functions, is based on learning from a training set of trajectories of the smooth model. We discuss several learning strategies for the parameters of the hybrid model.

  12. Modeling liquid hydrogen cavitating flow with the full cavitation model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X.B.; Qiu, L.M.; Qi, H.; Zhang, X.J.; Gan, Z.H. [Institute of Refrigeration and Cryogenic Engineering, Zhejiang University, Hangzhou 310027 (China)

    2008-12-15

    Cavitation is the formation of vapor bubbles within a liquid where flow dynamics cause the local static pressure to drop below the vapor pressure. This paper strives towards developing an effective computational strategy to simulate liquid hydrogen cavitation relevant to liquid rocket propulsion applications. The aims are realized by performing a steady state computational fluid dynamic (CFD) study of liquid hydrogen flow over a 2D hydrofoil and an axisymmetric ogive in Hord's reports with a so-called full cavitation model. The thermodynamic effect was demonstrated with the assumption of thermal equilibrium between the gas phase and liquid phase. Temperature-dependent fluid thermodynamic properties were specified along the saturation line from the ''Gaspak 3.2'' databank. Justifiable agreement between the computed surface pressure, temperature and experimental data of Hord was obtained. Specifically, a global sensitivity analysis is performed to examine the sensitivity of the turbulent computations to the wall grid resolution, wall treatments and changes in model parameters. A proper near-wall model and grid resolution were suggested. The full cavitation model with default model parameters provided solutions with comparable accuracy to sheet cavitation in liquid hydrogen for the two geometries. (author)

  13. Modeling density-driven flow in porous media principles, numerics, software

    CERN Document Server

    Holzbecher, Ekkehard O

    1998-01-01

    Modeling of flow and transport in groundwater has become an important focus of scientific research in recent years. Most contributions to this subject deal with flow situations where density and viscosity changes in the fluid are neglected. This restriction may not always be justified. The models presented in the book demonstrate impressively that the flow pattern may be completely different when density changes are taken into account. The main applications of the models are thermal and saline convection, geothermal flow, saltwater intrusion, flow through salt formations, etc. This book not only presents the basic theory; the reader can also test his knowledge by applying the included software and can set up his own models.

  14. Analysis of the K-epsilon turbulence model

    International Nuclear Information System (INIS)

    Mohammadi, B.; Pironneau, O.

    1993-12-01

    This book is aimed at applied mathematicians interested in numerical simulation of turbulent flows. The book is centered around the k - ε model but it also deals with other models such as subgrid scale models, one equation models and Reynolds Stress models. The reader is expected to have some knowledge of numerical methods for fluids and, if possible, some understanding of fluid mechanics, the partial differential equations used and their variational formulations. This book presents the k - ε method for turbulence in a language familiar to applied mathematicians, stripped bare of all the technicalities of turbulence theory. The model is justified from a mathematical standpoint rather than from a physical one. The numerical algorithms are investigated and some theoretical and numerical results presented. This book should prove an invaluable tool for those studying a subject that is still controversial but very useful for industrial applications. (authors). 71 figs., 200 refs

  15. Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection

    Science.gov (United States)

    Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo

    2011-01-01

    Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....

  16. Identification of Super Phenix steam generator by a simple polynomial model

    International Nuclear Information System (INIS)

    Rousseau, I.

    1981-01-01

    This note suggests a method of identification of the steam generator of the Super-Phenix fast neutron power plant by simple polynomial models. This approach is justified in the context of selecting an adaptive control scheme. The identification algorithms presented are applied to multivariable input-output behaviours. The results obtained with the representation in autoregressive form and with simple polynomial models are compared, and the effect of perturbations on the output signal is tested, in order to select a good identification algorithm for multivariable adaptive regulation [fr

  17. From the harmonic oscillator to the A-D-E classification of conformal models

    International Nuclear Information System (INIS)

    Itzykson, C.

    1988-01-01

    Arithmetical aspects of the solution of systems involving two-dimensional statistical models and conformal field theory are discussed. From this perspective, the analysis of the harmonic oscillator, the free particle in a box, and rational billiards is carried out. Moreover, the description of the classification of minimal conformal models and Wess-Zumino-Witten models, based on the simplest affine algebra, is also given. Attempts to interpret and justify the appearance of the A-D-E classification of algebras in the W-Z-W model are made. Extensions of the W-Z-W model, based on SU(N) at level one, and the ways to deal with rank-two Lie groups, using the arithmetic of quadratic integers, are described

  18. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    Directory of Open Access Journals (Sweden)

    Bakanauskienė Irena

    2017-12-01

    Full Text Available This article is intended to theoretically justify a decision-making process model for cases in which investing entities actively participate in controlling the activities of an organisation and their results. Based on an analysis of the scientific literature, a concept of controlled conditions is formulated and, using a rational approach to decision-making, an 11-step decision-making process model under controlled intervention is presented. Conditions describing the case of controlled interventions are also unified, providing preconditions to ensure the adequacy of the proposed decision-making process model.

  19. Using Financial Management Techniques within Public Sector Organizations, Does Result Control Matter? A Heterogeneous Choice Approach

    Directory of Open Access Journals (Sweden)

    Jan WYNEN

    2014-12-01

    Full Text Available Using a principal-agent framework and multi-country survey data of over 400 public sector organizations, this article examines the effect of result control on the use of financial management techniques in public sector organizations. In order to avoid invalid conclusions, we test for heteroskedasticity and model residual variance using a heterogeneous choice model. This model yields important insights into the effect of result control that would be overlooked in a mis-specified ordered logit model. Our findings reveal that result control matters, although size and primary task of the organization also prove to be determinants of the use of financial management techniques. Within the context of the continuous attempts being made to improve public sector performance, policy makers should thus develop different strategies for different (individual) agencies, while relying on a strong ex-post result control, when they want to stimulate the use of financial management techniques.
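    For reference, the heterogeneous (location-scale) ordered logit appealed to here lets covariates enter the residual scale as well as the location; in a standard textbook notation (not necessarily the paper's),

      P(y_i \le j \mid x_i, z_i) \;=\;
          \Lambda\!\left( \frac{\kappa_j - x_i'\beta}{\exp(z_i'\gamma)} \right),
          \qquad j = 1, \dots, J-1,

    where \Lambda is the logistic CDF, \kappa_j are the thresholds, \beta the location coefficients and \gamma the variance-equation coefficients; the ordinary ordered logit is recovered when \gamma = 0.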

  20. Model instruments of effective segmentation of the fast food market

    OpenAIRE

    Mityaeva Tetyana L.

    2013-01-01

    The article presents the results of step-type optimisation calculations of the economic effectiveness of fast food promotion, with consideration of key parameters for assessing the efficiency of the segmentation marketing strategy. The article justifies the development of a mathematical model on the basis of 3D presentations and a three-dimensional system of management variables. Modern applied mathematical packages allow formation not only of one-dimensional and two-dimensional arrays and analyse ...

  1. Mesoscopic and continuum modelling of angiogenesis

    KAUST Repository

    Spill, F.; Guerrero, P.; Alarcon, T.; Maini, P. K.; Byrne, H. M.

    2014-01-01

    Angiogenesis is the formation of new blood vessels from pre-existing ones in response to chemical signals secreted by, for example, a wound or a tumour. In this paper, we propose a mesoscopic lattice-based model of angiogenesis, in which processes that include proliferation and cell movement are considered as stochastic events. By studying the dependence of the model on the lattice spacing and the number of cells involved, we are able to derive the deterministic continuum limit of our equations and compare it to similar existing models of angiogenesis. We further identify conditions under which the use of continuum models is justified, and others for which stochastic or discrete effects dominate. We also compare different stochastic models for the movement of endothelial tip cells which have the same macroscopic, deterministic behaviour, but lead to markedly different behaviour in terms of production of new vessel cells. © 2014 Springer-Verlag Berlin Heidelberg.
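    A toy stochastic lattice simulation gives a feel for the mesoscopic description (tip cells hopping between sites and occasionally branching, leaving vessel cells behind). The rates and the one-dimensional lattice below are invented for illustration; the model in the paper is substantially richer.

      import numpy as np

      rng = np.random.default_rng(3)

      n_sites, n_steps = 100, 200
      p_move, p_branch = 0.6, 0.02        # per-tip, per-step probabilities (illustrative)
      tips = [0]                          # tip cells start at the parent vessel
      vessel = np.zeros(n_sites, dtype=int)
      vessel[0] = 1

      for _ in range(n_steps):
          new_tips = []
          for site in tips:
              if rng.random() < p_move and site + 1 < n_sites:
                  site += 1               # biased hop towards the chemotactic signal
                  vessel[site] += 1       # the sprout elongates behind the tip
              if rng.random() < p_branch:
                  new_tips.append(site)   # branching creates an extra tip
              new_tips.append(site)
          tips = new_tips

      print(len(tips), int(vessel.sum()))  # number of tips and of vessel cells formed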

  3. Mathematical modelling of plant transients in the PWR for simulator purposes

    International Nuclear Information System (INIS)

    Hartel, K.

    1984-01-01

    This chapter presents the results of the testing of anticipated and abnormal plant transients in pressurized water reactors (PWRs) of the type WWER 440 by means of the numerical simulation of 32 different, stationary and nonstationary, operational regimes. Topics considered include the formation of the PWR mathematical model, the physical approximation of the reactor core, the structure of the reactor core model, a mathematical approximation of the reactor model, the selection of numerical methods, and a computerized simulation system. The necessity of a PWR simulator in Czechoslovakia is justified by the present status and the outlook for the further development of the Czechoslovak nuclear power complex

  4. Portfolio Management with Stochastic Interest Rates and Inflation Ambiguity

    DEFF Research Database (Denmark)

    Munk, Claus; Rubtsov, Alexey Vladimirovich

    We solve a stock-bond-cash portfolio choice problem for a risk- and ambiguity-averse investor in a setting where the inflation rate and interest rates are stochastic. The expected inflation rate is unobservable, but the investor may learn about it from realized inflation and observed stock and bond...... prices. The investor is aware that his model for the observed inflation is potentially misspecified, and he seeks an investment strategy that maximizes his expected utility from real terminal wealth and is also robust to inflation model misspecification. We solve the corresponding robust Hamilton......-Jacobi-Bellman equation in closed form and derive and illustrate a number of interesting properties of the solution. For example, ambiguity aversion affects the optimal portfolio through the correlation of price level with the stock index, a bond, and the expected inflation rate. Furthermore, unlike other settings...

  5. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
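    To make the mechanics concrete: under the homogeneous Poisson assumption, an elicited homogenisation factor rescales each pooled event's exposure so that its frequency is comparable to the target event, after which a routine gamma-Poisson shrinkage can be applied. The sketch below (method-of-moments prior fit, invented numbers) illustrates that workflow rather than the estimator derived in the paper.

      import numpy as np

      # Pooled events: observed counts, exposures and elicited homogenisation factors h
      # (h > 1 means the pooled event is intrinsically more frequent than the target event).
      counts   = np.array([3, 0, 5, 1, 2])
      exposure = np.array([10.0, 4.0, 20.0, 8.0, 6.0])   # e.g. system-years
      h        = np.array([1.0, 0.5, 2.0, 1.0, 0.8])

      rates = counts / (exposure * h)      # homogenised event frequencies

      # Method-of-moments gamma prior (shape a, rate b) fitted to the homogenised pool.
      m, v = rates.mean(), rates.var(ddof=1)
      b = m / v
      a = m * b

      # Empirical Bayes (posterior mean) estimate for the target event, taken here to be
      # the first event in the pool, evaluated on its own unscaled exposure.
      eb_rate = (a + counts[0]) / (b + exposure[0])
      print(round(eb_rate, 4))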

  6. Real Time Updating in Distributed Urban Rainfall Runoff Modelling

    DEFF Research Database (Denmark)

    Borup, Morten; Madsen, Henrik

    that are being updated from system measurements was studied. The results showed that the fact alone that it takes time for rainfall data to travel the distance between gauges and catchments has such a big negative effect on the forecast skill of updated models, that it can justify the choice of even very...... as in a real data case study. The results confirmed that the method is indeed suitable for DUDMs and that it can be used to utilise upstream as well as downstream water level and flow observations to improve model estimates and forecasts. Due to upper and lower sensor limits many sensors in urban drainage...

  7. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    Science.gov (United States)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converters were derived. Derivations of state-space equations and transfer functions are presented. Computer listings for the converters are included, and the input parameters are justified.

  8. Selected bibliography on the modeling and control of plant processes

    Science.gov (United States)

    Viswanathan, M. M.; Julich, P. M.

    1972-01-01

    A bibliography of information pertinent to the problem of simulating plants is presented. Detailed simulations of constituent pieces are necessary to justify simple models which may be used for analysis. Thus, this area of study is necessary to support the Earth Resources Program. The report sums up the present state of the problem of simulating vegetation. This area holds the hope of major benefits to mankind through understanding the ecology of a region and in improving agricultural yield.

  9. Overdeepening development in a glacial landscape evolution model with quarrying

    DEFF Research Database (Denmark)

    Ugelvig, Sofie Vej; Egholm, D.L.; Iverson, Neal R.

    In glacial landscape evolution models, subglacial erosion rates are often related to basal sliding or ice discharge by a power-law. This relation can be justified when considering bed abrasion, where rock debris transported in the basal ice drives erosion. However, the relation is not well... supported when considering models for quarrying of rock blocks from the bed. Field observations indicate that the principal mechanism of glacial erosion is quarrying, which emphasizes the importance of a better way of implementing erosion by quarrying in glacial landscape evolution models. Iverson (2012... around the obstacles. The erosion rate is quantified by considering the likelihood of rock fracturing on topographic bumps. The model includes a statistical treatment of the bedrock weakness, which is neglected in previous quarrying models. Sliding rate, effective pressure, and average bedslope...

  10. Validity of the electrical model representation of the effects of nuclear magnetic resonance (1961); Validite de la representation par modele electrique des effets de resonance magnetique nucleaire (1961)

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, G [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1961-07-01

    When studying the behaviour of a magnetic resonance transducer formed by the association of an electrical network and of a set of nuclear spins, it is possible to bring about a representation that is analytically equivalent by means of an entirely electrical model, available for transients as well as steady-state. A detailed study of the validity conditions justifies its use in most cases. Also proposed is a linearity criterion of Bloch's equations in transient state that is simply the prolongation of the well-known condition of non-saturation in the steady-state. (author)

  11. A Model Based on Cocitation for Web Information Retrieval

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2014-01-01

    Full Text Available According to the relationship between authority and cocitation in HITS, we propose a new hyperlink weighting scheme to describe the strength of the relevancy between any two webpages. Then we combine hyperlink weight normalization and random surfing schemes as used in PageRank to justify the new model. In the new model based on cocitation (MBCC, the pages with stronger relevancy are assigned higher values, not just depending on the outlinks. This model combines both features of HITS and PageRank. Finally, we present the results of some numerical experiments, showing that the MBCC ranking agrees with the HITS ranking, especially in top 10. Meanwhile, MBCC keeps the superiority of PageRank, that is, existence and uniqueness of ranking vectors.
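    As an illustration of combining a cocitation-derived link weight with PageRank-style normalisation and random surfing, the sketch below reinforces a link i→j by how often j is co-cited with i's other link targets, normalises the weights, and runs power iteration. The weighting rule is a plausible stand-in rather than the exact MBCC definition, and the four-page graph is invented.

      import numpy as np

      # Toy web graph: A[i, j] = 1 if page i links to page j.
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1],
                    [0, 1, 0, 0]], dtype=float)
      n = A.shape[0]
      C = A.T @ A          # cocitation counts: number of pages citing both j and k

      # Link weight: base weight plus the cocitation of j with i's other targets.
      W = np.zeros_like(A)
      for i in range(n):
          for j in range(n):
              if A[i, j]:
                  W[i, j] = 1.0 + sum(C[j, k] for k in range(n) if A[i, k] and k != j)

      out = W.sum(axis=1, keepdims=True)
      P = W / out                          # row-stochastic (no dangling pages in this toy graph)
      d = 0.85                             # random-surfing (damping) factor
      r = np.full(n, 1.0 / n)
      for _ in range(100):
          r = (1 - d) / n + d * (P.T @ r)  # PageRank-style iteration on the new weights
      print(r / r.sum())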

  12. Two independent pivotal statistics that test location and misspecification and add-up to the Anderson-Rubin statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2002-01-01

    We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests

  13. Can Defense Spending Be Justified during a Period of Continual Peace?

    Science.gov (United States)

    1991-06-07

    although this was clearly unsatisfactory from a strictly theoretical perspective. Revealed Preference is a technique used to explain consumer behavior ... insurgencies therefore a case of irrational behavior? In behavioral sciences, it is usually tempting to assume away deviations from the prediction of a model ... as irrational behavior or an inadequacy of the model. Rationality is axiomatic. All nation-states always act according to what they perceive (as

  14. Mechanical Impedance Modeling of Human Arm: A survey

    Science.gov (United States)

    Puzi, A. Ahmad; Sidek, S. N.; Sado, F.

    2017-03-01

    Human arm mechanical impedance plays a vital role in describing the motion ability of the upper limb. One of the impedance parameters is stiffness, which is defined as the ratio of an applied force to the measured deformation of the muscle. Modeling the arm's mechanical impedance is useful for developing better controllers for systems that interact with humans, such as an automated robot-assisted platform for automated rehabilitation training. The aim of the survey is to summarize existing mechanical impedance models of the human upper limb so as to justify the need for an improved arm model that facilitates the development of better controllers for such systems as their complexity increases. In particular, the paper addresses the following issues: human motor control and motor learning, constant and variable impedance models, methods for measuring mechanical impedance, and mechanical impedance modeling techniques.

  15. Designing adaptive intensive interventions using methods from engineering.

    Science.gov (United States)

    Lagoa, Constantino M; Bekiroglu, Korkut; Lanza, Stephanie T; Murphy, Susan A

    2014-10-01

    Adaptive intensive interventions are introduced, and new methods from the field of control engineering for use in their design are illustrated. A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PsycINFO Database Record (c) 2014 APA, all rights reserved.
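    A caricature of what such a design produces: a simple assumed model of the proximal outcome and a proportional-integral rule that raises treatment intensity only when the outcome drifts from its target. The dynamics, gains and limits below are invented and stand in for the system identification and controller design steps described in the article.

      import numpy as np

      rng = np.random.default_rng(4)

      a, b = 0.8, 0.5        # assumed outcome dynamics: y[t+1] = a*y[t] + b*u[t] + noise
      target = 2.0           # desired level of the proximal outcome
      kp, ki = 0.8, 0.2      # proportional and integral gains
      u_max = 3.0            # maximum feasible treatment intensity

      y, u, integral = 0.0, 0.0, 0.0
      for t in range(60):
          y = a * y + b * u + rng.normal(0.0, 0.1)                     # participant response
          error = target - y
          integral += error
          u = float(np.clip(kp * error + ki * integral, 0.0, u_max))   # treat only as needed
      print(round(y, 2), round(u, 2))      # outcome near target with modest treatment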

  16. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  17. Local yield stress statistics in model amorphous solids

    Science.gov (United States)

    Barbot, Armand; Lerbinger, Matthias; Hernandez-Garcia, Anier; García-García, Reinaldo; Falk, Michael L.; Vandembroucq, Damien; Patinet, Sylvain

    2018-03-01

    We develop and extend a method presented by Patinet, Vandembroucq, and Falk [Phys. Rev. Lett. 117, 045501 (2016), 10.1103/PhysRevLett.117.045501] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a nonperturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.

  18. The frequency of Tay-Sachs disease causing mutations in the Brazilian Jewish population justifies a carrier screening program.

    Science.gov (United States)

    Rozenberg, R; Pereira, L da V

    2001-07-05

    Tay-Sachs disease is an autosomal recessive disease characterized by progressive neurologic degeneration, fatal in early childhood. In the Ashkenazi Jewish population the disease incidence is about 1 in every 3,500 newborns and the carrier frequency is 1 in every 29 individuals. Carrier screening programs for Tay-Sachs disease have reduced disease incidence by 90% in high-risk populations in several countries. The Brazilian Jewish population is estimated at 90,000 individuals. Currently, there is no screening program for Tay-Sachs disease in this population. To evaluate the importance of a Tay-Sachs disease carrier screening program in the Brazilian Jewish population by determining the frequency of heterozygotes and the acceptance of the program by the community. Laboratory of Molecular Genetics--Institute of Biosciences--Universidade de São Paulo. 581 senior students from selected Jewish high schools. Molecular analysis of Tay-Sachs disease causing mutations by PCR amplification of genomic DNA, followed by restriction enzyme digestion. Among 581 students that attended educational classes, 404 (70%) elected to be tested for Tay-Sachs disease mutations. Of these, approximately 65% were of Ashkenazi Jewish origin. Eight carriers were detected corresponding to a carrier frequency of 1 in every 33 individuals in the Ashkenazi Jewish fraction of the sample. The frequency of Tay-Sachs disease carriers among the Ashkenazi Jewish population of Brazil is similar to that of other countries where carrier screening programs have led to a significant decrease in disease incidence. Therefore, it is justifiable to implement a Tay-Sachs disease carrier screening program for the Brazilian Jewish population.

  19. Marriage duration and divorce: the seven-year itch or a lifelong itch?

    Science.gov (United States)

    Kulu, Hill

    2014-06-01

    Previous studies have shown that the risk of divorce is low during the first months of marriage; it then increases, reaches a maximum, and thereafter begins to decline. Some researchers consider this pattern consistent with the notion of a "seven-year itch," while others argue that the rising-falling pattern of divorce risk is a consequence of misspecification of longitudinal models because of omitted covariates or unobserved heterogeneity. The aim of this study is to investigate the causes of the rising-falling pattern of divorce risk. Using register data from Finland and applying multilevel hazard models, the analysis supports the rising-falling pattern of divorce by marriage duration: the risk of marital dissolution increases, reaches its peak, and then gradually declines. This pattern persists when I control for the sociodemographic characteristics of women and their partners. The inclusion of unobserved heterogeneity in the model leads to some changes in the shape of the baseline risk; however, the rising-falling pattern of the divorce risk persists.

  20. Robust estimation of the correlation matrix of longitudinal data

    KAUST Repository

    Maadooliat, Mehdi

    2011-09-23

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = D L Lᵀ D, where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L, D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form L D² Lᵀ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
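    Both factorisations mentioned in the abstract can be obtained from the ordinary Cholesky factor by pulling out its diagonal. In the numerical check below, D is the diagonal of the Cholesky factor rather than the marginal standard deviations used in the paper's parameterisation, so this is a structural illustration of the two forms, not the paper's estimator.

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(200, 4))
      Sigma = np.cov(X, rowvar=False) + 0.5 * np.eye(4)   # a positive-definite example

      C = np.linalg.cholesky(Sigma)       # Sigma = C C^T, C lower triangular
      D = np.diag(np.diag(C))             # diagonal of the Cholesky factor
      Dinv = np.linalg.inv(D)

      L_alt = Dinv @ C                    # unit lower triangular (rows scaled)
      L_mod = C @ Dinv                    # unit lower triangular (columns scaled)

      print(np.allclose(Sigma, D @ L_alt @ L_alt.T @ D))   # alternative form: D L L^T D
      print(np.allclose(Sigma, L_mod @ D @ D @ L_mod.T))   # modified Cholesky: L D^2 L^T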

  1. Is nuclear energy justifiable?

    International Nuclear Information System (INIS)

    Roth, E.

    1988-01-01

    This is a comment on an article by Prof. Haerle, a theologian, published earlier under the same heading, in which the use of nuclear energy is rejected for ethical reasons. The comment contests the claim made by the first author that theologians, because they have general ethical competency, must also have the competency to decide on the fittest technique (of energy conversion) for satisfying, or potentially satisfying, the criteria of responsible action. Thus, an ethical comment on, for instance, nuclear energy is beyond the scope of the competency of the churches. One is only entitled to object to nuclear energy as a private person, not on the basis of one's position in the church. (HSCH) [de

  2. The British Model in Britain: Failing slowly

    International Nuclear Information System (INIS)

    Thomas, Steve

    2006-01-01

    In 1990, Britain reorganised its electricity industry to run on competitive lines. The British reforms are widely regarded as successful and the model used provides the basis for reforms of electricity industries worldwide. The main reason for this perception of success is major reductions in the real price of electricity with no reduction in service quality. This paper examines whether the reputation of the British reforms is justified. It concludes that the reputation is not justified and that serious fundamental problems are beginning to emerge. The central question is: have the British reforms resulted in the creation of efficient wholesale and retail markets? On this criterion, the reforms have failed. The wholesale market is dominated by obscure long-term contracts, privileged access to the market and self-dealing within integrated generator/retailers, leaving the spot markets with minimal liquidity and unreliable prices. The failure to develop an efficient wholesale market places the onus on consumers to impose competitive forces on electricity companies by switching regularly. Small consumers will not do this and they are paying too much for their power. For the future, there is a serious risk that the electricity industry will become a weakly regulated oligopoly with a veneer of competition

  3. Improving practical atmospheric dispersion models

    International Nuclear Information System (INIS)

    Hunt, J.C.R.; Hudson, B.; Thomson, D.J.

    1992-01-01

    The new generation of practical atmospheric dispersion models (for short range, ≤ 30 km) is based on dispersion science and boundary layer meteorology which have widespread international acceptance. In addition, recent improvements in computing and the widespread availability of small powerful computers make it possible to have new regulatory models which are more complex than the previous generation, which was based on charts and simple formulae. This paper describes the basis of these models and how they have developed. Such models are needed to satisfy the urgent public demand for sound, justifiable and consistent environmental decisions. For example, it is preferable that the same models are used to simulate dispersion in different industries; in many countries at present different models are used for emissions from nuclear and fossil fuel power stations. The models should not be so simple as to be suspect, but neither should they be too complex for widespread use; for example, at public inquiries in Germany, where simple models are mandatory, it is becoming usual to cite the results from highly complex computational models because the simple models are not credible. This paper is written in a schematic style with an emphasis on tables and diagrams. (au) (22 refs.)
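    For orientation, the 'charts and simple formulae' of the previous generation essentially evaluate the Gaussian plume solution. A minimal implementation of that textbook formula is given below, with illustrative dispersion parameters; the newer models described here replace such fixed sigma values with boundary-layer-based parameterisations.

      import numpy as np

      def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
          """Ground-reflecting Gaussian plume concentration at crosswind distance y and
          height z, for source strength Q, wind speed u and effective release height H.
          sigma_y and sigma_z normally grow with downwind distance and stability class."""
          lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
          vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                      np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
          return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      # Illustrative values roughly 1 km downwind of a 50 m stack.
      print(gaussian_plume(y=0.0, z=0.0, Q=10.0, u=5.0, H=50.0, sigma_y=80.0, sigma_z=40.0))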

  4. Whole-Volume Clustering of Time Series Data from Zebrafish Brain Calcium Images via Mixture Modeling.

    Science.gov (United States)

    Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L

    2018-02-01

    Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
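    In its simplest form the clustering step amounts to fitting a Gaussian mixture to the per-voxel time series (or to coefficients derived from them). A minimal scikit-learn sketch on synthetic traces is shown below; the paper's pipeline adds the functional-data representation and operates at the scale of whole imaging volumes.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      n_voxels, n_timepoints = 300, 120
      t = np.linspace(0.0, 4.0 * np.pi, n_timepoints)

      # Synthetic calcium traces drawn from three activity patterns plus noise.
      patterns = np.vstack([np.sin(t), np.sin(2.0 * t), np.zeros_like(t)])
      labels_true = rng.integers(0, 3, n_voxels)
      traces = patterns[labels_true] + rng.normal(0.0, 0.3, (n_voxels, n_timepoints))

      gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
      labels_hat = gmm.fit_predict(traces)
      print(np.bincount(labels_hat))       # voxels assigned to each functional cluster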

  5. An interpretation of the behavior of EoS/GE models for asymmetric systems

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Panayiotis, Vlamos

    2000-01-01

    or zero pressure or at other conditions (system's pressure, constant volume packing fraction). In a number of publications over recent years, the achievements and the shortcomings of the various EoS/GE models have been presented via phase equilibrium calculations. This short communication provides... an explanation of the behaviour of several literature EoS/GE models, especially those based on a zero reference pressure (PSRK, MHV1, MHV2), in the prediction of phase equilibria for asymmetric systems, as well as an interpretation of the LCVM and kappa-MHV1 models, which provide an empirical - yet, as shown here, theoretically... justified - solution to these problems. (C) 2000 Elsevier Science Ltd. All rights reserved.

  6. Justified requirements in private transportation and a recommendation for improving the efficiency of household energy utilisation through the use of small ecologically-friendly or 'ultralight' vehicles for mass private transportation in the 21st century

    International Nuclear Information System (INIS)

    Juravic, T.

    1999-01-01

    Needs and ownership are sociobiologically manifested in the alter-ego of a Homo sapiens where the natural progression of events (a household being the fundamental microlevel) and the social order, i.e. globalisation, are based on ownership and needs as sacred rights, and for this reason universal values like energy conservation end up as the waste of the mindless worship of consumption. Justified needs are phenomena of a consumerist (egocentric, pragmatic, voluntary) social conscience and instinctive behaviour - an unpredictable cause resulting from freedom being the foundation of the quality of life, socio-economic and political changes but are mutually exclusive to understanding (expressing and gaining deeper and richer knowledge). Inbuilt limits and/or control of consumption, which are already used in household appliances with aforeset processes (goals) for unknown consumers, to achieve large energy savings in 'routine' functions are more effective than attempts to prevent mistakes (lack of user knowledge through repression). A private vehicle, as a symbol of the freedom and quality of life, is a mechanism for achieving 'justified' needs and presents another means of household energy utilisation. The consumer's desires regarding private transportation are not sufficiently reconciled with intelligent microprocessors (expert systems), which achieve (the most) optimal behaviour in the process of transportation. This detailed consideration (as part of investigating the technical system) cannot be examined on a strictly logical or scientific basis, as it only proposes a method of co-agreement (not co-reponsability) of manufacturers and consumers and an alternative logical way of thinking, or organisation of the interaction between vehicles and traffic in order to form a judgement of really justifiable needs, and to achieve a robotic private vehicle, transportation and traffic. The goal of this consideration is to establish the DIVISION of energy with the help of

  7. Outcome and survival of patients aged 75 years and older compared to younger patients after ruptured abdominal aortic aneurysm repair: do the results justify the effort?

    DEFF Research Database (Denmark)

    Shahidi, S; Schroeder, T Veith; Carstensen, M.

    2009-01-01

    We evaluated early mortality and preoperative variables that may be predictive of 30-day mortality in elderly patients compared to younger patients after emergency open repair of ruptured abdominal aortic aneurysm (RAAA). The survey is a retrospective analysis based... patients compared to the younger group. Between the survivors of the two groups, there were no significant differences in the total length of stay (LOS) and the LOS in the intensive care unit. Advanced age (≥75) and the combination of this advanced age and serum creatinine of ≥0.150 mmol/L were... the only significant preoperative risk factors in our single-center study. However, we believe that treatment for RAAA can be justified in elderly patients. In our experience, surgical open repair has been life-saving in 33% of patients aged 75 years and older, at a relatively low price for each...

  8. On the validity of evolutionary models with site-specific parameters.

    Directory of Open Access Journals (Sweden)

    Konrad Scheffler

    Full Text Available Evolutionary models that make use of site-specific parameters have recently been criticized on the grounds that parameter estimates obtained under such models can be unreliable and lack theoretical guarantees of convergence. We present a simulation study providing empirical evidence that a simple version of the models in question does exhibit sensible convergence behavior and that additional taxa, despite not being independent of each other, lead to improved parameter estimates. Although it would be desirable to have theoretical guarantees of this, we argue that such guarantees would not be sufficient to justify the use of these models in practice. Instead, we emphasize the importance of taking the variance of parameter estimates into account rather than blindly trusting point estimates - this is standardly done by using the models to construct statistical hypothesis tests, which are then validated empirically via simulation studies.

  9. Modeling in biopharmaceutics, pharmacokinetics, and pharmacodynamics homogeneous and heterogeneous approaches

    CERN Document Server

    Macheras, Panos

    2006-01-01

    The state of the art in Biopharmaceutics, Pharmacokinetics, and Pharmacodynamics Modeling is presented in this book. It shows how advanced physical and mathematical methods can expand classical models in order to cover heterogeneous drug-biological processes and therapeutic effects in the body. The book is divided into four parts; the first deals with the fundamental principles of fractals, diffusion and nonlinear dynamics; the second with drug dissolution, release, and absorption; the third with empirical, compartmental, and stochastic pharmacokinetic models, and the fourth mainly with nonclassical aspects of pharmacodynamics. The classical models that have relevance and application to these sciences are also considered throughout. Many examples are used to illustrate the intrinsic complexity of drug administration related phenomena in the human, justifying the use of advanced modeling methods. This timely and useful book will appeal to graduate students and researchers in pharmacology, pharmaceutical scienc...

  10. There are calls for a national screening programme for prostate cancer: what is the evidence to justify such a national screening programme?

    Science.gov (United States)

    Green, A; Tait, C; Aboumarzouk, O; Somani, B K; Cohen, N P

    2013-05-01

    Prostate cancer is the commonest cancer in men and a major health issue worldwide. Screening for early disease has been available for many years, but there is still no national screening programme established in the United Kingdom. To assess the latest evidence regarding prostate cancer screening and whether it meets the necessary requirements to be established as a national programme for all men. Electronic databases and library catalogues were searched electronically and manual retrieval was performed. Only primary research results were used for the analysis. In recent years, several important randomised controlled trials have produced varied outcomes. In Europe the largest study thus far concluded that screening reduced prostate cancer mortality by 20%. On the contrary, a large American trial found no reduction in mortality after 7-10 years follow-up. Most studies comment on the adverse effects of screening - principally those of overdiagnosis and subsequent overtreatment. Further information about the natural history of prostate cancer and accuracy of screening is needed before a screening programme can be truly justified. In the interim, doctors and patients should discuss the risks, benefits and sequelae of taking part in voluntary screening for prostate cancer.

  11. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  12. Family Life Cycle and Deforestation in Amazonia: Combining Remotely Sensed Information with Primary Data

    Science.gov (United States)

    Caldas, M.; Walker, R. T.; Shirota, R.; Perz, S.; Skole, D.

    2003-01-01

    This paper examines the relationships between the socio-demographic characteristics of small settlers in the Brazilian Amazon and the life cycle hypothesis in the process of deforestation. The analysis combined remote sensing and geographic data with primary data on 153 small settlers along the TransAmazon Highway. Regression analyses and spatial autocorrelation tests were conducted. The results from the empirical model indicate that the socio-demographic characteristics of households, as well as institutional and market factors, affect the land-use decision. Although remotely sensed information is not widely used by Brazilian social scientists, these results confirm that it can be very useful for this kind of study. Furthermore, the research presented in this paper strongly indicates that models omitting family and socio-demographic data, as well as market data, may suffer from misspecification problems. The same applies to models that do not incorporate spatial analysis.
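
    As a rough illustration of the spatial autocorrelation tests mentioned above, the sketch below computes Moran's I for regression residuals over lot centroids; the coordinates, residuals, and inverse-distance weights are invented for illustration and do not reproduce the paper's analysis.

        # Sketch of a Moran's I spatial autocorrelation check on model residuals.
        # Coordinates, residuals, and the weight matrix are all illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 80
        coords = rng.random((n, 2)) * 10.0      # lot centroids (km, illustrative)
        resid = rng.normal(size=n)              # regression residuals (stand-in)

        # Inverse-distance spatial weights, zero on the diagonal.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        W = np.zeros_like(d)
        mask = d > 0
        W[mask] = 1.0 / d[mask]

        z = resid - resid.mean()
        moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
        print(f"Moran's I: {moran_I:.3f} "
              f"(expected ~ {-1.0 / (n - 1):.3f} under no autocorrelation)")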

  13. Modeling of the water gap in BWR fuel elements using SCALE/TRITON; Modellierung des Wasserspalts bei SWR-BE mit SCALE/TRITON

    Energy Technology Data Exchange (ETDEWEB)

    Tittelbach, S.; Chernykh, M. [WTI Wissenschaftlich-Technische Ingenieurberatung GmbH, Juelich (Germany)

    2012-11-01

    The authors show that an adequate modeling of the water gap in BWR fuel element models using the code TRITON requires an explicit consideration of the Dancoff factors. The analysis of three modeling options reveals that considering the moderating effects of the water gap coolant for the peripheral fuel elements the resulting deviations of the U-235 and Pu-239 concentrations are significantly reduced. The increased temporal calculation efforts are justified with respect to the burnup credits for criticality safety analyses.

  14. Modelling of the behaviour of a UF_6 container in a fire

    International Nuclear Information System (INIS)

    Pinton, Eric

    1996-01-01

    This thesis is justified by the safety needs about storage and transport of UF_6 containers. To define their behaviour under fire conditions, a modelling was developed. Before tackling the numerical modelling, a phenomenological interpretation with experimental results of containers inside a furnace (800 C) during a fixed period was carried out. The internal heat transfers were considerably improved with these results. The 2D elaborated model takes into account most of the physical phenomena encountered in this type of situation (boiling, evaporation, condensation, radiant heat transfers through an absorbing gas, convection, pressurisation, thermal contact resistance, UF_6 expansion, solid core sinking in the liquid, elastic and plastic deformations of the steel container). This model was successfully confronted with experiments. (author) [fr

  15. A new model of wheezing severity in young children using the validated ISAAC wheezing module: A latent variable approach with validation in independent cohorts.

    Science.gov (United States)

    Brunwasser, Steven M; Gebretsadik, Tebeb; Gold, Diane R; Turi, Kedir N; Stone, Cosby A; Datta, Soma; Gern, James E; Hartert, Tina V

    2018-01-01

    The International Study of Asthma and Allergies in Children (ISAAC) Wheezing Module is commonly used to characterize pediatric asthma in epidemiological studies, including nearly all airway cohorts participating in the Environmental Influences on Child Health Outcomes (ECHO) consortium. However, there is no consensus model for operationalizing wheezing severity with this instrument in explanatory research studies. Severity is typically measured using coarsely-defined categorical variables, reducing power and potentially underestimating etiological associations. More precise measurement approaches could improve testing of etiological theories of wheezing illness. We evaluated a continuous latent variable model of pediatric wheezing severity based on four ISAAC Wheezing Module items. Analyses included subgroups of children from three independent cohorts whose parents reported past wheezing: infants ages 0-2 in the INSPIRE birth cohort study (Cohort 1; n = 657), 6-7-year-old North American children from Phase One of the ISAAC study (Cohort 2; n = 2,765), and 5-6-year-old children in the EHAAS birth cohort study (Cohort 3; n = 102). Models were estimated using structural equation modeling. In all cohorts, covariance patterns implied by the latent variable model were consistent with the observed data, as indicated by non-significant χ2 goodness of fit tests (no evidence of model misspecification). Cohort 1 analyses showed that the latent factor structure was stable across time points and child sexes. In both cohorts 1 and 3, the latent wheezing severity variable was prospectively associated with wheeze-related clinical outcomes, including physician asthma diagnosis, acute corticosteroid use, and wheeze-related outpatient medical visits when adjusting for confounders. We developed an easily applicable continuous latent variable model of pediatric wheezing severity based on items from the well-validated ISAAC Wheezing Module. This model prospectively associates with
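
    The following sketch illustrates the general latent-variable idea (a single continuous severity factor underlying several wheeze items) on simulated data; it uses scikit-learn's maximum-likelihood factor analysis as a stand-in rather than the structural equation models fitted in the study, and the item loadings and data are assumed.

        # Minimal sketch (not the authors' SEM): a one-factor latent severity score
        # from four hypothetical ISAAC-style wheeze items using maximum-likelihood
        # factor analysis. Item loadings and data are illustrative only.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        n = 500
        severity = rng.normal(size=n)                      # unobserved latent trait
        loadings = np.array([0.8, 0.7, 0.6, 0.5])          # assumed loadings
        items = severity[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

        fa = FactorAnalysis(n_components=1, random_state=0)
        scores = fa.fit_transform(items)                   # estimated latent severity
        print("estimated loadings:", fa.components_.round(2))
        print("corr(score, true severity):",
              np.corrcoef(scores[:, 0], severity)[0, 1].round(2))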

  16. Causal Models and Exploratory Analysis in Heterogeneous Information Fusion for Detecting Potential Terrorists

    Science.gov (United States)

    2015-11-01

    independent. The PFT model is deliberately not that of a rational actor doing cost-benefit calculations. Real individuals are affected by emotions ...use the TLWS and PF methods discussed earlier. Our quasi-Bayesian method is “quasi” because we used heuristic methods to determine the weight given...are often justified heuristically on a case-by-case basis. One way to think about the structural issues around which we had to design is to think of

  17. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis in order to evaluate the model's sensitivity to each parameter, determine how many of these parameters should be considered further, and rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of the parameters to which the WAVEWATCH III model is most sensitive, comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal was to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, as judged against the field observations at the two buoys.
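
    The sketch below illustrates only the general calibration idea (searching parameter values to minimize the misfit against buoy observations) with a toy stand-in model; the parameter names, bounds, error metric, and plain random search are illustrative and are not the ARS procedure or the actual WAVEWATCH III source-term parameters.

        # Sketch of random-search calibration of model parameters against buoy data.
        # The wave model itself is replaced by a toy function; parameters, bounds,
        # and the error metric are illustrative, not those of WAVEWATCH III.
        import numpy as np

        rng = np.random.default_rng(1)
        observed = rng.normal(loc=2.0, scale=0.3, size=50)   # stand-in buoy Hs series

        def run_model(params):
            """Stand-in for a wave-model run returning a simulated Hs series."""
            a, b = params
            t = np.linspace(0, 1, observed.size)
            return a + b * np.sin(2 * np.pi * t)

        def rmse(params):
            return np.sqrt(np.mean((run_model(params) - observed) ** 2))

        bounds = np.array([[0.5, 3.0],     # bounds for parameter a
                           [0.0, 1.0]])    # bounds for parameter b
        best_p, best_err = None, np.inf
        for _ in range(2000):              # simple random search over the bounds
            p = bounds[:, 0] + rng.random(2) * (bounds[:, 1] - bounds[:, 0])
            e = rmse(p)
            if e < best_err:
                best_p, best_err = p, e
        print("best parameters:", best_p.round(3), "RMSE:", round(best_err, 3))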

  18. The frequency of Tay-Sachs disease causing mutations in the Brazilian Jewish population justifies a carrier screening program

    Directory of Open Access Journals (Sweden)

    Roberto Rozenberg

    Full Text Available CONTEXT: Tay-Sachs disease is an autosomal recessive disease characterized by progressive neurologic degeneration, fatal in early childhood. In the Ashkenazi Jewish population the disease incidence is about 1 in every 3,500 newborns and the carrier frequency is 1 in every 29 individuals. Carrier screening programs for Tay-Sachs disease have reduced disease incidence by 90% in high-risk populations in several countries. The Brazilian Jewish population is estimated at 90,000 individuals. Currently, there is no screening program for Tay-Sachs disease in this population. OBJECTIVE: To evaluate the importance of a Tay-Sachs disease carrier screening program in the Brazilian Jewish population by determining the frequency of heterozygotes and the acceptance of the program by the community. SETTING: Laboratory of Molecular Genetics - Institute of Biosciences - Universidade de São Paulo. PARTICIPANTS: 581 senior students from selected Jewish high schools. PROCEDURE: Molecular analysis of Tay-Sachs disease causing mutations by PCR amplification of genomic DNA, followed by restriction enzyme digestion. RESULTS: Among 581 students that attended educational classes, 404 (70% elected to be tested for Tay-Sachs disease mutations. Of these, approximately 65% were of Ashkenazi Jewish origin. Eight carriers were detected corresponding to a carrier frequency of 1 in every 33 individuals in the Ashkenazi Jewish fraction of the sample. CONCLUSION: The frequency of Tay-Sachs disease carriers among the Ashkenazi Jewish population of Brazil is similar to that of other countries where carrier screening programs have led to a significant decrease in disease incidence. Therefore, it is justifiable to implement a Tay-Sachs disease carrier screening program for the Brazilian Jewish population.
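
    As a rough check of the reported figure, assuming all eight carriers fell within the roughly 65% Ashkenazi fraction of the 404 tested students:

        # Rough check of the reported carrier frequency: 8 carriers were found and
        # about 65% of the 404 tested students were of Ashkenazi origin.
        tested = 404
        ashkenazi_fraction = 0.65
        carriers = 8
        ashkenazi_tested = tested * ashkenazi_fraction           # ~263 individuals
        frequency = carriers / ashkenazi_tested                  # ~0.03
        print(f"about 1 in {round(1 / frequency)} individuals")  # ~1 in 33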

  19. GARUSO - Version 1.0. Uncertainty model for multipath ultrasonic transit time gas flow meters

    Energy Technology Data Exchange (ETDEWEB)

    Lunde, Per; Froeysa, Kjell-Eivind; Vestrheim, Magne

    1997-09-01

    This report describes an uncertainty model for ultrasonic transit time gas flow meters configured with parallel chords, and a PC program, GARUSO Version 1.0, implemented for calculation of the meter's relative expanded uncertainty. The program, which is based on the theoretical uncertainty model, is used to carry out a simplified and limited uncertainty analysis for a 12" 4-path meter, where examples of input and output uncertainties are given. The model predicts a relative expanded uncertainty for the meter at a level which further justifies today's increasing tendency to use this type of instrument for fiscal metering of natural gas. 52 refs., 15 figs., 11 tabs.
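
    The GARUSO model itself is not reproduced here; the sketch below only illustrates the generic GUM-style calculation behind a relative expanded uncertainty, combining assumed contribution values in quadrature and applying a coverage factor of 2.

        # Generic GUM-style sketch (not the GARUSO model itself): combine assumed
        # relative standard uncertainty contributions in quadrature and report a
        # relative expanded uncertainty with coverage factor k = 2.
        import math

        contributions = {            # illustrative values, percent of flow rate
            "transit-time measurement": 0.15,
            "path geometry": 0.10,
            "integration method": 0.20,
            "speed-of-sound / flow profile": 0.12,
        }
        u_combined = math.sqrt(sum(u ** 2 for u in contributions.values()))
        k = 2.0                       # coverage factor (~95% confidence)
        print(f"combined standard uncertainty: {u_combined:.2f} %")
        print(f"expanded uncertainty (k=2):    {k * u_combined:.2f} %")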

  20. IMPORTANCE OF DIFFERENT MODELS IN DECISION MAKING, EXPLAINING THE STRATEGIC BEHAVIOR IN ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Cristiano de Oliveira Maciel

    2006-11-01

    Full Text Available This study examines different models of the decision process as lenses for analyzing organizational strategy. The article presents strategy from a cognitive approach. The discussion covers three models of the decision process: the rational actor model, the organizational behavior model, and the political model. These models emphasize, respectively, improving decision outcomes, searching for a good decision within the cognitive limits of the administrator, and extensive negotiation in reaching a decision. According to the emphasis of each model, the possibilities for analyzing strategy are presented. The article also shows that all three ways of analysis need to be taken into account, a point justified by the fact that analysis and decision making become more complex, especially for the decisions that matter most to organizations.

  1. Two dimensional Hall MHD modeling of a plasma opening switch with density inhomogeneities

    Energy Technology Data Exchange (ETDEWEB)

    Zabaidullin, O [Kurchatov Institute, Moscow (Russian Federation); Chuvatin, A; Etlicher, B [Ecole Polytechnique, Palaiseau (France). Laboratoire de Physique des Milieux Ionises

    1997-12-31

    The results of two-dimensional numerical modeling of the Plasma Opening Switch in the MHD framework with Hall effect are presented. An enhanced Hall diffusion coefficient was used in the simulations. Recent experiments justify the application of this approach. The modeling results also correlate better with experiment than those obtained with the classical diffusion coefficient. Numerically generated pictures suggest a switching scenario in which the transition between the conduction and opening phases can be explained by an abrupt 'switching on' and subsequent domination of the Hall effect at the end of the conduction phase. (author). 3 figs., 6 refs.

  2. Discretization-dependent model for weakly connected excitable media

    Science.gov (United States)

    Arroyo, Pedro André; Alonso, Sergio; Weber dos Santos, Rodrigo

    2018-03-01

    Pattern formation has been widely observed in extended chemical and biological processes. Although the biochemical systems are highly heterogeneous, homogenized continuum approaches formed by partial differential equations have been employed frequently. Such approaches are usually justified by the difference of scales between the heterogeneities and the characteristic spatial size of the patterns. Under different conditions, for example, under weak coupling, discrete models are more adequate. However, discrete models may be less manageable, for instance, in terms of numerical implementation and mesh generation, than the associated continuum models. Here we study a model to approach discreteness which permits the computer implementation on general unstructured meshes. The model is cast as a partial differential equation but with a parameter that depends not only on heterogeneities sizes, as in the case of quasicontinuum models, but also on the discretization mesh. Therefore, we refer to it as a discretization-dependent model. We validate the approach in a generic excitable media that simulates three different phenomena: the propagation of action membrane potential in cardiac tissue, in myelinated axons of neurons, and concentration waves in chemical microemulsions.

  3. On turbulence models for rod bundle flow computations

    International Nuclear Information System (INIS)

    Hazi, Gabor

    2005-01-01

    Commercial computational fluid dynamics codes have more than one turbulence model built in, and it is the user's responsibility to choose the model suited to the problem studied. In the last decade, several computations using computational fluid dynamics were presented for the simulation of various problems in the nuclear industry. A common feature of a number of those simulations is that they were performed using the standard k-ε turbulence model without justifying the choice of model, and the simulation results were rarely satisfactory. In this paper, we consider the flow in a fuel rod bundle as a case study and discuss why the standard k-ε model fails to give reasonable results in this situation. We also show that a turbulence model based on the Reynolds stress transport equations can provide qualitatively correct results. Our aim is largely pedagogical: we would like to call the reader's attention to the fact that turbulence models have to be selected based on theoretical considerations and/or adequate information obtained from measurements.

  4. Validation of image quality in full-field digital mammography: Is the replacement of wet by dry laser printers justified?

    International Nuclear Information System (INIS)

    Schueller, Gerd; Kaindl, Elisabeth; Langenberger, Herbert; Stadler, Alfred; Schueller-Weidekamm, Claudia; Semturs, Friedrich; Helbich, Thomas H.

    2007-01-01

    Objective: Dry laser printers have replaced wet laser printers to produce hard copies of high-resolution digital images, primarily because of environmental concerns. However, no scientific research data have been published that compare the image quality of dry and wet laser printers in full-field digital mammography (FFDM). This study questions the image quality of these printers. Materials and methods: Objective image quality parameters of both printers were evaluated using a standardized printer test image, i.e., optical density and detectability of specific image elements (lines, curves, and shapes). Furthermore, mammograms of 129 patients with different breast tissue composition patterns were imaged with both printers. A total of 1806 subjective image quality parameters (brightness, contrast, and detail detection of anatomic structures), the detectability of breast lesions, as well as diagnostic performance according to the BI-RADS classification were evaluated. In addition, the presence of film artifacts was investigated. Results: Optical density values were equal for the dry and the wet laser printer. Detection of specific image elements on the printer test image was not different. Ratings of subjective image quality parameters were equal, as were the detectability of breast lesions and the diagnostic performance. Dry laser printer images showed more artifacts (164 versus 27). However, these artifacts did not influence image quality. Conclusion: Based on the evidence of objective and subjective parameters, a dry laser printer equals the image quality of a wet laser printer in FFDM. Therefore, not only for reasons of environmental preference, the replacement of wet laser printers by dry laser printers in FFDM is justified

  5. Beyond the standard of care: a new model to judge medical negligence.

    Science.gov (United States)

    Brenner, Lawrence H; Brenner, Alison Tytell; Awerbuch, Eric J; Horwitz, Daniel

    2012-05-01

    The term "standard of care" has been used in law and medicine to determine whether medical care is negligent. However, the precise meaning of this concept is often unclear for both medical and legal professionals. Our purposes are to (1) examine the limitations of using standard of care as a measure of negligence, (2) propose the use of the legal concepts of justification and excuse in developing a new model of examining medical conduct, and (3) outline the framework of this model. We applied the principles of tort liability set forth in the clinical and legal literature to describe the difficulty in applying standard of care in medical negligence cases. Using the concepts of justification and excuse, we propose a judicial model that may promote fair and just jury verdicts in medical negligence cases. Contrary to conventional understanding, medical negligence is not simply nonconformity to norms. Two additional concepts of legal liability, ie, justification and excuse, must also be considered to properly judge medical conduct. Medical conduct is justified when the benefits outweigh the risks; the law sanctions the conduct and encourages future conduct under similar circumstances. Excuse, on the other hand, relieves a doctor of legal liability under specific circumstances even though his/her conduct was not justified. Standard of care is an inaccurate measure of medical negligence because it is premised on the faulty notion of conformity to norms. An alternative judicial model to determine medical negligence would (1) eliminate standard of care in medical malpractice law, (2) reframe the court instruction to jurors, and (3) establish an ongoing consensus committee on orthopaedic principles of negligence.

  6. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  7. Satisfaction with quality of ICU care for patients and families

    DEFF Research Database (Denmark)

    Jensen, Hanne Irene; Gerritsen, Rik T; Koopmans, Matty

    2017-01-01

    as reflective indicators was supported by analysis of a factor representing satisfaction with communication, measured with a combination of causal and reflective indicators. CONCLUSIONS: Most family members were moderately or very satisfied with patient care, family care, information and decision-making...... in and support during decision-making processes. Exploratory factor analysis suggested four underlying factors, but confirmatory factor analysis failed to yield a multi-factor model with between-country measurement invariance. A hypothesis that this failure was due to misspecification of causal indicators......BACKGROUND: Families' perspectives are of great importance in evaluating quality of care in the intensive care unit (ICU). This Danish-Dutch study tested a European adaptation of the "Family Satisfaction in the ICU" (euroFS-ICU). The aim of the study was to examine assessments of satisfaction...

  8. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  9. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
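
    The sketch below illustrates only the taper-weighting idea, in which pairs of sites farther apart than the taper range receive zero weight in the composite log-likelihood; the bivariate max-stable log-density is replaced by a placeholder function, and the sites and data are simulated.

        # Sketch of the taper-weighting idea only: pairs of sites farther apart than
        # the taper range get zero weight in the composite log-likelihood. The true
        # max-stable bivariate log-density is replaced by a placeholder function.
        import numpy as np
        from itertools import combinations

        def pairwise_logdens(y_i, y_j, theta):
            """Placeholder for a bivariate max-stable log-density (illustrative)."""
            return -0.5 * ((y_i - theta) ** 2 + (y_j - theta) ** 2)

        def tapered_composite_loglik(theta, data, coords, taper_range):
            total = 0.0
            for i, j in combinations(range(len(data)), 2):
                dist = np.linalg.norm(coords[i] - coords[j])
                weight = 1.0 if dist <= taper_range else 0.0   # binary taper weight
                if weight:
                    total += weight * pairwise_logdens(data[i], data[j], theta)
            return total

        rng = np.random.default_rng(2)
        coords = rng.random((30, 2)) * 100.0     # 30 sites in a 100 x 100 km window
        data = rng.gumbel(loc=10.0, size=30)     # one annual maximum per site
        print(tapered_composite_loglik(theta=10.0, data=data,
                                       coords=coords, taper_range=25.0))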

  10. A likely-universal model of fracture density and scaling justified by both data and theory. Consequences for crustal hydro-mechanics

    Science.gov (United States)

    Davy, P.; Darcel, C.; Le Goc, R.; Bour, O.

    2011-12-01

    We discuss the parameters that control fracture density on the Earth. We argue that most fracture systems are spatially organized according to two main regimes. The smallest fractures can grow independently of each other, defining a "dilute" regime controlled by the nuclei occurrence rate and the individual fracture growth law. Above a certain length, fractures stop growing due to mechanical interactions between fractures. For this "dense" regime, we derive the fracture density distribution by acknowledging that, statistically, fractures do not cross a larger one. This very crude rule, which expresses the inhibiting role of large fractures against smaller ones but not the reverse, actually appears to be a very strong control on the eventual fracture density distribution since it results in a self-similar distribution whose exponents and density term are fully determined by the fractal dimension D and a dimensionless parameter γ that encompasses the details of fracture correlations and orientations. The range of values for D and γ appears to be extremely limited, which makes this model quite universal. This theory is supported by quantitative data on either fault or joint networks. The transition between the dilute and dense regimes occurs at about a few tenths of kilometers for fault systems, and a few meters for joints. This remarkable difference between the two processes is likely due to a large-scale control (localization) of the fracture growth for faulting that does not exist for jointing. Finally, we discuss the consequences of this model on both flow and mechanical properties. In the dense regime, networks appear to be very close to a critical state.

  11. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the need to add knowledge and to choose certain paradigms, and it completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  12. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the need to add knowledge and to choose certain paradigms, and it completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
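
    The sketch below integrates a standard Balloon-type hemodynamic model (a Buxton/Friston-style formulation assumed here rather than taken from the paper) under a blocked-design stimulus, using typical literature parameter values.

        # Hedged sketch of a standard Balloon-type hemodynamic model (Buxton/Friston
        # form), driven by a blocked-design stimulus and integrated with Euler steps.
        # Parameter values are typical literature values, not those of the paper.
        import numpy as np

        eps, tau_s, tau_f, tau_0, alpha, E0, V0 = 0.5, 0.8, 0.4, 1.0, 0.32, 0.4, 0.04
        k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2

        dt, T = 0.01, 60.0
        t = np.arange(0.0, T, dt)
        u = ((t % 20) < 10).astype(float)        # blocked design: 10 s on, 10 s off

        s, f, v, q = 0.0, 1.0, 1.0, 1.0
        bold = np.empty_like(t)
        for i, ui in enumerate(u):
            ds = eps * ui - s / tau_s - (f - 1.0) / tau_f
            df = s
            dv = (f - v ** (1.0 / alpha)) / tau_0
            dq = (f * (1.0 - (1.0 - E0) ** (1.0 / f)) / E0
                  - v ** (1.0 / alpha) * q / v) / tau_0
            s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
            bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))

        print("peak BOLD change (arbitrary units):", bold.max().round(4))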

  13. DSNP models used in the pebble-bed HTGR dynamic simulation. V.2

    International Nuclear Information System (INIS)

    Saphier, D.

    1984-04-01

    A detailed description is given of the components that were used in the DSNP simulation of the PNP-500 high temperature gas-cooled pebble-bed reactor. For each component presented in this report, the mathematical model used and the assumptions made in developing it are described in detail. Most of the models were developed from basic physical principles, with simplifications that could be justified on the basis of the required accuracy. Most of the models are either one-dimensional or lumped-parameter models. The heat transfer and flow correlations, which are mostly semiempirical, were either provided by KFA or adapted from the available literature. A short description of DSNP is also given, with a comprehensive list of all the statements available in Rev. 4.1 of DSNP. (H.K.)

  14. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
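
    The paper's specific retention model is not reproduced here; the sketch below uses a simple exponential forgetting curve, with assumed constants, to show how a gap between work sessions might be translated into a scheduling decision.

        # Minimal sketch, not the paper's model: exponential forgetting of code
        # knowledge over a work gap, with a threshold used to flag when extra
        # re-familiarisation time should be scheduled. All constants are assumed.
        import math

        def retention(gap_days, stability_days=12.0):
            """Fraction of task knowledge retained after a gap (Ebbinghaus-style)."""
            return math.exp(-gap_days / stability_days)

        threshold = 0.6                      # below this, plan extra ramp-up time
        for gap in (1, 3, 7, 14, 30):
            r = retention(gap)
            flag = "schedule refresher" if r < threshold else "ok"
            print(f"gap {gap:>2} days -> retention {r:.2f}  {flag}")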

  15. Demand for food products in Finland: A demand system approach

    Directory of Open Access Journals (Sweden)

    Ilkka P. Laurila

    1994-07-01

    Full Text Available The study was concerned with the estimation of food-demand parameters in a system context. The patterns of food consumption in Finland were presented over the period 1950-1991, and a complete demand system of food expenditures was estimated. Price and expenditure elasticities of demand were derived, and the results were used to obtain projections of future consumption. While the real expenditure on food has increased, the budget share of food has decreased. In the early 1950s, combined Food-at-Home and Food-away-from-Home corresponded to about 40% of consumers' total expenditure. In 1991 the share was 28%. There was a shift to meals eaten outside the home. While the budget share of Food-away-from-Home increased from 3% to 7% over the observation period, Food-at-Home fell from 37% to 21%, and Food-at-Home excluding Alcoholic Drinks fell from 34% to 16%. Within Food-at-Home, the budget shares of the broad aggregate groups, Animalia (food from animal sources), Beverages, and Vegetablia (food from vegetable sources), remained about the same over the four decades, while structural change took place within the aggregates. Within Animalia, consumption shifted from Dairy Products (other than Fresh Milk) to Meat and Fish. Within Beverages, consumption shifted from Fresh Milk and Hot Drinks to Alcoholic Drinks and Soft Drinks. Within Vegetablia, consumption shifted from Flour to Fruits, while the shares of Bread and Cake and Vegetables remained about the same. As the complete demand system, the Almost Ideal Demand System (AIDS) was employed. The conventional AIDS was extended by developing a dynamic generalisation of the model and allowing for systematic shifts in structural relationships over time. A four-stage budgeting system was specified, consisting of seven sub-systems (groups) and covering 18 food categories. Tests of parameter restrictions and misspecification tests were used to choose the most preferred model specification for each group. Generally
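
    The budget-share equation of the AIDS model can be sketched as follows, using a Stone price index (the linear-approximate form, a common simplification) and illustrative coefficients rather than the values estimated for Finland.

        # Minimal sketch of the AIDS budget-share equation with a Stone price index:
        #   w_i = alpha_i + sum_j gamma_ij * ln p_j + beta_i * ln(x / P).
        # Coefficients and prices are illustrative, not the estimated Finnish values.
        import numpy as np

        alpha = np.array([0.40, 0.35, 0.25])     # Animalia, Vegetablia, Beverages
        beta = np.array([-0.05, 0.02, 0.03])
        gamma = np.array([[ 0.04, -0.02, -0.02],
                          [-0.02,  0.03, -0.01],
                          [-0.02, -0.01,  0.03]])  # symmetric, columns sum to 0

        prices = np.array([1.10, 0.95, 1.20])
        expenditure = 100.0
        w0 = np.array([0.45, 0.30, 0.25])        # base shares for the Stone index
        log_P = np.dot(w0, np.log(prices))       # Stone price index
        shares = alpha + gamma @ np.log(prices) + beta * (np.log(expenditure) - log_P)
        print("predicted budget shares:", shares.round(3),
              "sum:", shares.sum().round(3))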

  16. ANALYSIS MODEL FOR INVENTORY MANAGEMENT

    Directory of Open Access Journals (Sweden)

    CAMELIA BURJA

    2010-01-01

    Full Text Available Inventory is an essential component of an enterprise's assets, and economic analysis gives it special importance because its accurate management determines the achievement of the activity's objectives and the financial results. Efficient inventory management requires ensuring an optimum inventory level, which guarantees the normal functioning of the activity with minimal inventory expenses and immobilised funds. The paper presents an analysis model for inventory management based on inventory rotation speed and its correlation with sales volume, illustrated in a suitable case study. Highlighting the factors that influence efficient inventory management provides the information needed to justify managerial decisions, leading to a balanced financial position and increased company performance.
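
    A small worked example (with assumed figures) of the rotation-speed indicators the model builds on:

        # Illustration with assumed figures: inventory turnover (rotation speed)
        # and the corresponding days of inventory on hand.
        sales = 1_200_000.0                 # annual sales (monetary units)
        opening_inventory = 180_000.0
        closing_inventory = 220_000.0

        average_inventory = (opening_inventory + closing_inventory) / 2
        turnover = sales / average_inventory          # rotations per year
        days_on_hand = 365 / turnover                 # average days inventory is held
        print(f"turnover: {turnover:.1f} rotations/year, "
              f"days on hand: {days_on_hand:.0f} days")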

  17. QUASI-STATIC MODEL OF MAGNETICALLY COLLIMATED JETS AND RADIO LOBES. II. JET STRUCTURE AND STABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Colgate, Stirling A.; Li, Hui [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Fowler, T. Kenneth [University of California, Berkeley, CA 94720 (United States); Hooper, E. Bickford [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); McClenaghan, Joseph; Lin, Zhihong [University of California, Irvine, CA 92697 (United States)

    2015-11-10

    This is the second in a series of companion papers showing that when an efficient dynamo can be maintained by accretion disks around supermassive black holes in active galactic nuclei, it can lead to the formation of a powerful, magnetically driven, and mediated helix that could explain both the observed radio jet/lobe structures and ultimately the enormous power inferred from the observed ultrahigh-energy cosmic rays. In the first paper, we showed self-consistently that minimizing viscous dissipation in the disk naturally leads to jets of maximum power with boundary conditions known to yield jets as a low-density, magnetically collimated tower, consistent with observational constraints of wire-like currents at distances far from the black hole. In this paper we show that these magnetic towers remain collimated as they grow in length at nonrelativistic velocities. Differences with relativistic jet models are explained by three-dimensional magnetic structures derived from a detailed examination of stability properties of the tower model, including a broad diffuse pinch with current profiles predicted by a detailed jet solution outside the collimated central column treated as an electric circuit. We justify our model in part by the derived jet dimensions in reasonable agreement with observations. Using these jet properties, we also discuss the implications for relativistic particle acceleration in nonrelativistically moving jets. The appendices justify the low jet densities yielding our results and speculate how to reconcile our nonrelativistic treatment with general relativistic MHD simulations.

  18. Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology

    Science.gov (United States)

    Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.

    2007-05-01

    calibration of mechanistic hydrological models, making their properties more transparent. It also helps to highlight possible mis-specification problems, if these are identified. The results of the exercise show that the two modelling methodologies have good synergy; combining well to produce a complete joint modelling approach that has the kinds of checks-and-balances required in practical data-based modelling of rainfall-flow systems. Such a combined approach also produces models that are suitable for different kinds of application. As such, the DBM model considered in the paper is developed specifically as a vehicle for flow and flood forecasting (although the generality of DBM modelling means that a simulation version of the model could be developed if required); while TOPMODEL, suitably calibrated (and perhaps modified) in the light of the DBM and GSA results, immediately provides a simulation model with a variety of potential applications, in areas such as catchment management and planning.

  19. On the Formal Modeling of Games of Language and Adversarial Argumentation : A Logic-Based Artificial Intelligence Approach

    OpenAIRE

    Eriksson Lundström, Jenny S. Z.

    2009-01-01

    Argumentation is a highly dynamical and dialectical process drawing on human cognition. Successful argumentation is ubiquitous to human interaction. Comprehensive formal modeling and analysis of argumentation presupposes a dynamical approach to the following phenomena: the deductive logic notion, the dialectical notion and the cognitive notion of justified belief. For each step of an argumentation these phenomena form networks of rules which determine the propositions to be allowed to make se...

  20. Progressive diseases study using Markov's multiple-stage models

    Directory of Open Access Journals (Sweden)

    René Iral Palomino, Esp estadística

    2005-12-01

    Full Text Available Risk factors and their degree of association with a progressive disease, such as Alzheimer's disease or liver cancer, can be identified by using epidemiological models; some examples of these models include logistic and Poisson regression, log-linear, linear regression, and mixed models. Using models that take into account not only the different health states that a person could experience between visits but also his/her characteristics (i.e., age, gender, genetic traits, etc.) seems to be reasonable and justified. In this paper we discuss a methodology to estimate the effect of covariates that could be associated with a disease when its progression or regression can be idealized by means of a multi-state model that incorporates the longitudinal nature of the data. This method is based on the Markov property and it is illustrated using simulated data about Alzheimer's disease. Finally, the merits and limitations of this method are discussed.
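
    The sketch below propagates a simple progressive three-state Markov model over yearly visits with illustrative transition probabilities; in the methodology discussed above, covariate effects (age, gender, genetic traits) would typically enter through a regression on the transition intensities.

        # Minimal sketch of a progressive three-state Markov model (healthy ->
        # mild impairment -> dementia) propagated over yearly visits. Transition
        # probabilities are illustrative only.
        import numpy as np

        P = np.array([[0.90, 0.08, 0.02],    # from healthy
                      [0.00, 0.85, 0.15],    # from mild impairment (no recovery)
                      [0.00, 0.00, 1.00]])   # dementia is absorbing

        state = np.array([1.0, 0.0, 0.0])    # cohort starts healthy
        for year in range(1, 6):
            state = state @ P                # Markov property: next visit depends
            print(year, state.round(3))      # only on the current state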

  1. On the empirical relevance of the transient in opinion models

    International Nuclear Information System (INIS)

    Banisch, Sven; Araujo, Tanya

    2010-01-01

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows to relate transient opinion configurations to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.

  2. On the empirical relevance of the transient in opinion models

    Energy Technology Data Exchange (ETDEWEB)

    Banisch, Sven, E-mail: sven.banisch@universecity.d [Mathematical Physics, Physics Department, Bielefeld University, 33501 Bielefeld (Germany); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal); Araujo, Tanya, E-mail: tanya@iseg.utl.p [Research Unit on Complexity in Economics (UECE), ISEG, TULisbon, 1249-078 Lisbon (Portugal); Institute for Complexity Science (ICC), 1249-078 Lisbon (Portugal)

    2010-07-12

    While the number and variety of models to explain opinion exchange dynamics is huge, attempts to justify the model results using empirical data are relatively rare. As linking to real data is essential for establishing model credibility, this Letter develops an empirical confirmation experiment by which an opinion model is related to real election data. The model is based on a representation of opinions as a vector of k bits. Individuals interact according to the principle that similarity leads to interaction and interaction leads to still more similarity. In the comparison to real data we concentrate on the transient opinion profiles that form during the dynamic process. An artificial election procedure is introduced which allows to relate transient opinion configurations to the electoral performance of candidates for which data are available. The election procedure based on the well-established principle of proximity voting is repeatedly performed during the transient period and remarkable statistical agreement with the empirical data is observed.
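
    The interaction rule described above can be sketched as follows; the agent count, bit-string length, and update scheme are illustrative, and the artificial election procedure and comparison with election data are not reproduced.

        # Minimal sketch of the rule "similarity leads to interaction and interaction
        # leads to still more similarity": opinions are k-bit vectors, agents interact
        # with probability proportional to their similarity, and an interaction copies
        # one differing bit. Sizes and step count are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        n_agents, k, steps = 200, 8, 20_000
        opinions = rng.integers(0, 2, size=(n_agents, k))

        for _ in range(steps):
            i, j = rng.choice(n_agents, size=2, replace=False)
            similarity = np.mean(opinions[i] == opinions[j])
            if similarity < 1.0 and rng.random() < similarity:
                differing = np.flatnonzero(opinions[i] != opinions[j])
                bit = rng.choice(differing)
                opinions[i, bit] = opinions[j, bit]        # agent i adopts j's bit

        profiles, counts = np.unique(opinions, axis=0, return_counts=True)
        print("distinct opinion profiles remaining:", len(profiles))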

  3. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  4. A mathematical model of crevice and pitting corrosion

    International Nuclear Information System (INIS)

    Sharland, S.M.; Tasker, P.W.

    1985-09-01

    A predictive and self-consistent mathematical model incorporating the electrochemical, chemical and ionic migration processes characterising crevice and pitting corrosion is described. The model predicts full details of the steady-state solution chemistry and electrode kinetics (and hence metal penetration rates) within the corrosion cavities as functions of the many parameters on which these depend such as external electrode potential and crevice dimensions. The crevice is modelled as a parallel-sided slot filled with a dilute sodium chloride solution. Corrosion in both one and two directions is considered. The model includes a solid hydroxide precipitation reaction and assesses the effect on the corrosion rates of consequent changes in the chemical and physical environment within the crevice. A time stepping method is developed for the study of the progression of the corrosion with a precipitation reaction included and is applied to a restricted range of parameters. The applicability of this method is justified in relation to the physical and mathematical approximations made during the construction of the model. (author)

  5. Data driven propulsion system weight prediction model

    Science.gov (United States)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance driven project the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust levels, a model is required that would allow discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of the engines weight as a function of various component performance parameters. This was considered a reasonable level to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
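
    The basic statistical idea can be sketched as a log-log regression of weight on thrust (a power-law weight model); the data points below are invented for illustration, and the study's actual model added component performance parameters to discriminate between engines of equal thrust.

        # Sketch of the basic idea: regress log(weight) on log(thrust) to get a
        # power-law weight model, W ~ a * T^b. Data points are made up for
        # illustration only.
        import numpy as np

        thrust = np.array([100, 200, 400, 800, 1600, 3200.0])       # kN (illustrative)
        weight = np.array([900, 1600, 2900, 5200, 9500, 17500.0])   # kg (illustrative)

        b, log_a = np.polyfit(np.log(thrust), np.log(weight), deg=1)
        a = np.exp(log_a)
        print(f"W ~ {a:.1f} * T^{b:.2f}")
        predicted = a * 1000.0 ** b          # predicted weight for a 1000 kN engine
        print(f"predicted weight at 1000 kN: {predicted:.0f} kg")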

  6. Linear versus quadratic portfolio optimization model with transaction cost

    Science.gov (United States)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the model that best fulfils their goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocations and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that the results of this paper will effectively justify the advantage of one model over the other and shed some light on the quest to find the best decision-making tool in investment for individual investors.
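
    The sketch below illustrates a Markowitz-style mean-variance objective with proportional transaction costs deducted from expected return; the returns are simulated, the cost rate and risk aversion are assumed, and the constraints are simplified, so this is not the paper's exact formulation or its Bursa Malaysia data.

        # Sketch of a quadratic (Markowitz-style) objective with proportional
        # transaction costs. Returns are simulated; cost rate, risk aversion, and
        # the fully-invested long-only constraints are assumptions for illustration.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        returns = rng.normal(0.001, 0.01, size=(250, 5))    # 250 days, 5 securities
        mu, cov = returns.mean(axis=0), np.cov(returns.T)
        w_current = np.full(5, 0.2)                          # existing holdings
        cost_rate, risk_aversion = 0.002, 5.0                # assumed values

        def objective(w):
            trade_cost = cost_rate * np.sum(np.abs(w - w_current))
            return -(mu @ w - trade_cost) + risk_aversion * (w @ cov @ w)

        res = minimize(objective, w_current, method="SLSQP",
                       bounds=[(0.0, 1.0)] * 5,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        print("optimal weights:", res.x.round(3))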

  7. Data-driven outbreak forecasting with a simple nonlinear growth model.

    Science.gov (United States)

    Lega, Joceline; Brown, Heidi E

    2016-12-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
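
    EpiGro itself is not reproduced here; the sketch below shows the simpler, related exercise of fitting a logistic growth curve to synthetic cumulative case counts to recover an outbreak's approximate final size, growth rate, and peak timing.

        # Not the EpiGro method itself: a simple sketch of fitting a logistic growth
        # curve to cumulative case counts to estimate final size (K), growth rate (r),
        # and peak timing (t0). The data below are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            return K / (1.0 + np.exp(-r * (t - t0)))

        t = np.arange(0, 60.0)
        true = logistic(t, K=5000.0, r=0.25, t0=30.0)
        rng = np.random.default_rng(5)
        cases = np.clip(true + rng.normal(scale=50.0, size=t.size), 0, None)

        (K, r, t0), _ = curve_fit(logistic, t, cases, p0=[cases[-1] * 2, 0.1, 25.0])
        print(f"estimated final size ~{K:.0f}, growth rate {r:.2f}/day, "
              f"incidence peak near day {t0:.0f}")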

  8. Model instruments of effective segmentation of the fast food market

    Directory of Open Access Journals (Sweden)

    Mityaeva Tetyana L.

    2013-03-01

    Full Text Available The article presents results of optimisation step-type calculations of economic effectiveness of promotion of fast food with consideration of key parameters of assessment of efficiency of the marketing strategy of segmentation. The article justifies development of a mathematical model on the bases of 3D-presentations and three-dimensional system of management variables. The modern applied mathematical packages allow formation not only of one-dimensional and two-dimensional arrays and analyse links of variables, but also of three-dimensional, besides, the more links and parameters are taken into account, the more adequate and adaptive are results of modelling and, as a result, more informative and strategically valuable. The article shows modelling possibilities that allow taking into account strategies and reactions on formation of the marketing strategy under conditions of entering the fast food market segments.

  9. Complex groundwater flow systems as traveling agent models

    Directory of Open Access Journals (Sweden)

    Oliver López Corona

    2014-10-01

    Full Text Available Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically from an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow.
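
    A rough illustration of the 1/f-type diagnostic: estimate the power spectrum of a time series and fit its log-log slope. The record below is synthetic and merely stands in for pumping-test data.

        # Sketch of checking for 1/f^beta behaviour: estimate the power spectrum of a
        # synthetic head/drawdown record and fit the log-log slope. Real pumping-test
        # data would replace the synthetic signal.
        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(6)
        n = 4096
        white = rng.normal(size=n)
        signal = np.cumsum(white)                 # synthetic 1/f^2-like record

        freqs, power = periodogram(signal, fs=1.0)
        mask = freqs > 0
        slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), deg=1)
        print(f"estimated spectral exponent beta ~ {-slope:.2f}")   # ~2 here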

  10. Sentinel lymph node biopsy in patients with a needle core biopsy diagnosis of ductal carcinoma in situ: is it justified?

    LENUS (Irish Health Repository)

    Doyle, B

    2012-02-01

    BACKGROUND: The incidence of ductal carcinoma in situ (DCIS) has increased markedly with the introduction of population-based mammographic screening. DCIS is usually diagnosed non-operatively. Although sentinel lymph node biopsy (SNB) has become the standard of care for patients with invasive breast carcinoma, its use in patients with DCIS is controversial. AIM: To examine the justification for offering SNB at the time of primary surgery to patients with a needle core biopsy (NCB) diagnosis of DCIS. METHODS: A retrospective analysis was performed of 145 patients with an NCB diagnosis of DCIS who had SNB performed at the time of primary surgery. The study focused on rates of SNB positivity and underestimation of invasive carcinoma by NCB, and sought to identify factors that might predict the presence of invasive carcinoma in the excision specimen. RESULTS: 7/145 patients (4.8%) had a positive sentinel lymph node, four macrometastases and three micrometastases. 6/7 patients had invasive carcinoma in the final excision specimen. 55/145 patients (37.9%) with an NCB diagnosis of DCIS had invasive carcinoma in the excision specimen. The median invasive tumour size was 6 mm. A radiological mass and areas of invasion <1 mm, amounting to "at least microinvasion" on NCB were predictive of invasive carcinoma in the excision specimen. CONCLUSIONS: SNB positivity in pure DCIS is rare. In view of the high rate of underestimation of invasive carcinoma in patients with an NCB diagnosis of DCIS in this study, SNB appears justified in this group of patients.

  11. Boosted Multivariate Trees for Longitudinal Data

    Science.gov (United States)

    Pande, Amol; Li, Liang; Rajeswaran, Jeevanantham; Ehrlinger, John; Kogalur, Udaya B.; Blackstone, Eugene H.; Ishwaran, Hemant

    2017-01-01

    Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing P-splines with estimated smoothing parameter. In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data. PMID:29249866

  12. A dynamic model for costing disaster mitigation policies.

    Science.gov (United States)

    Altay, Nezih; Prasad, Sameer; Tata, Jasmine

    2013-07-01

    The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
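
    The core of the static argument is a non-linear trade-off: mitigation spending rises with the chosen mitigation level while expected disaster losses fall, so the total cost has an interior minimum that shifts as risk, cost of living, or learning change. A toy numerical illustration of locating such an optimum (the cost curves below are invented for illustration and are not taken from the paper):

```python
import numpy as np

levels = np.linspace(0.01, 1.0, 100)            # hypothetical mitigation effort, 0..1
prevention = 50.0 * levels ** 2                 # rising mitigation expenditure
expected_loss = 80.0 * np.exp(-3.0 * levels)    # falling expected disaster losses
total = prevention + expected_loss

best = levels[np.argmin(total)]
print(f"cost-minimising mitigation level ~ {best:.2f}, total cost ~ {total.min():.1f}")
# Shifting either curve (lower risk, or cheaper prevention through learning)
# moves this optimum, which is the dynamic effect the model is after.
```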

  13. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.
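
    The graded response model used here expresses, for an ordinal item, the probability of responding in category k or above as a logistic function of the latent trait, with an item discrimination a and ordered thresholds b_k; category probabilities are differences of adjacent cumulative curves. A minimal sketch of that building block (the parameter values are illustrative only, not those estimated from the dysarthria data):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities of a graded response model item.

    theta : latent trait value
    a     : item discrimination
    b     : ordered thresholds (length K-1 for K response categories)
    """
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(Y >= k), k = 1..K-1
    upper = np.concatenate(([1.0], cum))           # P(Y >= 0) = 1
    lower = np.concatenate((cum, [0.0]))           # P(Y >= K) = 0
    return upper - lower                           # P(Y = k)

# Hypothetical 4-category item: discrimination 1.5, thresholds -1, 0, 1.2
print(grm_category_probs(theta=0.3, a=1.5, b=[-1.0, 0.0, 1.2]))
```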

  14. Dynamics of a Computer Virus Propagation Model with Delays and Graded Infection Rate

    Directory of Open Access Journals (Sweden)

    Zizhen Zhang

    2017-01-01

    Full Text Available A four-compartment computer virus propagation model with two delays and graded infection rate is investigated in this paper. The critical values where a Hopf bifurcation occurs are obtained by analyzing the distribution of eigenvalues of the corresponding characteristic equation. In succession, direction and stability of the Hopf bifurcation when the two delays are not equal are determined by using normal form theory and center manifold theorem. Finally, some numerical simulations are also carried out to justify the obtained theoretical results.

  15. The Heuristic Model Based on LPR in the Context of Material Conversion

    Directory of Open Access Journals (Sweden)

    Wilk-Kołodziejczyk D.

    2017-09-01

    Full Text Available The high complexity of the physical and chemical processes occurring in liquid metal is the reason why it is so difficult, and sometimes even impossible, to build analytical models of these phenomena. In this situation, the use of heuristic models based on experimental data and the experience of technicians is fully justified since, at least in an approximate manner, they allow the mechanical properties of the metal manufactured under given process conditions to be predicted. The study presents a methodology applicable to the design of a heuristic model based on the formalism of the logic of plausible reasoning (LPR). The problem under consideration consists in finding a technological variant of the process that will give the desired product parameters while minimizing the cost of production. The conducted tests have shown the effectiveness of the proposed approach.

  16. Risco no Modelo de Internacionalização de Uppsala Risk in the Uppsala Internationalization Model

    Directory of Open Access Journals (Sweden)

    Debora Chiavegatti

    2011-09-01

    Full Text Available This essay revisits the evolution of the Uppsala behavioural model of internationalization since the 1970s and places in this context the graphic proposition of Lemos, Johanson and Vahlne (2010), in which risk plays a central role. The lack of knowledge about the market is the most critical restriction behind the difficulties involved in the internationalization process (Lemos, Johanson and Vahlne, 2010) and lies at the origin of the behavioural model that explains internationalization as sequential stages of knowledge acquisition. In the original version, the relevant notion is the liability of foreignness. The Uppsala model was later revised: this perspective on the internationalization process was revisited, in particular because of market complexity, with the proposal that such movement is not a sequence of steps and stages planned and deliberated through rational analysis. In the new version, the relevant idea is the liability of outsidership, which refers to existing networks. The new proposal of the Uppsala School treats the network of relationships as the central element of entry into foreign markets.

  17. Pengembangan Soal Penalaran Model TIMSS Matematika SMP

    Directory of Open Access Journals (Sweden)

    A. Rizta

    2013-06-01

    Full Text Available This study aimed to develop TIMSS-model reasoning problems for junior secondary school (SMP) mathematics. The subjects were the 27 students of class VIII.7 at SMP Negeri 1 Palembang. The method used was development research. The results show that 22.22% of the students obtained reasoning scores above 65%, while 77.78% scored below 65%. In more detail, achievement on the reasoning test was 11.11% in the generalize domain, 3.7% in the justify domain, 29.63% in the integrate domain, 44.45% in the analyze domain, and 51.85% on non-routine problems. Taking 65% as the minimum attainment threshold, the students' reasoning remains below the minimum; in other words, their reasoning ability is still low.

  18. Causal Inference and Model Selection in Complex Settings

    Science.gov (United States)

    Zhao, Shandong

    Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with point null hypothesis, why a usual likelihood ratio test does not apply for problems of this nature, and a doable fix to correctly
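
    For the binary-treatment case from which these generalizations start, inverse propensity weighting estimates the average treatment effect by weighting each unit by the inverse of its probability of receiving the treatment it actually got. A compact sketch on simulated data (the variable names and data-generating process are hypothetical, not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 2))                           # confounders
p_true = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p_true)                           # treatment assignment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)            # true effect = 2

# Fit a propensity model and form inverse-probability weights.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = t / e + (1 - t) / (1 - e)

ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(f"IPW estimate of the average treatment effect: {ate:.2f}")
```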

  19. Basics of modern mathematical statistics

    CERN Document Server

    Spokoiny, Vladimir

    2015-01-01

    This textbook provides a unified and self-contained presentation of the main approaches to and ideas of mathematical statistics. It collects the basic mathematical ideas and tools needed as a basis for more serious studies or even independent research in statistics. The majority of existing textbooks in mathematical statistics follow the classical asymptotic framework. Yet, as modern statistics has changed rapidly in recent years, new methods and approaches have appeared. The emphasis is on finite sample behavior, large parameter dimensions, and model misspecifications. The present book provides a fully self-contained introduction to the world of modern mathematical statistics, collecting the basic knowledge, concepts and findings needed for doing further research in the modern theoretical and applied statistics. This textbook is primarily intended for graduate and postdoc students and young researchers who are interested in modern statistical methods.

  20. Substitution between Cars within the Household

    DEFF Research Database (Denmark)

    de Borger, Bruno; Mulalic, Ismir; Rouwendal, Jan

    In this paper we study the demand for car kilometres in two-car households, focusing on the substitution between cars in response to fuel price changes. We use a large sample of detailed Danish data on two-car households to estimate—for each car owned by the household—own and cross-price effects...... of increases in fuel costs per kilometre. The empirical results show that failure to capture substitution between cars within the household can result in substantial misspecification biases. Ignoring substitution, we estimate fuel price elasticities of –0.81 and -0.65 for the primary and secondary cars...... efficient car, finding partial support for the underlying hypothesis. More importantly, the results of this extended model emphasize the importance of behavioural differences related to the position of the most fuel efficient car in the household, suggesting that households’ fuel efficiency choices...

  1. Car use within the household

    DEFF Research Database (Denmark)

    de Borger, Bruno; Mulalic, Ismir; Rouwendal, Jan

    2013-01-01

    In this paper we study the demand for car kilometres in two-car households, focusing on the substitution between cars in response to fuel price changes. We use a large sample of detailed Danish data on two-car households to estimate—for each car owned by the household—own and cross-price effects...... of increases in fuel costs per kilometre. The empirical results show that failure to capture substitution between cars within the household can result in substantial misspecification biases. Ignoring substitution, we estimate fuel price elasticities of –0.81 and -0.65 for the primary and secondary cars...... efficient car, finding partial support for the underlying hypothesis. More importantly, the results of this extended model emphasize the importance of behavioural differences related to the position of the most fuel efficient car in the household, suggesting that households’ fuel efficiency choices...

  2. A new calibrated bayesian internal goodness-of-fit method: sampled posterior p-values as simple and general p-values that allow double use of the data.

    Directory of Open Access Journals (Sweden)

    Frédéric Gosselin

    Full Text Available BACKGROUND: Recent approaches mixing frequentist principles with Bayesian inference propose internal goodness-of-fit (GOF) p-values that might be valuable for critical analysis of Bayesian statistical models. However, GOF p-values developed to date only have known probability distributions under restrictive conditions. As a result, no known GOF p-value has a known probability distribution for any discrepancy function. METHODOLOGY/PRINCIPAL FINDINGS: We show mathematically that a new GOF p-value, called the sampled posterior p-value (SPP), asymptotically has a uniform probability distribution whatever the discrepancy function. In a moderate finite sample context, simulations also showed that the SPP appears stable to relatively uninformative misspecifications of the prior distribution. CONCLUSIONS/SIGNIFICANCE: These reasons, together with its numerical simplicity, make the SPP a better canonical GOF p-value than existing GOF p-values.
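
    The record does not spell out the SPP algorithm, so the sketch below shows the closely related, standard posterior predictive p-value for a discrepancy function, as a schematic of the kind of internal GOF check being calibrated; it should not be read as the authors' exact SPP definition:

```python
import numpy as np

def posterior_predictive_pvalue(y_obs, posterior_draws, simulate, discrepancy, rng):
    """Share of posterior draws whose replicated data look at least as extreme
    as the observed data under the chosen discrepancy function."""
    exceed = 0
    for theta in posterior_draws:
        y_rep = simulate(theta, rng)
        exceed += discrepancy(y_rep, theta) >= discrepancy(y_obs, theta)
    return exceed / len(posterior_draws)

# Toy example: Normal(mu, 1) model, discrepancy = sample variance.
rng = np.random.default_rng(2)
y_obs = rng.normal(0.0, 1.0, size=50)
post_mu = rng.normal(y_obs.mean(), 1 / np.sqrt(50), size=1000)   # stand-in posterior
p = posterior_predictive_pvalue(
    y_obs, post_mu,
    simulate=lambda mu, r: r.normal(mu, 1.0, size=50),
    discrepancy=lambda y, mu: y.var(),
    rng=rng)
print(p)
```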

  3. Predicting the electricity demand of an oil industry region on the basis of a stochastic model

    Energy Technology Data Exchange (ETDEWEB)

    Ragimova, R A; Khaykin, I Ye

    1979-01-01

    A justified decision to accept a particular field development design can be made only on the basis of a scientific prediction of the basic technical and economic indicators. The basic factors affecting electricity demand are taken to be total oil production and the total volume of liquid lifted from the reservoir. The initial information is statistical data on electricity consumption and on oil and liquid production over 8-10 years. A direct relationship is assumed between the outcome variable and the explanatory factors. Assuming a normal distribution of random errors, confidence levels are obtained for determining the electricity demand of a facility with a specified degree of precision. Applying the proposed model in the practical work of the energy services makes it possible to quantify the influence of the basic parameters of a deposit's development on electricity consumption and to predict, with justification, the electricity demand for oil production.
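
    In essence the model regresses annual electricity consumption on total oil production and total liquid lifted, and attaches a confidence band from the assumed normal error law. A schematic sketch with made-up numbers (the field data behind the record are not available, so all values below are hypothetical):

```python
import numpy as np

# Hypothetical 10-year history: [oil production, total liquid lifted] -> electricity use
X = np.array([[5.0, 9.0], [5.2, 9.6], [5.1, 10.1], [5.3, 10.9], [5.2, 11.5],
              [5.4, 12.2], [5.3, 12.8], [5.5, 13.6], [5.4, 14.1], [5.6, 14.9]])
y = np.array([310, 322, 330, 341, 349, 360, 366, 378, 383, 395.0])

A = np.column_stack([np.ones(len(X)), X])        # intercept + two explanatory factors
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
sigma = resid.std(ddof=A.shape[1])

x_next = np.array([1.0, 5.7, 15.6])              # planned production for the next year
forecast = x_next @ coef
print(f"forecast: {forecast:.0f} +/- {1.96 * sigma:.0f} (approx. 95% band)")
```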

  4. Animal models of chronic obstructive pulmonary disease.

    Science.gov (United States)

    Pérez-Rial, Sandra; Girón-Martínez, Álvaro; Peces-Barba, Germán

    2015-03-01

    Animal models of disease have always been welcomed by the scientific community because they provide an approach to the investigation of certain aspects of the disease in question. Animal models of COPD cannot reproduce the heterogeneity of the disease and usually only manage to represent the disease in its milder stages. Moreover, airflow obstruction, the variable that determines patient diagnosis, is not always taken into account in the models. For this reason, models have focused on the development of emphysema, easily detectable by lung morphometry, and have disregarded other components of the disease, such as airway injury or associated vascular changes. Continuous, long-term exposure to cigarette smoke is considered the main risk factor for this disease, justifying the fact that the cigarette smoke exposure model is the most widely used. Some variations on this basic model, related to exposure time, the association of other inducers or inhibitors, exacerbations or the use of transgenic animals to facilitate the identification of pathogenic pathways, have been developed. Some variations or heterogeneity of this disease, then, can be reproduced and models can be designed for resolving researchers' questions on disease identification or treatment responses. Copyright © 2014 SEPAR. Published by Elsevier Espana. All rights reserved.

  5. Theoretical study on instability mechanism of jet-induced sloshing. Model development using Orr-Sommerfeld equation generalized for non-parallel flow; Funryu reiki sloshing gensho no hassei kiko ni kansuru rironteki kenkyu. Hiheiko nagare ni ippankashita Orr-Sommerfeld hoteishiki wo mochiita model ka

    Energy Technology Data Exchange (ETDEWEB)

    Eguchi, Y. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1998-07-25

    A theoretical model was developed to study the mechanism of free surface sloshing in a vessel induced by a steady vertical jet flow. In the model, jet deflection is calculated with eigenvalues of the generalized Orr-Sommerfeld equation, which is applicable to a slightly non-parallel jet. The instability criteria employed in the model are (1) a resonance condition between the sloshing and jet frequencies and (2) a π phase relation between the jet displacement at the inlet and the global jet deflection. Numerical results of the mathematical model have shown good agreement with experimental ones, which justifies the conclusion that the inherent instability of the free jet itself and edge-tone feedback are the main causes of the self-excited sloshing. 9 refs., 10 figs.

  6. Self-Esteem: Justifying Its Existence.

    Science.gov (United States)

    Street, Sue; Isaacs, Madelyn

    1998-01-01

    The role of self-esteem as a professional and personality construct has been obscured by its panacea role. Definitions of self-esteem and related terms are distinguished. Self-esteem is discussed as a developmental construct, a personality construct, and as a therapeutic goal. Therapeutic, educational, and counseling implications are discussed.…

  7. HOW TO JUSTIFY AUTOMATION PROJECTS

    OpenAIRE

    Velásquez C., José

    2014-01-01

    This article deals with the development of an automation project. It presents important aspects of the project's financial justification, showing the savings that can be achieved in different areas of an enterprise, such as security, quality, marketing and logistics.

  8. Justifying group-specific common morality.

    Science.gov (United States)

    Strong, Carson

    2008-01-01

    Some defenders of the view that there is a common morality have conceived such morality as being universal, in the sense of extending across all cultures and times. Those who deny the existence of such a common morality often argue that the universality claim is implausible. Defense of common morality must take account of the distinction between descriptive and normative claims that there is a common morality. This essay considers these claims separately and identifies the nature of the arguments for each claim. It argues that the claim that there is a universal common morality in the descriptive sense has not been successfully defended to date. It maintains that the claim that there is a common morality in the normative sense need not be understood as universalist. This paper advocates the concept of group specific common morality, including country-specific versions. It suggests that both the descriptive and the normative claims that there are country-specific common moralities are plausible, and that a country-specific normative common morality could provide the basis for a country's bioethics.

  9. The Ends Justify the Memes

    OpenAIRE

    Miller, Ian D.; Cupchik, Gerald C.

    2016-01-01

    This talk presents an update on my research into memes.  It begins with an introduction to memes that is suitable for any audience.  It concludes with a detailed description of human research and simulation results that converge with one another.  I also present a short online study on email forwarding chains.

  10. Conceptual model to determine maximum activity of radioactive waste in near-surface disposal facilities

    International Nuclear Information System (INIS)

    Iarmosh, I.; Olkhovyk, Yu.

    2016-01-01

    For development of a management strategy for radioactive waste to be placed in near-surface disposal facilities (NSDF), it is necessary to justify the long-term safety of such facilities. The use of mathematical modelling methods for long-term forecasts of radwaste radiation impact and for assessment of radiation risks from radionuclide migration can help to resolve this issue. The purpose of the research was to develop a conceptual model for determining the maximum activity of radwaste that can be safely disposed of in an NSDF and to test it in the case of the Lot 3 Vector NSDF (Chornobyl exclusion zone). This paper describes an approach to the development of such a model. A conceptual model of 90Sr migration from Lot 3 through the aeration zone and aquifer soils was developed. The results of the modelling are shown, and proposals for further steps to improve the model were developed.

  11. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

    Full Text Available The article introduces the theoretical foundations of the author's original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in economic events resultant from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering exploiting the differences inherently present in variant accounting. These differences result in disparate financial results of identical economic events. Given that, regardless of which variant is used in accounting, all settlements are eventually equal to one another, a new class of differences emerges - the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit "made" in a given period reduces the financial result of future periods. This effect is due to the "transfer" of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is being intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company's goodwill in Balcerowicz's model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of the many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.

  12. A mathematical model to determine incorporated quantities of radioactivity from the measured photometric values of tritium-autoradiographs in neuroanatomy

    International Nuclear Information System (INIS)

    Jennissen, J.J.

    1981-01-01

    The mathematical/empirical model developed in this paper helps to determine the incorporated radioactivity from the measured photometric values and the exposure time T. Possible errors of autoradiography due to the exposure time or the preparation are taken into consideration by the empirical model. It is shown that the error of approximately 400% appearing in the sole comparison of the measured photometric values can be corrected. The model is valid for neuroanatomy, as optic nerves, i.e. neuroanatomical material, were used to develop it. Its application to other regions of the central nervous system also seems justified due to the reduction of errors thus achieved. (orig.) [de]

  13. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  14. Modeling and Simulation of Bus Dispatching Policy for Timed Transfers on Signalized Networks

    Science.gov (United States)

    Cho, Hsun-Jung; Lin, Guey-Shii

    2007-12-01

    The major work of this study is to formulate the system cost functions and to integrate the bus dispatching policy with signal control. The integrated model mainly includes a flow dispersion model for links, a signal control model for nodes, and a dispatching control model for transfer terminals. All such models are inter-related for transfer operations in a one-center transit network. The integrated model that combines dispatching policies with flexible signal control modes can be applied to assess the effectiveness of transfer operations. It is found that, if bus arrival information is reliable, an early dispatching decision made at the mean bus arrival times is preferable. The costs for coordinated operations with slack times are relatively low at the optimal common headway when applying adaptive route control. Based on such findings, a threshold function of bus headway for justifying adaptive signal route control under various values of time for auto drivers is developed.

  15. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  16. Modelling contractor’s bidding decision

    Directory of Open Access Journals (Sweden)

    Biruk Sławomir

    2017-03-01

    Full Text Available The authors aim to provide a set of tools to facilitate the main stages of the competitive bidding process for construction contractors. These involve (1) deciding whether to bid, (2) calculating the total price, and (3) breaking down the total price into the items of the bill of quantities or the schedule of payments to optimise contractor cash flows. To define factors that affect the decision to bid, the authors rely upon literature on the subject and propose that multi-criteria methods be applied to calculate a single measure of contract attractiveness (utility value). An attractive contract implies that the contractor is likely to offer a lower price to increase its chances of winning the competition. The total bid price is thus to be interpolated between the lowest acceptable and the highest justifiable price based on the contract attractiveness. With the total bid price established, the next step is to split it between the items of the schedule of payments. A linear programming model is proposed for this purpose. The application of the models is illustrated with a numerical example.
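
    The total-price rule described here amounts to interpolating between the lowest acceptable and the highest justifiable price according to the contract's attractiveness (utility) score: the more attractive the contract, the closer the bid sits to the lower bound. A minimal sketch of that interpolation step (the multi-criteria weighting and the linear programme for the payment schedule are not reproduced; the numbers are hypothetical):

```python
def bid_price(lowest_acceptable: float, highest_justifiable: float, utility: float) -> float:
    """Interpolate the total bid price from a contract attractiveness score in [0, 1].

    utility = 1 (very attractive) -> bid at the lowest acceptable price;
    utility = 0 (unattractive)    -> bid at the highest justifiable price.
    """
    if not 0.0 <= utility <= 1.0:
        raise ValueError("utility must lie in [0, 1]")
    return highest_justifiable - utility * (highest_justifiable - lowest_acceptable)

# Hypothetical contract judged fairly attractive by the multi-criteria scoring:
print(bid_price(lowest_acceptable=1_000_000, highest_justifiable=1_200_000, utility=0.7))
```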

  17. Classical ethical positions and their relevance in justifying behavior: A model of prescriptive attribution.

    OpenAIRE

    Witte, E.H.

    2002-01-01

    This paper separates empirical research on ethics from classical research on morality and relates it to other central questions of social psychology and sociology, e.g., values, culture, justice, attribution. In addition, reference is made to some founding studies of ethical research and its historical development. Based on this line of tradition the development of prescriptive attribution research is introduced, which concentrates on the justification of actions by weighting the importance o...

  18. Modelling severe Staphylococcus aureus sepsis in conscious pigs: are implications for animal welfare justified?

    DEFF Research Database (Denmark)

    Olsen, Helle G; Kjelgaard-Hansen, Mads; Tveden-Nyborg, Pernille

    2016-01-01

    by the severity of induced disease, which in some cases necessitated humane euthanasia. A pilot study was therefore performed in order to establish the sufficient inoculum concentration and application protocol needed to produce signs of liver dysfunction within limits of our pre-defined humane endpoints. Four....... Prior to euthanasia, a galactose elimination capacity test was performed to assess liver function. Pigs were euthanised 48 h post inoculation for necropsy and histopathological evaluation. While infusion times of 6.66 min, and higher, did not induce liver dysfunction (n = 3), the infusion time of 3...

  19. The tipping point: A mathematical model for the profit-driven abandonment of restaurant tipping

    Science.gov (United States)

    Clifton, Sara M.; Herbers, Eileen; Chen, Jack; Abrams, Daniel M.

    2018-02-01

    The custom of voluntarily tipping for services rendered has gone in and out of fashion in America since its introduction in the 19th century. Restaurant owners that ban tipping in their establishments often claim that social justice drives their decisions, but we show that rational profit-maximization may also justify the decisions. Here, we propose a conceptual model of restaurant competition for staff and customers, and we show that there exists a critical conventional tip rate at which restaurant owners should eliminate tipping to maximize profits. Because the conventional tip rate has been increasing steadily for the last several decades, our model suggests that restaurant owners may abandon tipping en masse when that critical tip rate is reached.

  20. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
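
    A sequential conditional mean model of the kind described can be fitted with standard GEE software by regressing the outcome at each visit on the current exposure while conditioning on prior exposure, the prior outcome, and time-varying covariates, optionally adding a propensity score as an extra regressor. A schematic sketch using statsmodels on simulated stand-in data (all variable names and the data-generating process are hypothetical, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject-visit.
rng = np.random.default_rng(3)
n, t = 300, 4
ids = np.repeat(np.arange(n), t)
l = rng.normal(size=n * t)                         # time-varying covariate
a_prev = rng.binomial(1, 0.4, size=n * t)          # prior exposure (simplified)
y_prev = rng.normal(size=n * t)                    # prior outcome (simplified)
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a_prev + 0.3 * y_prev + 0.2 * l))))
y = 1.0 * a + 0.5 * l + 0.3 * y_prev + rng.normal(size=n * t)
df = pd.DataFrame(dict(id=ids, y=y, a=a, a_prev=a_prev, y_prev=y_prev, l=l))

# Propensity score for the current exposure given the observed history.
df["ps"] = smf.logit("a ~ a_prev + y_prev + l", data=df).fit(disp=0).predict(df)

# Sequential conditional mean model fitted by GEE (independence working correlation);
# the coefficient on `a` estimates the total effect of the current exposure.
scmm = smf.gee("y ~ a + a_prev + y_prev + l + ps",
               groups="id", data=df,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Independence()).fit()
print(scmm.params["a"])
```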

  1. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
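
    Underneath all thirteen model versions is the same linear mixing idea: the tracer signature of the suspended sediment is a proportion-weighted combination of the source signatures, with proportions that are non-negative and sum to one. A frequentist-style sketch of that core (toy geochemistry values, not the Blackwater data); the Bayesian variants place priors and error models around the same structure:

```python
import numpy as np
from scipy.optimize import minimize

# Rows: sources (arable topsoil, road verge, subsurface); columns: tracer properties.
sources = np.array([[12.0, 0.8, 310.0],
                    [ 9.0, 1.9, 150.0],
                    [ 4.0, 0.3, 520.0]])           # hypothetical tracer means
mixture = np.array([6.5, 0.7, 430.0])              # hypothetical SPM sample

scale = sources.std(axis=0)                        # put tracers on a common scale

def misfit(p):
    return np.sum(((p @ sources - mixture) / scale) ** 2)

res = minimize(misfit, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print(dict(zip(["topsoil", "road_verge", "subsurface"], res.x.round(3))))
```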

  2. Lotka-Volterra competition models for sessile organisms.

    Science.gov (United States)

    Spencer, Matthew; Tanner, Jason E

    2008-04-01

    Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
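
    The continuous-time model the authors arrive at is a Lotka-Volterra competition system, in which each state's cover grows logistically and is suppressed by the other states' abundance. A generic two-state sketch of such dynamics (illustrative parameter values, not those fitted to the coral reef data):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra_competition(t, x, r, alpha):
    """dx_i/dt = r_i * x_i * (1 - sum_j alpha_ij * x_j)."""
    return r * x * (1.0 - alpha @ x)

r = np.array([0.8, 0.5])                      # intrinsic growth rates
alpha = np.array([[1.0, 0.7],                 # intra- and inter-specific competition
                  [0.9, 1.0]])
sol = solve_ivp(lotka_volterra_competition, (0.0, 60.0), y0=[0.05, 0.05],
                args=(r, alpha))
print(sol.y[:, -1])                           # approximate coexistence equilibrium
```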

  3. Time-dependent shell-model theory of dissipative heavy-ion collisions

    International Nuclear Information System (INIS)

    Ayik, S.; Noerenberg, W.

    1982-01-01

    A transport theory is formulated within a time-dependent shell-model approach. Time averaging of the equations for macroscopic quantities leads to irreversibility and justifies the weak-coupling limit and the Markov approximation for the (energy-conserving) one- and two-body collision terms. Two coupled equations for the occupation probabilities of dynamical single-particle states and for the collective variable are derived, and explicit formulas for transition rates, dynamical forces, mass parameters and friction coefficients are given. The applicability of the formulation in terms of characteristic quantities of nuclear systems is considered in detail and some peculiarities due to memory effects in the initial equilibration process of heavy-ion collisions are discussed. (orig.)

  4. Intrafirm planning and mathematical modeling of owner's equity in industrial enterprises

    Science.gov (United States)

    Ponomareva, S. V.; Zheleznova, I. V.

    2018-05-01

    The article aims to review different approaches to the intrafirm planning of owner's equity in industrial enterprises. Since charter capital, additional capital and reserve capital do not change in the course of an enterprise's activity, the main interest lies in share repurchases from shareholders and in retained earnings within the owner's equity of the enterprise. To study the effect of share repurchases on the activities of the enterprise, we use mathematical methods such as event studies and econometric modelling. The article describes a step-by-step algorithm for carrying out an event study and justifies the choice of a logit model for the econometric analysis. It presents the basic results of a regression analysis of the effect of share repurchases on the key financial indicators of industrial enterprises.

  5. Formation of an Integrated Stock Price Forecast Model in Lithuania

    Directory of Open Access Journals (Sweden)

    Audrius Dzikevičius

    2016-12-01

    Full Text Available Technical and fundamental analyses are widely used to forecast stock prices, partly due to limited knowledge of other modern models and methods such as the Residual Income Model, ANN-APGARCH, Support Vector Machines, Probabilistic Neural Networks and Genetic Fuzzy Systems. Although stock price forecast models integrating both technical and fundamental analyses are already widely used, their integration is not justified comprehensively enough. This paper discusses theoretical one-factor and multi-factor stock price forecast models already applied by investors at a global level and determines the possibility of creating and practically applying a stock price forecast model which integrates fundamental and technical analysis with reference to the Lithuanian stock market. The research aims to determine the relationship between the stock prices of the 14 Lithuanian companies listed in the Main List by Nasdaq OMX Baltic and various fundamental variables. Based on the results of correlation and regression analysis and the application of the Chi-squared test and the ANOVA method, a general stock price forecast model is generated. The paper discusses practical implications of how the developed model can be used to forecast stock prices by individual investors and suggests additional check measures.

  6. Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.

    Science.gov (United States)

    Wu, Hongqian; Zhang, Ying; Long, Jeffrey D

    2017-03-15

    Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings for which the binomial data are often overdispersed. The standard generalized linear mixed effects model may result in severe underestimation of standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm of the Fisher scoring and the method of moments is implemented for computing the method. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
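
    The beta-binomial handles overdispersion by letting the success probability vary across clusters, which inflates the binomial variance by a factor 1 + (n - 1)ρ. A small sketch of the beta-binomial log probability mass function and that variance inflation (generic formulas, not the paper's GEE fitting code; the parameter values are illustrative):

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_logpmf(k, n, alpha, beta):
    """log P(K = k) for a beta-binomial(n, alpha, beta) count."""
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return log_choose + betaln(k + alpha, n - k + beta) - betaln(alpha, beta)

n, alpha, beta = 20, 2.0, 6.0
pi = alpha / (alpha + beta)                      # mean success probability
rho = 1.0 / (alpha + beta + 1.0)                 # intra-cluster correlation
var = n * pi * (1 - pi) * (1 + (n - 1) * rho)    # overdispersed binomial variance
print(np.exp(betabinom_logpmf(np.arange(n + 1), n, alpha, beta)).sum(), var)
```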

  7. Education, Occupation and Career Expectations: Determinants of the Gender Pay Gap for UK Graduates

    OpenAIRE

    Arnaud Chevalier

    2006-01-01

    Despite anti-discrimination policies, women are paid 20% less than men in the UK. A large proportion of this wage gap is usually left unexplained. In this paper, I investigate whether the unexplained component is due to misspecification. Using a sample of recent UK graduates, I examine the role of choice variables (subject of study and occupation) as well as career expectations and aspirations. The evidence indicates that women are more altruistic and less career-oriented than men. Career bre...

  8. A study of the Gaussian overlap approach in the two-center shell model

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1976-01-01

    The Gaussian overlap approach (GOA) to the generator coordinate method (GCM) is carried through up to fourth order in the derivatives. By diagonalizing the norm overlap, a collective Schroedinger equation is obtained. The potential therein contains the usual potential energy surface (PES) plus correction terms, which subtract the zero-point energies (ZPE) in the PES. The formalism is applied to BCS states obtained from a two-center shell model (TCSM). To understand the crucial role of the pairing contributions in the GOA, a schematic picture, the multi-level model, is constructed. An explicit numerical study of the convergence of the GOA is given for the TCSM, with the result that the GOA seems to be justified for medium and heavy nuclei but critical for light nuclei. (Auth.)

  9. APPLICATION OF IMPRECISE MODELS IN ANALYSIS OF RISK MANAGEMENT OF SOFTWARE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-11-01

    Full Text Available An analysis of the functional completeness of existing detection systems was conducted. It made it possible to identify information systems with similar feature sets, to assess their degree of similarity and how closely their means match a "standard" model of a risk management system that follows the ICAO recommended practices and standards on aviation safety, and to justify the advisability of creating a decision-making support system that uses imprecise models and imprecise logic for risk analysis in aviation activities. Imprecise models have a number of advantages: they can take into account experts' intuition and experience; they allow more adequate modelling of flight safety management processes and yield accurate decisions that are consistent with the initial data; they support the rapid development of a safety management system whose functionality can subsequently be extended; and their hardware and software implementation in control and decision-making systems is less sophisticated than that of classical algorithms.

  10. Tacit Beginnings Towards a Model of Scientific Thinking

    Science.gov (United States)

    Glass, Rory J.

    2013-10-01

    The purpose of this paper is to provide an examination of the role tacit knowledge plays in understanding, and to provide a model to make such knowledge identifiable. To do this I first consider the needs of society, the ubiquity of information in our world and the future demands of the science classroom. I propose the use of more implicit or tacit understandings as foundational elements for the development of student knowledge. To justify this proposition I consider a wide range of philosophical and psychological perspectives on knowledge. I then develop a Model of Scientific Knowledge, based in large part on a similar model created by Paul Ernest (Social constructivism as a philosophy of mathematics, SUNY Press, Albany, NY, 1998a; Situated cognition and the learning of mathematics, University of Oxford Department of Educational Studies, Oxford, 1998b). Finally, I consider the work that has been done by those in fields beyond education and the ways in which tacit knowledge can be used as a starting point for knowledge building.

  11. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
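
    For a discrete response, the general minimum density power divergence estimator with tuning parameter α minimizes, up to terms not involving the parameters, the average over observations of Σ_j π_j(x_i)^{1+α} − (1 + 1/α) π_{y_i}(x_i)^α, and letting α → 0 recovers the maximum likelihood estimator. The rough sketch below applies that general objective to a three-category logistic model; it is written from the generic density power divergence definition and should be treated as an assumption rather than the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import minimize

def softmax_probs(beta, X, n_cat):
    """Category probabilities of a multinomial (polytomous) logistic model."""
    B = np.vstack([beta.reshape(n_cat - 1, X.shape[1]), np.zeros(X.shape[1])])
    eta = X @ B.T
    eta -= eta.max(axis=1, keepdims=True)
    p = np.exp(eta)
    return p / p.sum(axis=1, keepdims=True)

def dpd_objective(beta, X, y, n_cat, alpha):
    """Empirical density power divergence (alpha > 0); alpha -> 0 approaches the MLE."""
    p = softmax_probs(beta, X, n_cat)
    fitted = p[np.arange(len(y)), y]
    return np.mean(np.sum(p ** (1.0 + alpha), axis=1)
                   - (1.0 + 1.0 / alpha) * fitted ** alpha)

# Toy data: 3 categories, intercept + one covariate, a few gross outliers in y.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(400), rng.normal(size=400)])
true = np.array([[0.5, 1.5], [0.0, -1.0]])
y = np.array([rng.choice(3, p=pi) for pi in softmax_probs(true.ravel(), X, 3)])
y[:10] = 2                                                   # contamination

fit = minimize(dpd_objective, x0=np.zeros(4), args=(X, y, 3, 0.5), method="BFGS")
print(fit.x.reshape(2, 2))
```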

  12. Use of Just in Time Maintenance of Reinforced Concrete Bridge Structures based on Real Historical Data Deterioration Models

    Directory of Open Access Journals (Sweden)

    Abu-Tair A.

    2016-01-01

    Full Text Available Concrete is the backbone of any developed economy, yet it can suffer from a large number of deleterious effects of physical, chemical and biological origin. Organizations that own large portfolios of bridge structures face serious questions when asking for maintenance budgets: they need to justify the need for the work and its urgency, and to predict or show the consequences of delaying the rehabilitation of a particular structure. There is therefore a need for a probabilistic model that can estimate the range of service lives of bridge populations and the likelihood of the level of deterioration reached in each incremental time interval. Such a model was developed on the basis of statistical data from actual inspection records of a large reinforced concrete bridge portfolio. The approach combines deterministic and stochastic methods to predict the service life of a bridge; using these service lives together with the just-in-time (JIT) principle of management enables maintenance managers to justify the need for action and the required budgets, and to intervene at the optimum time in the life of the structure and of the deterioration. The paper reports on the model, which is based on a large database of deterioration records of concrete bridges covering a period of over 60 years and including data from over 400 bridge structures. The paper also illustrates how the service life model was developed and how these service lives, combined with JIT, can be used to allocate resources effectively and keep a major infrastructure asset operating with little disruption to the transport system and its users.

  13. Study and discretization of kinetic models and fluid models at low Mach number

    International Nuclear Information System (INIS)

    Dellacherie, Stephane

    2011-01-01

    This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first one models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is in our case also used to describe inertial confinement fusion. The second one (known as the Wang-Chang and Uhlenbeck equations) models the transitions between quantized electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied, and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm led us to study the relaxation concept for mixtures of gases and of immiscible fluids, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also led us to analyse, on Cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified.

  14. Integration of field data into operational snowmelt-runoff models

    International Nuclear Information System (INIS)

    Brandt, M.; Bergström, S.

    1994-01-01

    Conceptual runoff models have become standard tools for operational hydrological forecasting in Scandinavia. These models are normally based on observations from the national climatological networks, but in mountainous areas the stations are few and sometimes not representative. Due to the great economic importance of good hydrological forecasts for the hydro-power industry, attempts have been made to improve the model simulations by support from field observations of the snowpack. The snowpack has been mapped by several methods: airborne gamma-spectrometry, airborne georadar, satellites and conventional snow courses. The studies cover more than ten years of work in Sweden. The conclusion is that field observations of the snow cover have a potential for improvement of the forecasts of inflow to the reservoirs in the mountainous part of the country, where the climatological data coverage is poor. This is pronounced during years with unusual snow distribution. The potential for model improvement is smaller in the climatologically more homogeneous forested lowlands, where the climatological network is denser. The costs of introducing airborne observations into the modelling procedure are high and can only be justified in areas of great hydropower potential. (author)

  15. Substitution between cars within the household

    DEFF Research Database (Denmark)

    De Borger, Bruno; Mulalic, Ismir; Rouwendal, Jan

    2016-01-01

    In this paper we study the demand for car kilometres in two-car households, focusing on the substitution between cars of different fuel efficiency in response to fuel price changes. We use a large sample of detailed Danish data on two-car households to estimate – for each car owned by the household...... – own and cross-price effects of increases in fuel costs per kilometre. The empirical results show that failure to capture substitution between cars within the household can result in substantial misspecification biases. Ignoring substitution, the basic model yielded fuel price elasticities of 0.......98 and 1.41 for the primary and secondary cars, respectively. Accounting for substitution effects, these figures reduce to, respectively, 0.32 and 0.45. Consistent with substitution behaviour, we find that the fuel price elasticity of fuel demand exceeds the elasticity of kilometre demands with respect...

  16. A Novel Computer Virus Propagation Model under Security Classification

    Directory of Open Access Journals (Sweden)

    Qingyi Zhu

    2017-01-01

    Full Text Available In reality, some computers have specific security classification. For the sake of safety and cost, the security level of computers will be upgraded with increasing of threats in networks. Here we assume that there exists a threshold value which determines when countermeasures should be taken to level up the security of a fraction of computers with low security level. And in some specific realistic environments the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model considering the impact brought by security classification in full interconnection network. By using the theory of dynamic stability, the existence of equilibria and stability conditions is analysed and proved. And the above optimal threshold value is given analytically. Then, some numerical experiments are made to justify the model. Besides, some discussions and antivirus measures are given.

  17. Creating Shared Mental Models: The Support of Visual Language

    Science.gov (United States)

    Landman, Renske B.; van den Broek, Egon L.; Gieskes, José F. B.

    Cooperative design involves multiple stakeholders who often hold different ideas of the problem, the ways to solve it, and its solutions (i.e., mental models; MM). These differences can result in miscommunication, misunderstanding, slower decision-making processes, and less chance of cooperative decisions. In order to facilitate the creation of a shared mental model (sMM), visual languages (VL) are often used. However, little scientific foundation is behind this choice. To determine whether or not this gut feeling is justified, a study was conducted in which various stakeholders had to cooperatively redesign a process chain, with and without VL. To determine whether or not a sMM was created, scores on agreement in individual MM, communication, and cooperation were analyzed. The results confirmed the assumption that VL can indeed play an important role in the creation of sMM and, hence, can aid the processes of cooperative design and engineering.

  18. Communication: modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions.

    Science.gov (United States)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  19. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry

  20. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Bardhan, Jaydeep P. [Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts 02115 (United States); Knepley, Matthew G. [Computation Institute, The University of Chicago, Chicago, Illinois 60637 (United States)

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  1. Innovation Production Models

    Directory of Open Access Journals (Sweden)

    Tamam N. Guseinova

    2016-01-01

    Full Text Available The article is dedicated to the study of the models of production of innovations at enterprise and state levels. The shift towards a new technology wave induces a change in systems of division of labour as well as the establishment of new forms of cooperation that are reflected both in the theory and practice of innovation policy and management. Within the scope of the research question we have studied different generations of the innovation process, starting with simple linear models - "technology push" and "market pull" - and ending with a complex integrated model of open innovations. There are two organizational models of innovation production at the enterprise level: start-ups in the early stages of their development and ambidextrous organizations. The former are prone to linear models of the innovation process, while the latter create innovation within more sophisticated inclusive processes. Companies that effectuate reciprocal ambidexterity stand out from all the rest, since together with start-ups, research and development centres, elements of innovation infrastructure and other economic agents operating in the same value chain they constitute the core of the most advanced forms of national innovation systems, namely Triple Helix and Quadruple Helix systems. National innovation systems - models of innovation production at the state level - evolve into systems with a more profound division of labour that enable "line production" of innovations. These tendencies are closely related to the advent and development of the concept of serial entrepreneurship that transforms entrepreneurship into a new type of profession. International experience proves this concept to be efficient in various parts of the world. Nevertheless, the use of the above-mentioned models and concepts in a national innovation system should be justified by the socioeconomic conditions of economic regions, since these determine the efficiency of implementation of certain innovation processes and

  2. Fractured rock modeling in the National Waste Terminal Storage Program: a review of requirements and status

    International Nuclear Information System (INIS)

    St John, C.; Krug, A.; Key, S.; Monsees, J.

    1983-05-01

    Generalized computer codes capable of forming the basis for numerical models of fractured rock masses are being used within the NWTS program. Little additional development of these codes is considered justifiable, except in the area of representation of discrete fractures. On the other hand, model preparation requires definition of medium-specific constitutive descriptions and site characteristics and is therefore legitimately conducted by each of the media-oriented projects within the National Waste Terminal Storage program. However, it is essential that a uniform approach to the role of numerical modeling be adopted, including agreement upon the contribution of modeling to the design and licensing process and the need for, and means of, model qualification for particular purposes. This report discusses the role of numerical modeling, reviews the capabilities of several computer codes that are being used to support design or performance assessment, and proposes a framework for future numerical modeling activities within the NWTS program

  3. The Intercultural Danube - a European Model

    Directory of Open Access Journals (Sweden)

    Gheorghe Lateș

    2014-08-01

    Full Text Available The construction of the EU began by following the logic of economics, which in time has created dysfunctions that seem to deepen and generate a quasi-general skepticism. This paper aims to analyze the construction and reconstruction of the union on other conceptual premises, placing culture at the forefront of the new strategy. A multicultural Europe, based on the primordial ethnicity of the state, is no longer current; cultural diversity does not lead to unity, but rather is a factor of dissolution. The Danubian model reunites races, languages and religions so diverse that their functional diachrony justifies the idea of a reconstruction based on what once was, one that did not generate tensions or conflicts. In the Danube area, ethnic identity did not become ethnicism, which makes it, in a synchronic approach, a model for rethinking the union not through hierarchies and barriers, but through the opportunity of coexistence of the peoples that connect the history and the present of the horizontal axis, the River, of a united Europe.

  4. [Requirements imposed on model objects in microevolutionary investigations].

    Science.gov (United States)

    Mina, M V

    2015-01-01

    Extrapolation of the results of investigations of a model object is justified only within the limits of a set of objects that have essential properties in common with the model object. Which properties are essential depends on the aim of the study. Similarity of objects that emerged in the process of their independent evolution does not prove similarity of the ways and mechanisms of their evolution. If the objects differ in their essential properties, then extrapolating the results of investigation of one object to another is risky, because it may lead to wrong decisions and, moreover, to a loss of interest in alternative hypotheses. The positions formulated above are considered with reference to species flocks of fishes, the large African Barbus in particular.

  5. A multi-criteria model for maintenance job scheduling

    Directory of Open Access Journals (Sweden)

    Sunday A. Oke

    2007-12-01

    Full Text Available This paper presents a multi-criteria maintenance job scheduling model, which is formulated using a weighted multi-criteria integer linear programming maintenance scheduling framework. Three criteria that are directly related to the primary objectives of a typical production setting were used, namely minimization of equipment idle time, manpower idle time and job lateness, with unit parity (equal weights). The mathematical model, constrained by available equipment, manpower and job availability times within the planning horizon, was tested on a 10-job, 8-hour time-horizon problem with declared equipment and manpower availability set against the requirements. The results, analysis and illustrations justify the multi-criteria consideration. Maintenance managers are thus equipped with a tool for adequate decision making that guards against errors in the accumulated data which may lead to wrong decisions. The idea presented is new since it provides an approach that has not been documented previously in the literature.
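
    A weighted-sum integer linear program of this kind can be assembled with an off-the-shelf solver. The sketch below, using the PuLP library, schedules a toy 4-job instance over an 8-hour horizon with unit-parity weights on job lateness and equipment idle time (expressed through the makespan); the instance data and the exact constraint set are assumptions for illustration, not the paper's 10-job formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Minimal weighted-sum sketch of a multi-criteria maintenance-scheduling ILP
# (illustrative only). x[j, t] = 1 if job j starts at hour t; the criteria are
# job lateness and equipment idle time (makespan minus busy hours).
jobs = range(4)                      # toy instance; the paper solves a 10-job case
H = 8                                # 8-hour planning horizon
duration = [2, 1, 3, 2]              # assumed job durations (hours)
due = [3, 5, 6, 8]                   # assumed due hours
w_late, w_idle = 1.0, 1.0            # unit-parity weights

starts = {(j, t) for j in jobs for t in range(H) if t + duration[j] <= H}
x = {jt: LpVariable(f"x_{jt[0]}_{jt[1]}", cat=LpBinary) for jt in starts}
late = [LpVariable(f"late_{j}", lowBound=0) for j in jobs]
cmax = LpVariable("cmax", lowBound=0)

prob = LpProblem("maintenance_schedule", LpMinimize)
busy = sum(duration)
prob += w_late * lpSum(late) + w_idle * (cmax - busy)          # weighted objective

for j in jobs:
    finish = lpSum((t + duration[j]) * x[j, t] for t in range(H) if (j, t) in starts)
    prob += lpSum(x[j, t] for t in range(H) if (j, t) in starts) == 1   # schedule each job once
    prob += late[j] >= finish - due[j]                                  # lateness of job j
    prob += cmax >= finish                                              # makespan bound

for t in range(H):                                                      # single crew/equipment unit
    prob += lpSum(x[j, s] for (j, s) in starts if s <= t < s + duration[j]) <= 1

prob.solve()
print("schedule:", {j: next(t for t in range(H) if (j, t) in starts and value(x[j, t]) > 0.5)
                    for j in jobs})
```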

  6. Modeling Cerebral Blood Flow Control During Posture Change from Sitting to Standing

    DEFF Research Database (Denmark)

    Olufsen, Mette; Tran, Hien; Ottesen, Johnny T.

    2004-01-01

    Hypertension, decreased cerebral blood flow, and diminished cerebral blood flow regulation are among the first signs indicating the presence of cerebral vascular disease. In this paper, we will present a mathematical model that can predict blood flow and pressure during posture change from sitting to standing ... , the heart, and venous valves. We use physiologically based control mechanisms to describe the regulation of cerebral blood velocity and arterial pressure in response to orthostatic hypotension resulting from postural change. Beyond active control mechanisms we also have to include certain passive non-linearities in some of the compliance-pressure and resistance-pressure relationships. Furthermore, an accurate and physiologically based submodel, describing the dynamics of how gravity affects the blood distribution during posture changes, is included. To justify the fidelity of our mathematical model and control...

  7. Technical Data to Justify Full Burnup Credit in Criticality Safety Licensing Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Enercon Services, Inc.

    2011-03-14

    Enercon Services, Inc. (ENERCON) was requested under Task Order No.2 to identify scientific and technical data needed to benchmark and justify Full Burnup Credit, which adds 16 fission products and 4 minor actinides to Actinide-Only burnup credit. The historical perspective for Full Burnup Credit is discussed, and interviews of organizations participating in burnup credit activities are summarized as a basis for identifying additional data needs and making recommendations. Input from burnup credit participants representing two segments of the commercial nuclear industry is provided. First, the Electric Power Research Institute (EPRI) has been very active in the development of Full Burnup Credit, representing the interests of nuclear utilities in achieving capacity gains for storage and transport casks. EPRI and its utility customers are interested in a swift resolution of the validation issues that are delaying the implementation of Full Burnup Credit [EPRI 2010b]. Second, used nuclear fuel storage and transportation Cask Vendors favor improving burnup credit beyond Actinide-Only burnup credit, although their discussion of specific burnup credit achievements and data needs was limited, citing business-sensitive and technical proprietary concerns. While Cask Vendor proprietary items are not specifically identified in this report, the needs of all nuclear industry participants are reflected in the conclusions and recommendations of this report. In addition, Oak Ridge National Laboratory (ORNL) and Sandia National Laboratory (SNL) were interviewed for their input into additional data needs to achieve Full Burnup Credit. ORNL was very open to discussions of Full Burnup Credit, with several telecons and a visit by ENERCON to ORNL. For many years, ORNL has provided extensive support to the NRC regarding burnup credit in all of its forms. Discussions with ORNL focused on potential resolutions to the validation issues for the use of fission products. SNL was helpful in

  8. Technical Data to Justify Full Burnup Credit in Criticality Safety Licensing Analysis

    International Nuclear Information System (INIS)

    2011-01-01

    Enercon Services, Inc. (ENERCON) was requested under Task Order No.2 to identify scientific and technical data needed to benchmark and justify Full Burnup Credit, which adds 16 fission products and 4 minor actinides to Actinide-Only burnup credit. The historical perspective for Full Burnup Credit is discussed, and interviews of organizations participating in burnup credit activities are summarized as a basis for identifying additional data needs and making recommendations. Input from burnup credit participants representing two segments of the commercial nuclear industry is provided. First, the Electric Power Research Institute (EPRI) has been very active in the development of Full Burnup Credit, representing the interests of nuclear utilities in achieving capacity gains for storage and transport casks. EPRI and its utility customers are interested in a swift resolution of the validation issues that are delaying the implementation of Full Burnup Credit [EPRI 2010b]. Second, used nuclear fuel storage and transportation Cask Vendors favor improving burnup credit beyond Actinide-Only burnup credit, although their discussion of specific burnup credit achievements and data needs was limited, citing business-sensitive and technical proprietary concerns. While Cask Vendor proprietary items are not specifically identified in this report, the needs of all nuclear industry participants are reflected in the conclusions and recommendations of this report. In addition, Oak Ridge National Laboratory (ORNL) and Sandia National Laboratory (SNL) were interviewed for their input into additional data needs to achieve Full Burnup Credit. ORNL was very open to discussions of Full Burnup Credit, with several telecons and a visit by ENERCON to ORNL. For many years, ORNL has provided extensive support to the NRC regarding burnup credit in all of its forms. Discussions with ORNL focused on potential resolutions to the validation issues for the use of fission products. SNL was helpful in

  9. Transit and lifespan in neutrophil production: implications for drug intervention.

    Science.gov (United States)

    Câmara De Souza, Daniel; Craig, Morgan; Cassidy, Tyler; Li, Jun; Nekka, Fahima; Bélair, Jacques; Humphries, Antony R

    2018-02-01

    A comparison of the transit compartment ordinary differential equation modelling approach to distributed and discrete delay differential equation models is studied by focusing on Quartino's extension to the Friberg transit compartment model of myelosuppression, widely relied upon in the pharmaceutical sciences to predict the neutrophil response after chemotherapy, and on a QSP delay differential equation model of granulopoiesis. An extension to the Quartino model is provided by considering a general number of transit compartments and introducing an extra parameter that allows for the decoupling of the maturation time from the production rate of cells. An overview of the well established linear chain technique, used to reformulate transit compartment models with constant transit rates as distributed delay differential equations (DDEs), is then given. A state-dependent time rescaling of the Quartino model is performed to apply the linear chain technique and rewrite the Quartino model as a distributed DDE, yielding a discrete DDE model in a certain parameter limit. Next, stability and bifurcation analyses are undertaken in an effort to situate such studies in a mathematical pharmacology context. We show that both the original Friberg and the Quartino extension models incorrectly define the mean maturation time, essentially treating the proliferative pool as an additional maturation compartment. This misspecification can have far reaching consequences on the development of future models of myelosuppression in PK/PD.
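
    The linear chain technique referred to above can be illustrated with a generic transit chain: n compartments with constant transfer rate k = n/tau are equivalent to an Erlang-distributed maturation delay with mean tau, and the delay sharpens towards a discrete delay as n grows. The sketch below is a generic illustration of that equivalence, not the Friberg or Quartino myelosuppression model itself; the pulse input and parameter values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear chain sketch: n transit compartments with rate k = n/tau give an
# Erlang(n, k) maturation delay with mean tau; the outflow of the last
# compartment peaks closer and closer to t = tau as n increases.
def transit_chain(n, tau, t_end=40.0):
    k = n / tau
    def rhs(t, a):
        da = np.empty_like(a)
        inflow = 1.0 if t < 1.0 else 0.0       # assumed unit production pulse
        da[0] = inflow - k * a[0]
        da[1:] = k * a[:-1] - k * a[1:]
        return da
    return solve_ivp(rhs, (0.0, t_end), np.zeros(n), max_step=0.05)

tau = 10.0
for n in (3, 10, 50):
    sol = transit_chain(n, tau)
    outflow = (n / tau) * sol.y[-1]            # rate of cells leaving the last compartment
    peak_time = sol.t[np.argmax(outflow)]
    print(f"n = {n:3d}: outflow peaks near t = {peak_time:.1f} (mean maturation time = {tau})")
```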

  10. Predicting the heat of vaporization of iron at high temperatures using time-resolved laser-induced incandescence and Bayesian model selection

    Science.gov (United States)

    Sipkens, Timothy A.; Hadwin, Paul J.; Grauer, Samuel J.; Daun, Kyle J.

    2018-03-01

    Competing theories have been proposed to account for how the latent heat of vaporization of liquid iron varies with temperature, but experimental confirmation remains elusive, particularly at high temperatures. We propose time-resolved laser-induced incandescence measurements on iron nanoparticles combined with Bayesian model plausibility, as a novel method for evaluating these relationships. Our approach scores the explanatory power of candidate models, accounting for parameter uncertainty, model complexity, measurement noise, and goodness-of-fit. The approach is first validated with simulated data and then applied to experimental data for iron nanoparticles in argon. Our results justify the use of Román's equation to account for the temperature dependence of the latent heat of vaporization of liquid iron.
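
    Bayesian model plausibility of this kind comes down to comparing the marginal likelihoods (evidences) of the candidate models. The sketch below illustrates the idea on synthetic data, comparing a constant latent-heat relation with a linearly temperature-dependent one by grid integration of the likelihood in log space; the data, priors, noise level and functional forms are assumptions for illustration and are unrelated to the paper's actual models and measurements.

```python
import numpy as np
from scipy.special import logsumexp

# Generic sketch of Bayesian model comparison via marginal likelihoods on a 1-D
# parameter grid (log-space to avoid underflow). Two toy relations for the latent
# heat of vaporization h(T) are compared: M1 constant, M2 linear in temperature.
rng = np.random.default_rng(0)
T = np.linspace(2500.0, 3300.0, 20)                   # nanoparticle temperatures, K (assumed)
y = 6.3e6 - 900.0 * (T - 2500.0) + rng.normal(0.0, 5e4, T.size)   # synthetic data, J/kg
sigma = 5e4                                           # assumed measurement noise

def log_like(pred):
    return -0.5 * np.sum(((y - pred) / sigma) ** 2) - y.size * np.log(sigma * np.sqrt(2.0 * np.pi))

a_grid = np.linspace(5.5e6, 6.8e6, 400)               # flat prior on the intercept a
da = a_grid[1] - a_grid[0]
log_prior = -np.log(a_grid[-1] - a_grid[0])

ll_m1 = np.array([log_like(np.full_like(T, a)) for a in a_grid])        # M1: h(T) = a
ll_m2 = np.array([log_like(a - 900.0 * (T - 2500.0)) for a in a_grid])  # M2: h(T) = a - b (T - 2500), b fixed
log_ev_m1 = logsumexp(ll_m1) + log_prior + np.log(da)
log_ev_m2 = logsumexp(ll_m2) + log_prior + np.log(da)
print("log Bayes factor ln(E2/E1):", log_ev_m2 - log_ev_m1)
```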

  11. Modeling premartensitic effects in Ni2MnGa: A mean-field and Monte Carlo simulation study

    DEFF Research Database (Denmark)

    Castan, T.; Vives, E.; Lindgård, Per-Anker

    1999-01-01

    The degenerate Blume-Emery-Griffiths model for martensitic transformations is extended by including both structural and magnetic degrees of freedom in order to elucidate premartensitic effects. Special attention is paid to the effect of the magnetoelastic coupling in Ni2MnGa. The microscopic model is constructed and justified based on the analysis of the experimentally observed strain variables and precursor phenomena. The description includes the (local) tetragonal distortion, the amplitude of the plane-modulating strain, and the magnetization. The model is solved by means of mean-field theory and Monte Carlo ... heat, not always associated with a true phase transition. The main conclusion is that premartensitic effects result from the interplay between the softness of the anomalous phonon driving the modulation and the magnetoelastic coupling. In particular, the premartensitic transition occurs when...

  12. Withholding stereotactic radiotherapy in elderly patients with stage I non-small cell lung cancer and co-existing COPD is not justified: Outcomes of a markov model analysis

    International Nuclear Information System (INIS)

    Louie, Alexander V.; Rodrigues, George; Hannouf, Malek; Lagerwaard, Frank; Palma, David; Zaric, Gregory S.; Haasbeek, Cornelis; Senan, Suresh

    2011-01-01

    Background and purpose: To model outcomes of SBRT versus best supportive care (BSC) in elderly COPD patients with stage I NSCLC. Material and methods: A Markov model was constructed to simulate quality-adjusted and overall survival (OS) in patients ≥75 years undergoing either SBRT or BSC over a five-year timeframe. SBRT rates of local, regional and distant recurrences were obtained from 247 patients treated at the VUMC, Amsterdam. Recurrence rates were converted into transition probabilities and stratified into four groups according to T stage (1, 2) and COPD GOLD score (I-II, III-IV). Data for untreated patients were obtained from the California Cancer Registry. Tumor stage and GOLD score utilities were adapted from the literature. Results: Our model correlated closely with the source OS data for SBRT-treated and untreated patients. After SBRT, our model predicted five-year OS of 6.8-47.2% and 14.9-27.4 quality-adjusted life months (QALMs). The model predicted five-year OS of 9.0% and 2.8%, and 10.1 and 6.1 QALMs, for untreated T1 and T2 patients, respectively. The benefit of SBRT was smallest for T2, GOLD III-IV patients. Conclusion: Our model indicates that SBRT should be considered in elderly stage I NSCLC patients with COPD.
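
    A Markov cohort model of this type advances a state-occupancy vector through monthly cycles and accumulates quality-adjusted life months from state utilities. The sketch below is a minimal three-state illustration; the states, transition probabilities and utilities are invented and are not the values used in the SBRT/BSC model described above.

```python
import numpy as np

# Minimal Markov cohort sketch: monthly cycles over five years, three states
# (stable, progressed, dead). All numbers below are invented for illustration.
months = 60
P = np.array([[0.96, 0.03, 0.01],      # monthly transition matrix (assumed)
              [0.00, 0.93, 0.07],
              [0.00, 0.00, 1.00]])
utility = np.array([0.75, 0.55, 0.0])  # quality weights per state (assumed)

cohort = np.array([1.0, 0.0, 0.0])     # everyone starts in the stable state
qalms = 0.0
for _ in range(months):
    cohort = cohort @ P                # advance the cohort one cycle
    qalms += float(cohort @ utility)   # quality-adjusted life months accrued this cycle

print(f"5-year overall survival: {1.0 - cohort[2]:.1%}")
print(f"expected QALMs over 5 years: {qalms:.1f}")
```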

  13. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  14. Solid-fluid characteristics at the blast furnace hearth according to the nodal wear model (NWM)

    International Nuclear Information System (INIS)

    Martin, R.; Barbes, M. A.; Barbes, M. F.; Marinas, E.; Ayala, N.; Mochon, J.; Verdeja, L. F.; Garcia, F.

    2009-01-01

    Coke porosity is one of the most important variables that can affect pig iron production and lining corrosion. Up to now, the existing literature on lining corrosion has always connected deeper wear to an increase in the fluid flow (pig iron) at the blast furnace hearth. However, there is no evidence of any deterministic model that could link, from the theoretical point of view, the following variables: lining corrosion, porosity of dead coke and flow of pig iron at the hearth. Besides justifying the lining corrosion profiles, the Nodal Wear Model (NWM) can be an effective instrument to interpret the coke porosity and the pig iron flow velocities generated inside the hearth. (Author) 23 refs

  15. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more about taste heterogeneity is a major study objective.

  16. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more about taste heterogeneity is a major study objective.

  17. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    Science.gov (United States)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry. PMID:25296776

  18. A viscoplastic model with plasticity for dry clay. Application to underground structures

    International Nuclear Information System (INIS)

    Tchiyep Piepi, G.

    1995-10-01

    Stiff clays are generally encountered at great depth (more than 300 m). These materials have a relatively low water content. Numerous industrial studies justify the recent interest in these materials. This work deals in particular with stiff clays that respond to stresses with elastic, plastic and viscoplastic deformations. The first part presents the experimental study and the modelling of the mechanical behavior of stiff clays: the materials considered are described, as well as the tests carried out; the results obtained are discussed and a viscoplastic model with rupture is elaborated. The second part deals with the development of an original semi-analytical solution and of an algorithm implemented in GEOMEC91. The third part shows the influence of the model on the tunnel convergence at the moment the support is installed and, consequently, on the stresses in the support. The calculation results show a strong influence of the short-term cohesion on the tunnel convergence. (O.M.)

  19. CEAI: CCM-based email authorship identification model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2013-11-01

    Full Text Available In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g., the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based content features. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. (2010, 2013) [1,2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors, respectively, on the Enron dataset, while 89.5% accuracy has been achieved on a real email dataset constructed by the authors. The results on the Enron dataset have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1,2].
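
    A minimal stylometric baseline in the spirit of the SVM models the paper compares against can be put together with scikit-learn. The sketch below maps each email to a few simple style features (length, capitalisation tendency, punctuation use) and cross-validates a linear SVM; the feature list, toy emails and labels are placeholders for a real corpus, and this is not the proposed CCM-based model.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Toy stylometric feature extractor; each feature is a simple proxy for the kinds
# of style cues mentioned in the abstract (assumed, not the paper's feature set).
def style_features(text):
    words = text.split()
    return [
        len(words),                                                     # email length in words
        sum(w[0].isupper() for w in words if w) / max(len(words), 1),   # capitalisation tendency
        text.count("!") + text.count("?"),                              # expressive punctuation
        1.0 if text.rstrip().endswith(("!", "?")) else 0.0,             # last punctuation mark used
        text.count(",") / max(len(words), 1),                           # comma density
    ]

emails = ["Hi team, please see the attached report!", "Regards, see you soon.",
          "URGENT: respond today!!", "as discussed, numbers look fine",
          "Thanks again for the quick turnaround!", "meeting moved to 3pm, ok?"]
authors = [0, 1, 0, 1, 0, 1]                                            # toy labels

X = np.array([style_features(e) for e in emails])
clf = LinearSVC(dual=False)
scores = cross_val_score(clf, X, authors, cv=3)
print("cross-validated accuracy:", scores.mean())
```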

  20. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates

  1. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Traditional linear models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
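
    The smoothing-plus-linear-model part of such a pipeline can be sketched with statsmodels. The snippet below applies a centred moving-average (MA) filter to a synthetic price series, fits an ARIMA model to the smoothed series and produces a 30-step-ahead forecast; it omits the partitioning and interpolation (PI) step and uses synthetic data rather than NSE stock prices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Sketch of MA-filter pre-processing followed by a linear ARIMA fit and a
# multi-step forecast (illustrative only; order and data are assumed).
rng = np.random.default_rng(1)
price = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 500)))   # synthetic stock-like series

window = 5
trend = price.rolling(window, center=True).mean().dropna()      # MA filter isolates the trend

model = ARIMA(trend, order=(2, 1, 1)).fit()                     # linear model on the smoothed series
forecast = model.forecast(steps=30)                             # 30-step-ahead prediction
print(forecast.tail())
```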

  2. FARIMA MODELING OF SOLAR FLARE ACTIVITY FROM EMPIRICAL TIME SERIES OF SOFT X-RAY SOLAR EMISSION

    International Nuclear Information System (INIS)

    Stanislavsky, A. A.; Burnecki, K.; Magdziarz, M.; Weron, A.; Weron, K.

    2009-01-01

    A time series of soft X-ray emission observed by the Geostationary Operational Environment Satellites from 1974 to 2007 is analyzed. We show that in the solar-maximum periods the energy distribution of soft X-ray solar flares for C, M, and X classes is well described by a fractional autoregressive integrated moving average model with Pareto noise. The model incorporates two effects detected in our empirical studies. One effect is a long-term dependence (long-term memory), and another corresponds to heavy-tailed distributions. The parameters of the model: self-similarity exponent H, tail index α, and memory parameter d are statistically stable enough during the periods 1977-1981, 1988-1992, 1999-2003. However, when the solar activity tends to minimum, the parameters vary. We discuss the possible causes of this evolution and suggest a statistically justified model for predicting the solar flare activity.

  3. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    Science.gov (United States)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, less than six SimplyP parameters must be determined through calibration, the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.

  4. A Scandinavian Public Transport Model? Reform Changes in Denmark, Sweden and Norway

    DEFF Research Database (Denmark)

    Hansson, Lisa; Lissandrello, Enza; Næss, Petter

    2017-01-01

    Scandinavian public transport, especially aspects of how the Scandinavian countries (i.e., Sweden, Denmark, and Norway) have created governing structures for a cohesive public transport system, is often cited positively in international research. Scandinavia is often treated as a homogeneous unit...... in public transport research, which sometimes refers to the “Scandinavian model of public transport”. It is not uncommon for conclusions regarding Scandinavian countries to be based on analyses of just one country. Is there actually such a thing as a Scandinavian model of public transport? All around Europe...... the public transport sector is changing, taking public transport governance in various directions. This paper provides an overview of the changes and similarities in public transport governance in Scandinavian countries from the 1970s to 2012, discussing whether it is justifiable to speak of a Scandinavian...

  5. Nonlinear Model Predictive Control for Oil Reservoirs Management

    DEFF Research Database (Denmark)

    Capolei, Andrea

    ... gradient-based optimization and the required gradients are computed by the adjoint method. We propose the use of efficient high order implicit time integration methods for the solution of the forward and the adjoint equations of the dynamical model. The Ensemble Kalman filter is used for data assimilation ... expensive gradient computation by using high-order ESDIRK (Explicit Singly Diagonally Implicit Runge-Kutta) temporal integration methods and continuous adjoints. The high order integration scheme allows larger time steps and therefore faster solution times. We compare gradient computation by the continuous ... equivalent strategy is not justified for the particular case studied in this paper. The third contribution of this thesis is a mean-variance method for risk mitigation in production optimization of oil reservoirs. We introduce a return-risk bicriterion objective function for the profit-risk tradeoff...

  6. Brand Cigarillos: Low Price but High Particulate Matter Levels-Is Their Favorable Taxation in the European Union Justified?

    Science.gov (United States)

    Wasel, Julia; Boll, Michael; Schulze, Michaela; Mueller, Daniel; Bundschuh, Matthias; Groneberg, David A; Gerber, Alexander

    2015-08-06

    taxation of cigarillos is not justifiable.

  7. Can High Bandwidth and Latency Justify Large Cache Blocks in Scalable Multiprocessors?

    Science.gov (United States)

    1994-01-01

    400 MB/second. Dubnicki's work used trace-driven simulation, with traces collected on an 8-processor machine. We would expect such small-scale ... [Figure 17: Miss rate of Ind Blocked LU. Figure 18: MCPR of Ind Blocked LU.] ... The overall miss rate of TGauss is a factor of ... This approach assumes that the model parameters we collect from simulations with infinite bandwidth (such as the miss rate and the...

  8. Falling chains as variable-mass systems: theoretical model and experimental analysis

    International Nuclear Information System (INIS)

    De Sousa, Célia A; Costa, Pedro; Gordo, Paulo M

    2012-01-01

    In this paper, we revisit, theoretically and experimentally, the fall of a folded U-chain and of a pile-chain. The model calculation implies the division of the whole system into two subsystems of variable mass, allowing us to explore the role of tensional contact forces at the boundary of the subsystems. This explains, for instance, why the free end of the folded U-chain falls with an acceleration greater than that due to gravity. This result, which matches the experimental data quite well independently of the type of chain, implies that the falling chain is well described by energy conservation. We verify that these conclusions do not hold for the pile-chain motion. (paper)
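
    A standard energy-conserving treatment of the folded U-chain, consistent with the claim above, gives v(x)^2 = g x (2L - x)/(L - x) for the speed of the free end after it has fallen a distance x from a chain of total length L, so the acceleration g + v^2/(2(L - x)) exceeds g. The sketch below integrates 1/v(x) to compare the fall time of the free end with free fall over the same distance; the chain length is an arbitrary assumed value and the formula is the textbook energy-conserving model, not a result taken from the paper's data.

```python
import numpy as np
from scipy.integrate import quad

# Energy-conserving falling folded U-chain: v(x)^2 = g x (2L - x) / (L - x).
# The fall time of the free end, T = integral of dx / v(x), is shorter than
# free fall through the same distance.
g, L = 9.81, 1.0                                   # chain length L is assumed

def v(x):
    return np.sqrt(g * x * (2 * L - x) / (L - x))

t_chain, _ = quad(lambda x: 1.0 / v(x), 0.0, L)    # fall time of the chain's free end
t_free = np.sqrt(2 * L / g)                        # free fall through the same distance
print(f"folded chain: {t_chain:.4f} s, free fall: {t_free:.4f} s")
```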

  9. Development and validation of an elastic and inelastic calculation method for tubes, based on beam models and taking into account the thermal stresses on the wall

    International Nuclear Information System (INIS)

    Krakowiak, C.

    1989-11-01

    A simplified model for the elastic-plastic calculation of thin, flexible tubes subjected to thermal stresses is presented. The method is based on beam models and provides satisfactory results concerning the displacement of the whole tube system. These results can be justified by the fact that the modifications of the tube cross sections (from circular to elliptical), the flexibility of the elbow joints and the radial temperature profile are included in the calculations. The thermoplasticity analysis is performed by defining independent and general flow directions and determining the corresponding behavior laws. The model is limited to monotonic proportional loading; however, the results obtained are promising [fr

  10. Uncertainty-based calibration and prediction with a stormwater surface accumulation-washoff model based on coverage of sampled Zn, Cu, Pb and Cd field data

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Ahlman, S.; Mikkelsen, Peter Steen

    2011-01-01

    allows identifying a range of behavioral model parameter sets. The small catchment size and nearness of the rain gauge justified excluding the hydrological model parameters from the uncertainty assessment. Uniform, closed prior distributions were heuristically specified for the dry and wet removal...... of accumulated metal available on the conceptual catchment surface. Forward Monte Carlo analysis based on the posterior parameter sets covered 95% of the observed event mean concentrations, and 95% prediction quantiles for site mean concentrations were estimated to 470 μg/l ±20% for Zn, 295 μg/l ±40% for Cu, 20...

  11. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN

  12. A simple model for the evolution of a non-Abelian cosmic string network

    Energy Technology Data Exchange (ETDEWEB)

    Cella, G. [Istituto Nazionale di Fisica Nucleare, sez. Pisa, Largo Bruno Pontecorvo 3, 56126 Pisa (Italy); Pieroni, M., E-mail: giancarlo.cella@pi.infn.it, E-mail: mauro.pieroni@apc.univ-paris7.fr [AstroParticule et Cosmologie, Université Paris Diderot, CNRS, CEA, Observatoire de Paris, Sorbonne Paris Cité, F-75205 Paris Cedex 13 (France)

    2016-06-01

    In this paper we present the results of numerical simulations intended to study the behavior of non-Abelian cosmic string networks. In particular we are interested in discussing the variations in the asymptotic behavior of the system as we vary the number of generators for the topological defects. A simple model which allows for cosmic strings is presented and its lattice discretization is discussed. The evolution of the generated cosmic string networks is then studied for different values of the number of generators of the topological defects. A scaling solution appears to be approached in most cases, and we present an argument to justify the lack of scaling in the remaining cases.

  13. A model for the inverse 1-median problem on trees under uncertain costs

    Directory of Open Access Journals (Sweden)

    Kien Trung Nguyen

    2016-01-01

    Full Text Available We consider the problem of justifying vertex weights of a tree under uncertain costs so that a prespecified vertex becomes optimal and the total cost is optimal under the uncertainty scenario. We propose a model which delivers information about the optimal cost with respect to each confidence level \(\alpha \in [0,1]\). To achieve this goal, we first define an uncertain variable with respect to the minimum cost at each confidence level. If all costs are independently linearly distributed, we present the inverse distribution function of this uncertain variable in \(O(n^{2}\log n)\) time, where \(n\) is the number of vertices in the tree.

  14. The role of mathematics in politics as an issue for mathematics teaching

    DEFF Research Database (Denmark)

    Sánchez, Mario; Blomhøj, Morten

    2010-01-01

    This paper presents analyses of some examples of mathematical models used in the Mexican society of today. We seek to justify why and illustrate how such examples can be included in mathematics teaching and in teacher education.

  15. Incoherent SSI Analysis of Reactor Building using 2007 Hard-Rock Coherency Model

    International Nuclear Information System (INIS)

    Kang, Joo-Hyung; Lee, Sang-Hoon

    2008-01-01

    Many strong earthquake recordings show the response motions at building foundations to be less intense than the corresponding free-field motions. To account for these phenomena, the concept of spatial variation, or wave incoherence, was introduced. Several approaches for its application to practical analysis and design as part of the soil-structure interaction (SSI) effect have been developed. However, conventional wave incoherency models did not reflect the characteristics of earthquake data from hard-rock sites, and their application to practical nuclear structures on hard-rock sites was not sufficiently justified. This paper focuses on the impact of the hard-rock coherency model proposed in 2007 on the incoherent SSI analysis results of a nuclear power plant (NPP) structure. A typical reactor building of a pressurized water reactor (PWR) type NPP is modeled with both surface and embedded foundations. The model is also assumed to be located on medium-hard rock and hard-rock sites. The SSI analysis results are obtained and compared for coherent and incoherent input motions. The structural responses considering rocking and torsion effects are also investigated

  16. Applicability of Socio-Technical Model (STM in Working System of Modern Organizations

    Directory of Open Access Journals (Sweden)

    Rosmaini Tasmin

    2011-10-01

    Full Text Available Knowledge has been identified as one of the most important resources in an organization that contributes to competitive advantage. Organizations around the world realize and put into practice an approach based on technological and sociological aspects to fill the gaps in their workplaces. The Socio-Technical Model (STM) is an established organizational model introduced by Trist in the 1960s at the Tavistock Institute, London. It relates the two most common components that have existed in all organizations over many decades, namely social systems (humans) and technological systems (information technology, machinery and equipment). This paper reviews the socio-technical model from various perspectives on its developmental stages and the ideas written by researchers. Several literature reviews on the socio-technical model have been compiled and discussed to justify whether its basic argument matches the required practices in techno-social environments. Through a socio-technical perspective on Knowledge Management, this paper highlights the interplay between social systems and technological systems. It also suggests that management and leadership play critical roles in establishing the techno-social perspective for the effective assimilation of Knowledge Management practices.

  17. Compartmental modeling and tracer kinetics

    CERN Document Server

    Anderson, David H

    1983-01-01

    This monograph is concerned with mathematical aspects of compartmental analysis. In particular, linear models are closely analyzed since they are fully justifiable as an investigative tool in tracer experiments. The objective of the monograph is to bring the reader up to date on some of the current mathematical problems of interest in compartmental analysis. This is accomplished by reviewing mathematical developments in the literature, especially over the last 10-15 years, and by presenting some new thoughts and directions for future mathematical research. These notes started as a series of lectures that I gave while visiting with the Division of Applied Mathematics, Brown University, 1979, and have developed into this collection of articles aimed at the reader with a beginning graduate level background in mathematics. The text can be used as a self-paced reading course. With this in mind, exercises have been appropriately placed throughout the notes. As an aid in reading the material, the end of a ...

  18. Examining DIF in the Context of CDMs When the Q-Matrix Is Misspecified

    Directory of Open Access Journals (Sweden)

    Dubravka Svetina

    2018-05-01

    Full Text Available The rise in popularity and use of cognitive diagnostic models (CDMs) in educational research is partly motivated by the models’ ability to provide diagnostic information regarding students’ strengths and weaknesses in a variety of content areas. An important step to ensure appropriate interpretations from CDMs is to investigate differential item functioning (DIF). To this end, the current simulation study examined the performance of three methods to detect DIF in CDMs, with particular emphasis on the impact of Q-matrix misspecification on the methods’ performance. Results illustrated that logistic regression and Mantel–Haenszel had better control of Type I error than the Wald test; however, high power rates were found only for the logistic regression and Wald methods. In addition to the tradeoff between Type I error control and acceptable power, our results suggested that Q-matrix complexity and item structures yield different results for different methods, presenting a more complex picture of the methods’ performance. Finally, implications and future directions are discussed.

  19. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    Science.gov (United States)

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
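
    The stratified one-sample log-rank statistic underlying such a design pools, across strata, the observed event count O_k against the expected count E_k obtained by evaluating the stratum's cumulative null hazard at each patient's follow-up time, giving Z = (sum O_k - sum E_k)/sqrt(sum E_k). The sketch below computes this for two invented strata with exponential null survival; the null medians, follow-up times and event indicators are placeholders, and the snippet shows only the test statistic, not the paper's sample size method.

```python
import numpy as np
from scipy.stats import norm

# Stratified one-sample log-rank sketch with exponential null survival per stratum.
# All data and null medians below are invented for illustration.
strata = {
    "good_prognosis": dict(median0=24.0,                       # assumed null median (months)
                           times=np.array([30.0, 18.0, 26.0, 12.0, 36.0]),
                           events=np.array([0, 1, 0, 1, 0])),
    "poor_prognosis": dict(median0=12.0,
                           times=np.array([6.0, 10.0, 15.0, 8.0]),
                           events=np.array([1, 1, 0, 1])),
}

O = E = 0.0
for s in strata.values():
    hazard = np.log(2.0) / s["median0"]          # exponential null hazard for this stratum
    E += np.sum(hazard * s["times"])             # cumulative null hazard at follow-up times
    O += np.sum(s["events"])                     # observed events

z = (O - E) / np.sqrt(E)
p = 2.0 * norm.sf(abs(z))                        # two-sided p-value
print(f"O = {O:.0f}, E = {E:.2f}, Z = {z:.2f}, p = {p:.3f}")
```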

  20. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2 1/2 dimensional flat-earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations ((spherical - flat)/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than ±5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.

  1. Justifying Objective Bayesianism on Predicate Languages

    Directory of Open Access Journals (Sweden)

    Jürgen Landes

    2015-04-01

    Full Text Available Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
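
    The maximum entropy principle invoked here can be illustrated on a finite outcome space: among all distributions satisfying the available evidence (say, a known expectation), the maximally equivocal one has an exponential-family form whose multiplier can be solved for numerically. The toy example below does this for a die with a prescribed mean; the constraint is an arbitrary illustration, whereas the paper works with first-order predicate languages.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum entropy over die faces 1..6 subject to E[X] = target_mean: the maxent
# distribution has p_i proportional to exp(lam * i); solve for the multiplier lam.
faces = np.arange(1, 7)
target_mean = 4.5                                  # assumed evidence: E[X] = 4.5

def mean_given(lam):
    w = np.exp(lam * faces)
    return float(np.dot(faces, w) / w.sum())

lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)   # calibrate to the evidence
p = np.exp(lam * faces)
p /= p.sum()
entropy = -np.sum(p * np.log(p))
print("maxent distribution:", np.round(p, 3), " entropy:", round(entropy, 3))
```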

  2. Are segregated sports classes scientifically justified?

    OpenAIRE

    Lawson, Sian; Hall, Edward

    2014-01-01

    School sports classes are a key part of physical and mental development, yet in many countries these classes are gender segregated. Before institutionalised segregation can be condoned it is important to tackle assumptions and check for an evidence-based rationale. This presentation aims to analyse the key arguments for segregation given in comment-form response to a recent media article discussing mixed school sports (Lawson, 2013). The primary argument given was division for strength...

  3. Globalization and Employment: Is Anxiety Justified?

    Science.gov (United States)

    Lee, Eddy

    1996-01-01

    Despite concerns that globalization will increase unemployment and wage inequality, drive down wages and labor standards, and threaten national policy autonomy, it is clear that national policies still determine employment levels and labor standards. However, the need to protect those damaged by globalization still exists. (SK)

  4. Is the Pro-Network Bias Justified?

    Directory of Open Access Journals (Sweden)

    Rafael Pardo

    2013-07-01

    The academic literature, policy makers, and international organizations often emphasize the value of networks that, allegedly, may contribute to subcontractor upgrading, innovation, and economic welfare. By contrast, it is difficult to assess whether engagement in production outsourcing networks also accrues some advantages to outsourcers (contractors). To research differences between these organizations and vertically integrated organizations, we analyzed a sample of 1,031 industrial plants, statistically representative of firms with more than 50 employees in Spain's manufacturing industry. We used t-tests, nonparametric tests, and chi-square tests, and hypotheses were tested for three subsets of companies, classified by the R&D intensity of the industry. In each set of industries, subcontracting is systematically associated with small batch production. By contrast, vertically integrated plants are more inclined to use mass production. In every type of industry, subcontracting is a form of governance that is especially efficient for the diffusion of new technology. Plants that subcontract production are more likely than integrated plants to adopt advanced manufacturing technology, whatever the R&D intensity of the industry. We conclude that outsourcers seem better prepared than vertically integrated organizations to meet customers' requirements, but the use of subcontracting does not necessarily lower their technology needs, contrary to a widespread "pro-network" argument.

  5. Integrating prior knowledge in multiple testing under dependence with applications to detecting differential DNA methylation.

    Science.gov (United States)

    Kuan, Pei Fen; Chiang, Derek Y

    2012-09-01

    DNA methylation has emerged as an important hallmark of epigenetics. Numerous platforms, including tiling arrays and next generation sequencing, and experimental protocols are available for profiling DNA methylation. Similar to other tiling array data, DNA methylation data share the characteristic of inherent correlation structure among nearby probes. However, unlike gene expression or protein-DNA binding data, the varying CpG density, which gives rise to the CpG island, shore and shelf definitions, provides exogenous information for detecting differential methylation. This article aims to introduce a robust testing and probe ranking procedure based on a nonhomogeneous hidden Markov model that incorporates the above-mentioned features for detecting differential methylation. We revisit the seminal work of Sun and Cai (2009, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71, 393-424) and propose modeling the nonnull using a nonparametric symmetric distribution in two-sided hypothesis testing. We show that this model improves probe ranking and is robust to model misspecification based on extensive simulation studies. We further illustrate that our proposed framework achieves good operating characteristics as compared to commonly used methods on real DNA methylation data where the aim is to detect differentially methylated sites. © 2012, The International Biometric Society.
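
    As a stripped-down illustration of ranking probes by local false discovery rate (the quantity underlying the procedure above), the sketch below uses a plain two-group mixture with a kernel estimate of the marginal density; it ignores the hidden-Markov dependence and CpG-density covariates that the paper's method incorporates, and the null proportion is fixed rather than estimated.

    ```python
    # Illustrative two-group local false discovery rate (lfdr) ranking of probes:
    # lfdr(z) is the posterior probability that a probe is null given its z-score.
    import numpy as np
    from scipy.stats import norm, gaussian_kde

    rng = np.random.default_rng(2)
    null_z = rng.normal(size=9000)                      # non-differential probes
    alt_z = rng.normal(loc=0.0, scale=3.0, size=1000)   # differential, symmetric nonnull
    z = np.concatenate([null_z, alt_z])

    pi0 = 0.9                      # assumed null proportion (would be estimated in practice)
    f = gaussian_kde(z)            # kernel estimate of the marginal density
    lfdr = np.clip(pi0 * norm.pdf(z) / f(z), 0.0, 1.0)

    ranking = np.argsort(lfdr)     # most promising probes first
    print("top 5 probe indices:", ranking[:5])
    ```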

  6. Discrimination measures for survival outcomes: connection between the AUC and the predictiveness curve.

    Science.gov (United States)

    Viallon, Vivian; Latouche, Aurélien

    2011-03-01

    Finding biomarkers and building risk scores to predict the occurrence of survival outcomes is a major concern of clinical epidemiology, and so is the evaluation of prognostic models. In this paper, we are concerned with the estimation of the time-dependent AUC (area under the receiver operating characteristic curve), which naturally extends the standard AUC to the setting of survival outcomes and makes it possible to evaluate the discriminative power of prognostic models. We establish a simple and useful relation between the predictiveness curve and the time-dependent AUC, denoted AUC(t). This relation confirms that the predictiveness curve is the key concept for evaluating the calibration and discrimination of prognostic models. It also highlights that accurate estimates of the conditional absolute risk function should yield accurate estimates of AUC(t). From this observation, we derive several estimators of AUC(t) relying on distinct estimators of the conditional absolute risk function. An empirical study was conducted to compare our estimators with existing ones and to assess the effect of model misspecification, when estimating the conditional absolute risk function, on AUC(t) estimation. We further illustrate the methodology on the Mayo PBC and the VA lung cancer data sets. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
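
    To fix ideas about what AUC(t) measures, the sketch below computes a naive cumulative/dynamic time-dependent AUC at a single horizon: cases are subjects with an event by time t, controls are those still event-free at t, and AUC(t) is the probability that a case's risk score exceeds a control's. Censoring is ignored here for simplicity, which the estimators compared in the paper must of course handle; the data and score are simulated.

    ```python
    # Naive cumulative/dynamic AUC(t) for a risk score, ignoring censoring.
    import numpy as np

    def auc_t(score, event_time, t):
        cases = score[event_time <= t]      # events by time t
        controls = score[event_time > t]    # still event-free at t
        if len(cases) == 0 or len(controls) == 0:
            return np.nan
        diff = cases[:, None] - controls[None, :]
        return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    rng = np.random.default_rng(3)
    n = 500
    risk = rng.normal(size=n)                               # hypothetical prognostic score
    event_time = rng.exponential(1.0 / np.exp(0.8 * risk))  # higher risk -> earlier events
    print(f"AUC(t=1) = {auc_t(risk, event_time, 1.0):.3f}")
    ```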

  7. Model features as the basis for individualizing the preparation of boxers at the highest (elite) level

    Directory of Open Access Journals (Sweden)

    O.J. Pavelec

    2013-10-01

    Purpose: to improve the system of training of top-level (elite) boxers by individualizing the training process using model characteristics of special physical preparedness. Materials: the study was conducted during 2000-2010 with 43 boxers of the national team of Ukraine, among them 6 honoured masters of sport, 16 masters of sport of international class and 21 masters of sport; the average age of the athletes was 23.5 years. Results: model characteristics of the special physical preparedness of high-class boxers were developed and justified. It was established that boxers in the middle weight classes (64-75 kg) have an advantage over boxers in the lighter and heavier weight categories in the development of speed and strength endurance. The presented model characteristics can guide the special physical preparation of elite boxers. Conclusions: the structure of the special physical preparation of boxers depends on many components, such as weight category, tactical role, skill level and stage of preparation.

  8. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated.
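
    A highly simplified sketch of the underlying idea, constraining a stochastic property simulation with stratigraphic "soft" information, is given below: the unit label at each unsampled cell selects the unit-specific distribution from which the property is drawn. The unit geometry, expected values and spreads are invented, and the sketch ignores the spatial correlation and data conditioning that a GSLIB-style sequential simulation would honour.

    ```python
    # Toy illustration of constraining a property simulation with a stratigraphic
    # framework: each cell's unit label picks the unit-specific expected value and
    # spread used to draw the property.
    import numpy as np

    rng = np.random.default_rng(4)
    # Hypothetical 1-D column of 20 cells assigned to three stratigraphic units.
    unit = np.array([0] * 7 + [1] * 8 + [2] * 5)
    unit_mean = {0: 0.12, 1: 0.25, 2: 0.08}   # e.g. porosity expected values per unit
    unit_sd = {0: 0.02, 1: 0.04, 2: 0.01}

    porosity = np.array([rng.normal(unit_mean[u], unit_sd[u]) for u in unit])
    print(np.round(porosity, 3))
    ```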

  9. Modeling learning technology systems as business systems

    NARCIS (Netherlands)

    Avgeriou, Paris; Retalis, Symeon; Papaspyrou, Nikolaos

    2003-01-01

    The design of Learning Technology Systems, and the Software Systems that support them, is largely conducted on an intuitive, ad hoc basis, thus resulting in inefficient systems that defectively support the learning process. There is now justifiable, increasing effort in formalizing the engineering

  10. A dynamic P53-MDM2 model with time delay

    Energy Technology Data Exchange (ETDEWEB)

    Mihalas, Gh.I. [Department of Biophysics and Medical Informatics, University of Medicine and Pharmacy, Piata Eftimie Murgu, nr. 3, 300041 Timisoara (Romania)]. E-mail: mihalas@medinfo.umft.ro; Neamtu, M. [Department of Forecasting, Economic Analysis, Mathematics and Statistics, West University of Timisoara, Str. Pestalozzi, nr. 14A, 300115 Timisoara (Romania)]. E-mail: mihaela.neamtu@fse.uvt.ro; Opris, D. [Department of Applied Mathematics, West University of Timisoara, Bd. V. Parvan, nr. 4, 300223 Timisoara (Romania)]. E-mail: opris@math.uvt.ro; Horhat, R.F. [Department of Biophysics and Medical Informatics, University of Medicine and Pharmacy, Piata Eftimie Murgu, nr. 3, 300041 Timisoara (Romania)]. E-mail: rhorhat@yahoo.com

    2006-11-15

    Specific activator and repressor transcription factors, which bind to specific regulator DNA sequences, play an important role in gene activity control. Interactions between genes coding for such transcription factors should explain the different stable, or sometimes oscillatory, gene activities characteristic of different tissues. Starting with the P53-MDM2 model described in [Mihalas GI, Simon Z, Balea G, Popa E. Possible oscillatory behaviour in P53-MDM2 interaction computer simulation. J Biol Syst 2000;8(1):21-9] and the process described in [Kohn KW, Pommier Y. Molecular interaction map of P53 and MDM2 logic elements, which control the off-on switch of P53 in response to DNA damage. Biochem Biophys Res Commun 2005;331:816-27], we developed a new model of this interaction. Choosing the delay as a bifurcation parameter, we study the direction and stability of the bifurcating periodic solutions. Some numerical examples are finally given to justify the theoretical results.
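
    To make the role of the delay concrete, here is a minimal sketch of simulating a delayed negative-feedback loop of the p53-MDM2 type with a fixed-step Euler scheme; the equations and parameter values are illustrative stand-ins, not the model of the paper.

    ```python
    # Minimal delayed negative-feedback loop: p53 activates MDM2 after a delay tau,
    # and MDM2 degrades p53.  A sufficiently large tau can destabilise the steady
    # state and produce sustained oscillations, the behaviour studied via Hopf
    # bifurcation in the paper.
    import numpy as np

    def simulate(tau, t_end=200.0, dt=0.01):
        n = int(t_end / dt)
        lag = int(tau / dt)
        p = np.ones(n)          # p53 level
        m = np.ones(n)          # MDM2 level
        for i in range(1, n):
            p_delayed = p[i - 1 - lag] if i - 1 - lag >= 0 else p[0]  # constant history
            dp = 1.0 - 1.5 * m[i - 1] * p[i - 1] / (0.5 + p[i - 1])   # production - MDM2-mediated decay
            dm = 1.2 * p_delayed - 1.0 * m[i - 1]                     # delayed activation - turnover
            p[i] = p[i - 1] + dt * dp
            m[i] = m[i - 1] + dt * dm
        return p, m

    for tau in (0.5, 4.0):
        p, _ = simulate(tau)
        swing = p[-5000:].max() - p[-5000:].min()   # late-time oscillation amplitude
        print(f"tau = {tau}: late-time p53 swing = {swing:.3f}")
    ```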

  11. A dynamic P53-MDM2 model with time delay

    International Nuclear Information System (INIS)

    Mihalas, Gh.I.; Neamtu, M.; Opris, D.; Horhat, R.F.

    2006-01-01

    Specific activator and repressor transcription factors, which bind to specific regulator DNA sequences, play an important role in gene activity control. Interactions between genes coding for such transcription factors should explain the different stable, or sometimes oscillatory, gene activities characteristic of different tissues. Starting with the P53-MDM2 model described in [Mihalas GI, Simon Z, Balea G, Popa E. Possible oscillatory behaviour in P53-MDM2 interaction computer simulation. J Biol Syst 2000;8(1):21-9] and the process described in [Kohn KW, Pommier Y. Molecular interaction map of P53 and MDM2 logic elements, which control the off-on switch of P53 in response to DNA damage. Biochem Biophys Res Commun 2005;331:816-27], we developed a new model of this interaction. Choosing the delay as a bifurcation parameter, we study the direction and stability of the bifurcating periodic solutions. Some numerical examples are finally given to justify the theoretical results.

  12. Bifurcation and category learning in network models of oscillating cortex

    Science.gov (United States)

    Baird, Bill

    1990-06-01

    A generic model of oscillating cortex, which assumes "minimal" coupling justified by known anatomy, is shown to function as an associative memory, using previously developed theory. The network has explicit excitatory neurons with local inhibitory interneuron feedback that forms a set of nonlinear oscillators coupled only by long-range excitatory connections. Using a local Hebb-like learning rule for primary and higher-order synapses at the ends of the long-range connections, the system learns to store the kinds of oscillation amplitude patterns observed in olfactory and visual cortex. In olfaction, these patterns "emerge" during respiration through a pattern-forming phase transition which we characterize in the model as a multiple Hopf bifurcation. We argue that these bifurcations play an important role in the operation of real digital computers and neural networks, and we use bifurcation theory to derive learning rules which analytically guarantee content-addressable memory (CAM) storage of continuous periodic sequences, with a capacity of N/2 Fourier components for an N-node network and no "spurious" attractors.
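
    A minimal Hopfield-style sketch of Hebbian outer-product storage and recall is given below to make the associative-memory idea concrete. It is a static caricature only: the paper's network stores patterns of oscillation amplitudes in coupled nonlinear oscillators, which this sketch does not attempt to model.

    ```python
    # Hebbian outer-product storage of amplitude patterns and attractor recall.
    import numpy as np

    rng = np.random.default_rng(5)
    N, P = 64, 4
    patterns = rng.choice([-1.0, 1.0], size=(P, N))   # stored amplitude patterns
    W = (patterns.T @ patterns) / N                   # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)

    probe = patterns[0].copy()
    flip = rng.choice(N, size=10, replace=False)      # corrupt 10 of 64 units
    probe[flip] *= -1.0

    state = probe
    for _ in range(20):                               # synchronous recall dynamics
        state = np.sign(W @ state)
        state[state == 0] = 1.0

    print("overlap with stored pattern:", float(state @ patterns[0]) / N)
    ```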

  13. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparing the results with those of SPICE simulations.
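
    As a simple illustration of using a distribution function as a timing metric (not the paper's calibrated metric), the sketch below treats a Burr type XII CDF as a normalized step response and reads off the 50% delay and the 10-90% slew by numerical root finding; the shape parameters are arbitrary rather than fitted to interconnect moments.

    ```python
    # Treat a Burr type XII CDF as a normalized step response and extract the
    # 50% delay and 10%-90% slew by root finding.  Parameters are arbitrary.
    import numpy as np
    from scipy.optimize import brentq

    def burr_cdf(t, c, k, scale=1.0):
        """Burr XII CDF: F(t) = 1 - (1 + (t/scale)**c) ** (-k) for t >= 0."""
        t = np.maximum(t, 0.0)
        return 1.0 - (1.0 + (t / scale) ** c) ** (-k)

    def crossing_time(level, c, k, scale=1.0):
        return brentq(lambda t: burr_cdf(t, c, k, scale) - level, 1e-9, 1e3)

    c, k = 2.0, 1.5          # arbitrary shape parameters
    t50 = crossing_time(0.5, c, k)
    slew = crossing_time(0.9, c, k) - crossing_time(0.1, c, k)
    print(f"50% delay = {t50:.4f}, 10-90% slew = {slew:.4f}")
    ```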

  14. An Empirical Assessment of a Technology Acceptance Model for Apps in Medical Education.

    Science.gov (United States)

    Briz-Ponce, Laura; García-Peñalvo, Francisco José

    2015-11-01

    The evolution and growth of mobile applications ("apps") in our society is a reality. This general trend is still upward, and the use of "apps" has also penetrated the medical education community. However, little is known about students' and professionals' points of view on introducing "apps" into the Medical School curriculum. The aim of this research is to design, implement and verify that the Technology Acceptance Model (TAM) can be employed to measure and explain the acceptance of mobile technology and "apps" within Medical Education. The methodology was based on a survey distributed to students and medical professionals from the University of Salamanca. The model explains 46.7% of behavioral intention to use mobile devices or "apps" for learning and will help us to justify and understand the current situation of introducing "apps" into the Medical School curriculum.

  15. Is the gravity effect of radiographic anatomic features enough to justify stone clearance or fragments retention following extracorporeal shock wave lithotripsy (SWL).

    Science.gov (United States)

    Mustafa, Mahmoud

    2012-08-01

    We determined whether the gravity effect of radiographic anatomic features on preoperative urography (IVP) is enough to predict fragment clearance after shock wave lithotripsy (SWL). A total of 282 patients with a mean age of 45.8 ± 13.2 years (189 male, 93 female), who underwent SWL for renal calculi between October 2005 and August 2009, were enrolled. The mean calculi load was 155.72 ± 127.66 mm². The patients were stratified into three groups: patients with renal pelvis calculi (group 1); patients with upper or middle pole calculi (group 2); and patients with lower pole calculi (group 3). Three angles on the pretreatment IVP were measured: the inner angle between the lower pole infundibular axis and the ureteropelvic axis (angle I); the inner angle between the lower pole infundibular axis and the main axis of the pelvis at the ureteropelvic (UP) junction point (angle II); and the inner angle between the lower pole infundibular axis and the perpendicular line (angle III). Multivariate analysis was used to define the significant predictors of stone clearance. The overall success rate was 85.81%. All angles, the number of sessions, the number of shock waves and the stone burden were significant predictors of success in group 1. However, in group 2 only angle II, and in group 3 angles I and II, had a significant effect on stone clearance. Radiographic anatomic features have a significant role in determining the stone-free rate following satisfactory fragmentation of renal stones with SWL. Measuring the infundibulopelvic angle in different ways helps to predict the stone-free status in patients with renal calculi located not only in the lower pole, but also in the renal pelvis and the upper or middle pole. The gravity effect alone is not enough to explain the significant influence of radiographic anatomic features on stone clearance and fragment retention after SWL.

  16. Introduction into a two-dimensional model of the photochemistry of the stratosphere of precipitations of galactic and solar protons: case of the present terrestrial magnetic field and of field reversal

    International Nuclear Information System (INIS)

    Brard, D.

    1982-11-01

    With the aim of studying climatic variations related to the reversal of the geomagnetic field, an analysis has been made of the effects of precipitations of galactic and solar protons on nitrogen oxides (NOx and NO) and ozone. Modifications which take into account the structure of the magnetic field are introduced into the one- and two-dimensional models. In situ measurements after the solar event of August 1972 enable changes due to the solar cycles to be introduced and the use of a 2D model to be justified. [fr]

  17. A simple kinematic model for the Lagrangian description of relevant nonlinear processes in the stratospheric polar vortex

    Directory of Open Access Journals (Sweden)

    V. J. García-Garrido

    2017-06-01

    In this work, we study the Lagrangian footprint of the planetary waves present in the Southern Hemisphere stratosphere during the exceptional sudden stratospheric warming event that took place in September 2002. Our focus is on constructing a simple kinematic model that retains the fundamental mechanisms responsible for the complex fluid parcel evolution during the polar vortex breakdown and its preceding stages. The construction of the kinematic model is guided by the Fourier decomposition of the geopotential field. The study of Lagrangian transport phenomena in the ERA-Interim reanalysis data highlights hyperbolic trajectories, the Lagrangian objects that provide the kinematic mechanism for the observed filamentation phenomena. Our analysis shows that the breaking and splitting of the polar vortex is explained in our model by the sudden growth of a planetary wave and the decay of the axisymmetric flow.
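
    To illustrate the kind of kinematic model meant here (this is not the authors' fit to the geopotential field), the sketch below advects fluid parcels in a streamfunction composed of an axisymmetric vortex plus a single travelling wave; the amplitudes, wavenumber and frequency are invented.

    ```python
    # Toy kinematic model: parcels advected by a streamfunction made of an
    # axisymmetric polar vortex plus one travelling planetary-wave component.
    # Velocities follow u = -dpsi/dy, v = dpsi/dx.
    import numpy as np

    A_VORTEX, A_WAVE, K, OMEGA = 1.0, 0.6, 1.0, 0.3

    def velocity(x, y, t):
        r2 = x**2 + y**2
        # axisymmetric part: psi_0 = -A_VORTEX * exp(-r2 / 2)
        u0 = -A_VORTEX * y * np.exp(-r2 / 2.0)
        v0 = A_VORTEX * x * np.exp(-r2 / 2.0)
        # wave part: psi_1 = A_WAVE * sin(K*x - OMEGA*t) * exp(-y**2)
        u1 = 2.0 * y * A_WAVE * np.sin(K * x - OMEGA * t) * np.exp(-y**2)
        v1 = A_WAVE * K * np.cos(K * x - OMEGA * t) * np.exp(-y**2)
        return u0 + u1, v0 + v1

    def advect(x, y, t_end=50.0, dt=0.05):
        for i in range(int(t_end / dt)):
            u, v = velocity(x, y, i * dt)   # forward Euler advection
            x, y = x + dt * u, y + dt * v
        return x, y

    x0 = np.linspace(-1.5, 1.5, 7)
    y0 = np.zeros_like(x0)
    xf, yf = advect(x0.copy(), y0.copy())
    print(np.round(np.c_[xf, yf], 2))
    ```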

  18. A Review of Models for Dose Assessment Employed by SKB in the Renewed Safety Assessment for SFR 1

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, George [Imperial College of Science Technology and Medicine (United Kingdom)

    2002-09-01

    This document provides a critical review, on behalf of SSI, of the models employed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for dose assessment in the renewed safety assessment for the final repository for radioactive operational waste (SFR 1) in Forsmark, Sweden. The main objective of the review is to examine the models used by SKB for radiological dose assessment in a series of evolving biotopes in the vicinity of the Forsmark repository within a time frame beginning in 3000 AD and extending beyond 7500 AD. Five biosphere models (for coasts, lakes, agriculture, mires and wells) are described in Report TR-01-04. The principal consideration of the review is to determine whether these models are fit for the purpose of dose evaluation over the time frames involved and in the evolving sequence of biotopes specified. As well as providing general observations and comments on the modelling approach taken, six specific questions are addressed, as follows. Are the assumptions underlying the models justifiable? Are all reasonably foreseeable environmental processes considered? Has parameter uncertainty been sufficiently and reasonably addressed? Have sufficient models been used to address all reasonably foreseeable biotopes? Are the transitions between biotopes modelled adequately (specifically, are initial conditions for developing biotopes adequately specified by calculations for subsiding biotopes)? Have all critical radionuclides been identified? It is concluded that, in general, the assumptions underlying most of the models are justifiable. The exceptions are a) the rather simplistic approach taken in the Coastal Model and b) the lack of consideration of wild foods and age-dependence when calculating exposures of humans to radionuclides via dietary pathways. Most foreseeable processes appear to have been accounted for within the constraints of the models used, although it is recommended that attention be paid to future climate states when considering

  19. Electromechanical modelling of tapered ionic polymer metal composites transducers

    Directory of Open Access Journals (Sweden)

    Rakesha Chandra Dash

    2016-09-01

    Ionic polymer metal composites (IPMCs) are relatively new smart materials that exhibit a bidirectional electromechanical coupling. IPMCs have a large number of important engineering applications, such as micro robotics, biomedical devices and biomimetic robotics. This paper presents a comparison between tapered and uniform cantilevered Nafion-based IPMC transducers. Electromechanical modelling is carried out for the tapered beam, whose thickness can be varied according to the force and deflection requirements. Numerical results pertaining to the force and deflection characteristics of both types of IPMC transducer are obtained. It is shown that the desired force and deflection for tapered IPMCs can be achieved for a given voltage. Different fixed-end (t0) and free-end (t1) thickness values have been taken to justify the results using MATLAB.
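
    As a rough mechanical analogue of the tapered-beam calculation (this is not the paper's electromechanical model, which couples the applied voltage to the induced bending), the sketch below integrates the curvature of a cantilever whose thickness tapers linearly from t0 to t1 to obtain the tip deflection under an assumed uniform actuation moment; every parameter value is invented.

    ```python
    # Tip deflection of a cantilever with thickness tapering linearly from t0
    # (fixed end) to t1 (free end), under an assumed uniform bending moment M
    # standing in for the voltage-induced actuation.  Small-deflection beam
    # theory: curvature kappa = M / (E * I(x)), integrated twice from the clamp.
    import numpy as np

    E = 0.25e9                 # Young's modulus of the composite, Pa (illustrative)
    L = 0.03                   # beam length, m
    w = 0.005                  # beam width, m
    t0, t1 = 0.4e-3, 0.2e-3    # fixed-end and free-end thickness, m
    M = 2.0e-5                 # assumed uniform actuation moment, N*m

    x = np.linspace(0.0, L, 2001)
    t = t0 + (t1 - t0) * x / L          # linear taper
    I = w * t**3 / 12.0                 # second moment of area along the beam
    kappa = M / (E * I)                 # curvature
    theta = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))])
    defl = np.concatenate([[0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(x))])
    print(f"tip deflection = {defl[-1] * 1000:.2f} mm")
    ```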

  20. Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7

    Science.gov (United States)

    Walker, R.

    1984-12-01

    The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model, using mechanical and structural considerations rather than purely acoustic absorption criteria, are described and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. The results also show that, at least for the relatively well-understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large-scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.