WorldWideScience

Sample records for bivariate probit model

  1. Semiparametric probit models with univariate and bivariate current-status data.

    Science.gov (United States)

    Liu, Hao; Qin, Jing

    2018-03-01

    Multivariate current-status data are frequently encountered in biomedical and public health studies. Semiparametric regression models have been extensively studied for univariate current-status data, but most existing estimation procedures are computationally intensive, involving either penalization or smoothing techniques. The analysis of multivariate current-status data is even more challenging. In this article, we study maximum likelihood estimation for univariate and bivariate current-status data under semiparametric probit regression models. We present a simple computational procedure combining the expectation-maximization algorithm with the pool-adjacent-violators algorithm for handling the monotonicity constraint on the baseline function. Asymptotic properties of the maximum likelihood estimators are investigated, including the calculation of the explicit information bound for univariate current-status data, as well as the asymptotic consistency and convergence rate for bivariate current-status data. Extensive simulation studies showed that the proposed computational procedures performed well under small or moderate sample sizes. We demonstrate the estimation procedure with two real data examples from diabetes and HIV research. © 2017, The International Biometric Society.
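    The pool-adjacent-violators algorithm (PAVA) mentioned in this abstract enforces a monotone fit by merging adjacent blocks that violate the ordering. A generic sketch follows (not the authors' implementation, which couples PAVA with an EM step):

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of a
    non-decreasing sequence to y (e.g. a candidate baseline function
    evaluated at ordered examination times)."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [pooled value, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # merge backwards while the monotone constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return out
```

    Each merge replaces a violating pair of blocks with their weighted mean, so the output is the closest non-decreasing sequence in weighted least squares.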

  2. The Effect of Supplemental Instruction on Retention: A Bivariate Probit Model

    Science.gov (United States)

    Bowles, Tyler J.; Jones, Jason

    2004-01-01

    Single-equation regression models have been used to test the effect of Supplemental Instruction (SI) on student retention. These models, however, fail to account for two salient features of SI attendance and retention: (1) both are categorical variables, and (2) both are jointly determined endogenous variables. Adopting…

  3. Evaluation of Factors Affecting E-Bike Involved Crash and E-Bike License Plate Use in China Using a Bivariate Probit Model

    OpenAIRE

    Guo, Yanyong; Zhou, Jibiao; Wu, Yao; Chen, Jingxu

    2017-01-01

    The primary objective of this study is to evaluate factors affecting e-bike-involved crashes and license plate use in China. E-bike crash data were collected from a police database and completed through telephone interviews. Non-crash samples were collected by a questionnaire survey. A bivariate probit (BP) model was developed to simultaneously examine the significant factors associated with e-bike-involved crashes and e-bike license plate use and to account for the correlations between them. Margina...

  4. Effectiveness of enforcement levels of speed limit and drink driving laws and associated factors – Exploratory empirical analysis using a bivariate ordered probit model

    Directory of Open Access Journals (Sweden)

    Behram Wali

    2017-06-01

    The contemporary traffic safety literature contains little information quantifying the simultaneous association between drink driving and speeding among fatally injured drivers. The potential correlation between drivers' drink driving and speeding behavior poses a substantial methodological concern that needs investigation. This study therefore focused on investigating the simultaneous impact of socioeconomic factors, fatalities, vehicle ownership, health services, and highway agency road safety policies on the enforcement levels of speed limit and drink driving laws. The effectiveness of these enforcement levels was investigated by developing a bivariate ordered probit model using data extracted from the WHO global status report on road safety 2013. The consistent and intuitive parameter estimates, along with the statistically significant correlation between response outcomes, support the statistical superiority of the bivariate ordered probit model. The results revealed that fatalities per thousand registered vehicles, hospital beds per hundred thousand population, and road safety policies are associated with a likely medium or high effectiveness of enforcement levels of speed limit and drink driving laws, respectively. The model also captures the effect of several other agency-related variables and socio-economic status on the response outcomes. Marginal effects are reported for analyzing the impact of such factors on intermediate categories of the response outcomes. The results of this study are expected to provide necessary insights for enforcement programs, and the marginal effects of explanatory variables may provide useful directions for formulating effective policy countermeasures against drivers' speeding and drink driving behavior.

  5. Modelling the vicious circle between obesity and physical activity in children and adolescents using a bivariate probit model with endogenous regressors.

    Science.gov (United States)

    Yeh, C-Y; Chen, L-J; Ku, P-W; Chen, C-M

    2015-01-01

    The increasing prevalence of obesity in children and adolescents has become one of the most important public health issues around the world. Lack of physical activity is a risk factor for obesity, while being obese could reduce the likelihood of participating in physical activity. Failing to account for the endogeneity between obesity and physical activity would result in biased estimation. This study investigates the relationship between overweight and physical activity by taking endogeneity into consideration. It develops an endogenous bivariate probit model estimated by the maximum likelihood method. The data included 4008 boys and 4197 girls in the 5th-9th grades in Taiwan in 2007-2008. The relationship between overweight and physical activity is significantly negative in the endogenous model, but insignificant in the comparative exogenous model. This endogenous relationship presents a vicious circle in which lower levels of physical activity lead to overweight, while those who are already overweight engage in less physical activity. The results not only reveal the importance of endogenous treatment, but also demonstrate the robust negative relationship between these two factors. An emphasis should be put on overweight and obese children and adolescents in order to break the vicious circle. Promotion of physical activity by appropriate counselling programmes and peer support could be effective in reducing the prevalence of obesity in children and adolescents.

  6. Evaluation of Factors Affecting E-Bike Involved Crash and E-Bike License Plate Use in China Using a Bivariate Probit Model

    Directory of Open Access Journals (Sweden)

    Yanyong Guo

    2017-01-01

    The primary objective of this study is to evaluate factors affecting e-bike-involved crashes and license plate use in China. E-bike crash data were collected from a police database and completed through telephone interviews. Non-crash samples were collected by a questionnaire survey. A bivariate probit (BP) model was developed to simultaneously examine the significant factors associated with e-bike-involved crashes and e-bike license plate use and to account for the correlation between them. Marginal effects for contributory factors were calculated to quantify their impacts on the outcomes. The results show that several contributory factors, including gender, age, education level, driver license, car in household, experience in using e-bikes, law compliance, and aggressive driving behaviors, have significant impacts on both e-bike-involved crashes and license plate use. Moreover, type of e-bike, frequency of e-bike use, impulsive behavior, degree of riding experience, and risk perception scale are found to be associated with e-bike-involved crashes. It is also found that e-bike-involved crashes and e-bike license plate use are strongly and negatively correlated. The results enhance our understanding of the factors related to e-bike-involved crashes and e-bike license plate use.

  7. The Role of Wealth and Health in Insurance Choice: Bivariate Probit Analysis in China

    Directory of Open Access Journals (Sweden)

    Yiding Yue

    2014-01-01

    This paper captures the correlation between the choices of health insurance and pension insurance using the bivariate probit model and then studies the effect of wealth and health on insurance choice. Our empirical evidence shows that people who participate in a health care program are more likely to participate in a pension plan at the same time, while wealth and health have different effects on the choices of the health care program and the pension program. Generally, the higher an individual's wealth level, the more likely he is to participate in a health care program, but wealth has no effect on pension participation. Health status has opposite effects on the choices of health care programs and pension plans: the poorer an individual's health, the more likely he is to participate in health care programs, while the better health he enjoys, the more likely he is to participate in pension plans. When the investigation narrows to commercial insurance, there is only a significant effect of health status on commercial health insurance. The commercial insurance choices and the insurance choices of the agricultural population are more complicated.
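    The bivariate probit likelihood behind analyses like this one evaluates bivariate normal orthant probabilities, with sign flips q = 2y − 1 selecting the correct orthant for each outcome pair. A minimal sketch (an illustration of the standard model, not the authors' estimation code) is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_loglik(params, X, y1, y2):
    """Log-likelihood of a standard bivariate probit: two binary outcomes
    whose latent errors are bivariate normal with correlation rho."""
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])            # tanh keeps rho inside (-1, 1)
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1      # +1 if y = 1, -1 if y = 0
    ll = 0.0
    for i in range(len(y1)):
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf(
            [q1[i] * (X[i] @ b1), q2[i] * (X[i] @ b2)],
            mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))     # guard against log(0)
    return ll

# Sanity check: with all coefficients zero and rho = 0, each of the four
# (y1, y2) cells has probability Phi(0) * Phi(0) = 0.25.
X = np.ones((4, 1))
y1 = np.array([0, 1, 0, 1])
y2 = np.array([0, 0, 1, 1])
ll = bivariate_probit_loglik(np.zeros(3), X, y1, y2)
```

    Maximizing this function over the coefficients and the transformed correlation parameter gives the joint estimates described in the abstract.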

  8. A Multinomial Probit Model with Latent Factors

    DEFF Research Database (Denmark)

    Piatek, Rémi; Gensowski, Miriam

    2017-01-01

    We develop a parametrization of the multinomial probit model that yields greater insight into the underlying decision-making process, by decomposing the error terms of the utilities into latent factors and noise. The latent factors are identified without a measurement system, and they can be meaningfully linked to an economic model. We provide sufficient conditions that make this structure identified and interpretable. For inference, we design a Markov chain Monte Carlo sampler based on marginal data augmentation. A simulation exercise shows the good numerical performance of our sampler…

  9. Testing for spatial error dependence in probit models

    NARCIS (Netherlands)

    Amaral, P. V.; Anselin, L.; Arribas-Bel, D.

    2013-01-01

    In this note, we compare three test statistics that have been suggested to assess the presence of spatial error autocorrelation in probit models. We highlight the differences between the tests proposed by Pinkse and Slade (J Econom 85(1):125-254, 1998), Pinkse (Asymptotics of the Moran test and a

  10. Nonparametric Bayesian models through probit stick-breaking processes.

    Science.gov (United States)

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.

  11. Statistical Modeling of Bivariate Data.

    Science.gov (United States)

    1982-08-01

    …to one. Following Crain (1974), one may consider order-m approximators log f_m(x) = Σ_{k=-m}^{m} θ_k φ_k(x) − c(θ), a ≤ x ≤ b, (4.4.5) and attempt to find … literature. Consider the approximate model log f_n(x) = Σ_{k=-m_n}^{m_n} θ_k φ_k(x) + σ G(x), a ≤ x ≤ b, (4.4.8) where G(x) is a Gaussian process and n is a…

  12. A Probit Model for the State of the Greek GDP Growth

    Directory of Open Access Journals (Sweden)

    Stavros Degiannakis

    2015-08-01

    The paper provides probability estimates of the state of GDP growth. A regime-switching model defines the probability of the Greek GDP being in boom or recession. Probit models then extract the predictive information of a set of explanatory (economic and financial) variables regarding the state of GDP growth. Both a contemporaneous and a lagged relationship between the explanatory variables and the state of GDP growth are examined. The mean absolute distance (MAD) between the probability of not being in recession and the probability estimated by the probit model is the function that evaluates the performance of the models. The probit model with the industrial production index and the realized volatility as explanatory variables has the lowest MAD value, 6.43% (7.94%) in the contemporaneous (lagged) relationship.
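    The paper's evaluation criterion, the mean absolute distance (MAD) between the not-in-recession indicator and the probit-estimated probability, can be illustrated on synthetic data. The data-generating process, variable names, and coefficient below are invented for the sketch, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Synthetic stand-in: x plays the role of an explanatory variable
# (e.g. industrial production growth); state = 1 means "not in recession".
rng = np.random.default_rng(42)
x = rng.normal(size=200)
state = (0.8 * x + rng.normal(size=200) > 0).astype(int)
X = np.column_stack([np.ones_like(x), x])

def neg_loglik(beta):
    """Negative probit log-likelihood, with probabilities clipped
    away from 0 and 1 for numerical stability."""
    p = norm.cdf(X @ beta).clip(1e-12, 1 - 1e-12)
    return -np.sum(state * np.log(p) + (1 - state) * np.log(1 - p))

beta_hat = minimize(neg_loglik, np.zeros(2)).x
p_hat = norm.cdf(X @ beta_hat)            # estimated P(not in recession | x)
mad = np.mean(np.abs(state - p_hat))      # the MAD evaluation criterion
```

    A smaller MAD means the probit probabilities track the realized state more closely, which is how the paper ranks competing explanatory-variable sets.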

  13. Another Look at the Method of Y-Standardization in Logit and Probit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    2015-01-01

    This paper takes another look at the derivation of the method of Y-standardization used in sociological analyses involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than assuming that the logit or probit fixes the variance of the latent error at a known constant, it suffices to assume that the variance of the error is unknown. A further result suggests that using Y-standardization for cross-model comparisons is likely to be biased by model differences in the fit…

  14. The Use of a Probit Model for the Validation of Selection Procedures.

    Science.gov (United States)

    Dagenais, Denyse L.

    1984-01-01

    After a review of the disadvantages of linear models for estimating the probability of academic success from previous school records and admission test results, the use of a probit model is proposed. The model is illustrated with admissions data from the Ecole des Hautes Etudes Commerciales in Montreal. (Author/BW)

  15. Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions from Coarsened Data

    Science.gov (United States)

    Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.

    2017-01-01

    Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…

  16. Application of Bivariate Probit Regression to Estimate the Factors Affecting Student Graduation (Case Study: Students of the Faculty of Mathematics and Natural Sciences, Udayana University)

    Directory of Open Access Journals (Sweden)

    NI GUSTI KETUT TRISNA PRADNYANTARI

    2015-06-01

    The aim of this research is to identify the factors that affect student graduation using bivariate probit regression. Bivariate probit regression is a statistical method involving two qualitative response variables, with independent variables that are qualitative, quantitative, or a combination of both. In a bivariate probit regression model, the result obtained is the probability of the response variables. The results show that the factors significantly affecting graduation in terms of study period are major, sex, and thesis duration, while the factors significantly affecting graduation in terms of GPA are the entry system, thesis duration, and the number of parents' dependents.

  17. Bivariate Random Effects Meta-analysis of Diagnostic Studies Using Generalized Linear Mixed Models

    Science.gov (United States)

    GUO, HONGFEI; ZHOU, YIJIE

    2011-01-01

    Bivariate random effect models are currently one of the main methods recommended to synthesize diagnostic test accuracy studies. However, only the logit-transformation on sensitivity and specificity has been previously considered in the literature. In this paper, we consider a bivariate generalized linear mixed model to jointly model the sensitivities and specificities, and discuss the estimation of the summary receiver operating characteristic curve (ROC) and the area under the ROC curve (AUC). As the special cases of this model, we discuss the commonly used logit, probit and complementary log-log transformations. To evaluate the impact of misspecification of the link functions on the estimation, we present two case studies and a set of simulation studies. Our study suggests that point estimation of the median sensitivity and specificity, and AUC is relatively robust to the misspecification of the link functions. However, the misspecification of link functions has a noticeable impact on the standard error estimation and the 95% confidence interval coverage, which emphasizes the importance of choosing an appropriate link function to make statistical inference. PMID:19959794

  18. Fitting statistical models in bivariate allometry.

    Science.gov (United States)

    Packard, Gary C; Birchard, Geoffrey F; Boardman, Thomas J

    2011-08-01

    Several attempts have been made in recent years to formulate a general explanation for what appear to be recurring patterns of allometric variation in morphology, physiology, and ecology of both plants and animals (e.g. the Metabolic Theory of Ecology, the Allometric Cascade, the Metabolic-Level Boundaries hypothesis). However, published estimates for parameters in allometric equations often are inaccurate, owing to undetected bias introduced by the traditional method for fitting lines to empirical data. The traditional method entails fitting a straight line to logarithmic transformations of the original data and then back-transforming the resulting equation to the arithmetic scale. Because of fundamental changes in distributions attending transformation of predictor and response variables, the traditional practice may cause influential outliers to go undetected, and it may result in an underparameterized model being fitted to the data. Also, substantial bias may be introduced by the insidious rotational distortion that accompanies regression analyses performed on logarithms. Consequently, the aforementioned patterns of allometric variation may be illusions, and the theoretical explanations may be wide of the mark. Problems attending the traditional procedure can be largely avoided in future research simply by performing preliminary analyses on arithmetic values and by validating fitted equations in the arithmetic domain. The goal of most allometric research is to characterize relationships between biological variables and body size, and this is done most effectively with data expressed in the units of measurement. Back-transforming from a straight line fitted to logarithms is not a generally reliable way to estimate an allometric equation in the original scale. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.

  19. Bayesian multinomial probit modeling of daily windows of susceptibility for maternal PM2.5 exposure and congenital heart defects

    Science.gov (United States)

    Past epidemiologic studies suggest maternal ambient air pollution exposure during critical periods of the pregnancy is associated with fetal development. We introduce a multinomial probit model that allows for the joint identification of susceptible daily periods during the pregn...

  20. Evaluating the performance of simple estimators for probit models with two dummy endogenous regressors

    DEFF Research Database (Denmark)

    Holm, Anders; Nielsen, Jacob Arendt

    2013-01-01

    This study considers the small-sample performance of approximate but simple two-stage estimators for probit models with two endogenous binary covariates. Monte Carlo simulations show that all the considered estimators, including simulated maximum-likelihood (SML) estimation of the trivariate probit model, … for testing the exogeneity of binary covariates. The methods are used to estimate the impact of employment-based health insurance and health care (HC) on HC use, where the approximations seem to work at least as well as the SML and in some cases better.

  1. Extended probit mortality model for zooplankton against transient change of PCO2.

    Science.gov (United States)

    Sato, Toru; Watanabe, Yuji; Toyota, Koji; Ishizaka, Joji

    2005-09-01

    The direct injection of CO2 into the deep ocean is a promising way to mitigate global warming. One of the uncertainties in this method, however, is its impact on marine organisms in the near field. Since the CO2 concentration that organisms experience in the ocean changes with time, a biological impact model for organisms under unsteady changes in CO2 concentration is required. In general, the LC50 concept is widely applied when testing a toxic agent for acute mortality. Here, we regard the probit-transformed mortality as a linear function not only of the CO2 concentration but also of exposure time. A simple mathematical transformation of the function gives a damage-accumulation mortality model for zooplankton. In this article, this model was validated by a mortality test of Metamphiascopsis hirsutus against transient changes in CO2 concentration.
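    The core idea above is a probit-transformed mortality that is linear in both concentration and exposure time. A minimal sketch with placeholder coefficients (a, b, c are illustrative, not values estimated in the paper) is:

```python
from scipy.stats import norm

def mortality(conc, time, a=-5.0, b=0.02, c=0.1):
    """Mortality whose probit transform is linear in concentration and
    exposure time: probit(M) = a + b*conc + c*time. Coefficients here
    are placeholders for illustration only."""
    return norm.cdf(a + b * conc + c * time)
```

    Under this form, predicted mortality rises monotonically with both concentration and exposure time, which is the behavior the damage-accumulation model formalizes.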

  2. Bivariate Kumaraswamy Models via Modified FGM Copulas: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Indranil Ghosh

    2017-11-01

    A copula is a useful tool for constructing bivariate and/or multivariate distributions. In this article, we consider a new modified class of FGM (Farlie–Gumbel–Morgenstern) bivariate copulas for constructing several different bivariate Kumaraswamy-type copulas and discuss their structural properties, including dependence structures. It is established that construction of bivariate distributions by this method allows for greater flexibility in the values of Spearman's correlation coefficient ρ and Kendall's τ.
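    For context, the standard (unmodified) FGM copula that such constructions start from, together with its well-known implied Spearman's rho, can be sketched as:

```python
def fgm_copula(u, v, theta):
    """Standard Farlie-Gumbel-Morgenstern copula (the class the article
    modifies); requires |theta| <= 1."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def spearman_rho(theta):
    """Spearman's rho implied by the standard FGM copula: rho = theta / 3,
    so dependence is capped at |rho| <= 1/3 -- the limitation that
    modified FGM classes aim to relax."""
    return theta / 3.0
```

    The cap |ρ| ≤ 1/3 is exactly why greater flexibility in Spearman's ρ and Kendall's τ, as claimed in the abstract, is a meaningful improvement.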

  3. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  4. Modeling Toothpaste Brand Choice: An Empirical Comparison of Artificial Neural Networks and Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Tolga Kaya

    2010-11-01

    The purpose of this study is to compare the performance of Artificial Neural Network (ANN) and Multinomial Probit (MNP) approaches in modeling brand-choice decisions within the fast-moving consumer goods sector. To do this, based on 2597 toothpaste purchases by a panel sample of 404 households, choice models are built and their performances are compared on the 861 purchases of a test sample of 135 households. Results show that the ANN's predictions are better, while the MNP is useful in providing marketing insight.

  5. A Truncated-Probit Item Response Model for Estimating Psychophysical Thresholds

    Science.gov (United States)

    Morey, Richard D.; Rouder, Jeffrey N.; Speckman, Paul L.

    2009-01-01

    Human abilities in perceptual domains have conventionally been described with reference to a threshold that may be defined as the maximum amount of stimulation which leads to baseline performance. Traditional psychometric links, such as the probit, logit, and "t", are incompatible with a threshold as there are no true scores corresponding to…

  6. Measuring public understanding on Tenaga Nasional Berhad (TNB) electricity bills using ordered probit model

    Science.gov (United States)

    Zainudin, WNRA; Ramli, NA

    2017-09-01

    In 2016, Tenaga Nasional Berhad (TNB) introduced an upgrade of its Billing and Customer Relationship Management (BCRM) system as part of its long-term initiative to provide customers with greater access to billing information. This includes information on actual and suggested power consumption and further details of billing charges, which helps TNB customers gain a better understanding of their electricity usage patterns and the items involved in their billing charges. To date, few studies have measured public understanding of current electricity bills and whether this understanding could contribute to positive impacts. The purpose of this paper is to measure public understanding of current TNB electricity bills and whether satisfaction with energy-related services and electricity utility services, and awareness of the amount of electricity consumed by various appliances and equipment at home, could improve this understanding. Both qualitative and quantitative research methods are used to achieve these objectives. A total of 160 respondents from local universities in Malaysia participated in a survey used to collect the relevant information. Using an ordered probit model, this paper finds that respondents who are highly satisfied with electricity utility services tend to understand their electricity bills better. The electricity utility services include management of electricity bills and the information obtained from utility or non-utility suppliers to help consumers manage their energy usage or bills. Based on the results, this paper concludes that the probability of understanding the components of the monthly electricity bill increases as respondents become more satisfied with their electricity utility services and better able to value the energy-related services.
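    The ordered probit machinery behind such survey models maps a latent index into ordered response categories via cutpoints. A minimal sketch of the category probabilities (the cutpoints below are illustrative, not estimated from the survey) is:

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(xb, cutpoints):
    """Category probabilities in an ordered probit: the latent index xb
    falls between increasing cutpoints c_1 < ... < c_{J-1}, and
    P(y = j) = Phi(c_j - xb) - Phi(c_{j-1} - xb)."""
    c = np.concatenate(([-np.inf], np.asarray(cutpoints, float), [np.inf]))
    return np.diff(norm.cdf(c - xb))

# A respondent with a higher latent index (e.g. more satisfied with
# utility services) shifts probability mass toward the top category.
low = ordered_probit_probs(-1.0, [-0.5, 0.5])
high = ordered_probit_probs(1.0, [-0.5, 0.5])
```

    Raising the latent index slides the whole distribution up through the cutpoints, which is why satisfaction covariates can increase the probability of the highest "understanding" category.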

  7. Measuring public acceptance on renewable energy (RE) development in Malaysia using ordered probit model

    Science.gov (United States)

    Zainudin, W. N. R. A.; Ishak, W. W. M.

    2017-09-01

    In 2009, the government of Malaysia announced a National Renewable Energy Policy and Action Plan as part of its commitment to accelerate growth in renewable energy (RE). However, the adoption of RE as a main source of energy is still at an early stage due to a lack of public awareness and acceptance of RE. To date, there have been insufficient studies on the reasons behind this lack of awareness and acceptance. This paper therefore investigates public acceptance of RE development by measuring respondents' willingness to pay slightly more for energy generated from RE sources (denoted the willingness level), and examines whether the importance placed on electricity being supplied at the absolute lowest possible cost regardless of source and environmental impact (denoted the importance level) and other socio-economic factors influence the willingness level. Both qualitative and quantitative research methods are used to achieve the research objectives. A total of 164 respondents from local universities in Malaysia participated in a survey to collect the relevant information. Using an ordered probit model, the study shows that among the relevant socio-economic factors, age appears to be an important factor influencing the willingness level of respondents. This paper concludes that the younger generation is more willing to pay slightly more for energy generated from RE sources than the older generation. One possible reason is the younger generation's better access to information on RE issues and their positive implications for the world. The findings of this paper are useful for helping policy makers design RE advocacy programs that can secure public participation. These efforts are important to ensure the future success of the RE policy.

  8. Modelling of Uncertainty and Bi-Variable Maps

    Science.gov (United States)

    Nánásiová, Ol'ga; Pykacz, Jarosław

    2016-05-01

    The paper gives an overview of and compares various bi-variable maps from orthomodular lattices into the unit interval. It focuses mainly on such bi-variable maps that may be used for constructing joint probability distributions for random variables which are not defined on the same Boolean algebra.

  9. Spatial Probit Model of the Factors Affecting HDI Classification on the Island of Java

    Directory of Open Access Journals (Sweden)

    Feni Ira Puspita

    2013-05-01

    The Human Development Index (HDI) is a composite index covering three basic dimensions of human development, considered to reflect the population's basic capabilities: health, educational attainment, and purchasing power. HDI data are classified into four categories, namely low, lower middle, upper middle, and high, so HDI classification can be approached with probit regression. In the data, geographically adjacent regions were found to have similar HDI values, so that adjacent regions fall into the same HDI class; this is presumably due to inter-regional dependency, a phenomenon that can be described through spatial methods. Accordingly, in this study the HDI data are modeled with spatial probit regression and the results are compared with those of an ordinary probit model. The study covers parameter estimation and parameter testing, applied to HDI classification in Java, with MCMC used as the parameter estimation method; parameter testing uses 25% to 75% of the estimated parameters.

  10. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data are extracted from the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was…

  11. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru.

    Science.gov (United States)

    Arima, E Y

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 × 10⁶ and 1.85 × 10⁶ tons. However, the recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads.

  13. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru

    Science.gov (United States)

    Arima, E. Y.

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of between 1.36 and 1.85 × 10⁶ tons. However, the recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads. PMID:27010739

  14. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    Science.gov (United States)

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    …that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. A Graphical Interpretation of Probit Coefficients.

    Science.gov (United States)

    Becker, William E.; Waldman, Donald M.

    1989-01-01

    Contends that, when discrete choice models are taught, particularly the probit model, it is the method rather than the interpretation of the results that is emphasized. This article provides a graphical technique for interpretation of an estimated probit coefficient that will be useful in statistics and econometrics courses. (GG)
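The graphical interpretation rests on the fact that the marginal effect of a probit coefficient is the slope of the normal CDF, ∂P/∂x = φ(x'β)·β, which is largest where the index is near zero (P ≈ 0.5). A short illustration with assumed coefficient values:

```python
import numpy as np
from scipy.stats import norm

beta0, beta1 = -1.0, 0.5   # assumed probit estimates, for illustration only
x = np.linspace(-4, 8, 5)  # evaluation points

p = norm.cdf(beta0 + beta1 * x)                  # predicted probability curve
marginal = norm.pdf(beta0 + beta1 * x) * beta1   # slope dP/dx = phi(x'beta) * beta1

# The slope peaks where the index x'beta = 0 (p = 0.5), the geometric fact
# the graphical interpretation of a probit coefficient relies on.
print(np.round(marginal, 4))  # → [0.0022 0.0648 0.1995 0.0648 0.0022]
```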

  16. PROBIT REGRESSION IN PREDICTION ANALYSIS

    African Journals Online (AJOL)

    Admin

    2008-12-12

    Dec 12, 2008 ... For some dichotomous variables, the response y is actually a proxy for a variable that is continuous (Newsom, 2005). ... denotes optional weights. The probit link function is one of the link functions in generalized linear models. Others are (Fox, 1997 & McCullagh, 1992): the log link, η = ln(µ), and the inverse link, η = 1/µ.
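The link functions named in this record can be checked numerically; a small sketch (the mean value 0.3 is arbitrary):

```python
import numpy as np
from scipy.stats import norm

mu = 0.3  # an arbitrary fitted mean on the response scale

probit_link = norm.ppf(mu)   # eta = Phi^{-1}(mu)
log_link = np.log(mu)        # eta = ln(mu)
inverse_link = 1.0 / mu      # eta = 1/mu

# Each link maps the mean to the linear-predictor scale and is invertible back
assert abs(norm.cdf(probit_link) - mu) < 1e-12
assert abs(np.exp(log_link) - mu) < 1e-12
```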

  17. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    Science.gov (United States)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

    The Solo-Ngawi toll road project is part of the Trans Java toll road mega-project initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), the Solo-Ngawi toll management company, needs to set a toll fare that is in accordance with its business plan. The determination of an appropriate toll rate will affect regional economic sustainability and reduce traffic congestion, and such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment, and were analysed with logit and probit models. Based on the analysis, the effect of fare changes on WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions: the range of fare changes against WTP values in the binomial logit model is 20% greater than in the probit model. On the other hand, the probabilities from the binomial logit model and the binary probit model show no significant difference (less than 1%).
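A sketch of the logit/probit comparison described above, with invented utility coefficients and fares (not the study's estimates); the conventional 1.7 rescaling puts the two index scales roughly on par:

```python
import numpy as np
from scipy.stats import norm

fares = np.linspace(5_000, 25_000, 9)  # hypothetical toll fares (rupiah)
a, b = 6.0, -4e-4                      # invented utility coefficients

util = a + b * fares
p_logit = 1.0 / (1.0 + np.exp(-util))  # binomial logit WTP probability
p_probit = norm.cdf(util / 1.7)        # binary probit, rescaled index

# Sensitivity of WTP to a fare change is the derivative |dP/dfare|
sens_logit = np.abs(b) * p_logit * (1.0 - p_logit)
sens_probit = np.abs(b) / 1.7 * norm.pdf(util / 1.7)

# Both curves peak at the fare where P = 0.5, and with the 1.7 rescaling the
# two probability curves differ by under two percentage points everywhere
assert np.all(np.abs(p_logit - p_probit) < 0.02)
```

The heavier tails of the logistic curve are what makes the logit probabilities respond more to fare changes far from the P = 0.5 point, the pattern the abstract reports.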

  18. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
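The closed-form likelihood claimed for the beta-binomial margins follows from the beta-binomial pmf, C(n,k)·B(k+α, n−k+β)/B(α,β). A sketch of that building block (the counts and shape parameters are illustrative only, not from the cited meta-analyses):

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_logpmf(k, n, a, b):
    """log of C(n,k) * B(k+a, n-k+b) / B(a,b): closed form, no link function."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + a, n - k + b) - betaln(a, b))

# e.g. 18 true positives among 20 diseased subjects in one study (invented counts)
ll = betabinom_logpmf(18, 20, a=8.0, b=2.0)
```

Because this is a marginal log-likelihood in the original probability scale, the composite likelihood of the paper sums such terms over both outcomes and all studies.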

  19. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data, with a single parameter that defines both the mean and variance. Poisson regression therefore assumes the mean and variance are equal (equidispersion). Nonetheless, in some cases count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion underestimates standard errors and, in turn, leads to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution, but if there is over-dispersion, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing is carried out with the Maximum Likelihood Ratio Test (MLRT) method.
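The over-dispersion motivation is easy to reproduce by simulation: a Poisson sample has variance ≈ mean, while a mixed Poisson (here with an inverse-Gaussian mixing distribution, as in the Poisson inverse Gaussian family) does not. Parameter values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Plain Poisson counts: variance roughly equals the mean (equidispersion)
pois = rng.poisson(3.0, n)

# Mixed Poisson: the rate itself is random, here inverse-Gaussian (Wald)
# distributed, which inflates the variance above the mean (over-dispersion)
lam = rng.wald(3.0, 6.0, n)
mixed = rng.poisson(lam)

print(pois.var() / pois.mean())    # close to 1
print(mixed.var() / mixed.mean())  # well above 1
```

The variance-to-mean ratio (here about 2.5 for the mixture, by var = mean + mean³/scale) is the quantity whose excess over 1 signals that a plain Poisson regression will understate standard errors.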

  20. Modelling the potential supply of energy crops in Ireland: results from a probit model examining the factors affecting willingness to adopt

    OpenAIRE

    James Breen; Darragh Clancy; Brian Moran; Fiona Thorne

    2009-01-01

    Numerous studies exist reporting estimates of the theoretical potential for growing energy crops in Ireland; however, a knowledge gap exists on the extent to which Irish farmers would actually choose to grow these crops. We investigated the influence of selected individual and farm characteristics on willingness to consider growing energy crops among farm operators in Ireland. A sample of 958 operators selected by a stratified sampling technique was used. A probit model was used to determine…

  1. A bivariate model for analyzing recurrent multi-type automobile failures

    Science.gov (United States)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can arise from several failure modes and each mode can produce more than one failure. In analysing such automobile failures, both the time and the type of failure serve as response variables. These two responses are highly correlated with each other, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models, so a bivariate model of time and type of failure becomes appealing for such automobile failure data. Moreover, multiple failure observations from a single automobile cannot be treated as independent: failure instances of the same automobile are correlated, while failures of different automobiles can be treated as independent. Therefore, this study proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data respectively. The model is applied to a sample of automobile failures with three failure modes and up to five failure recurrences. The parametric distributions suitable for the two responses, time to failure and type of failure, were the Weibull distribution and the multinomial distribution respectively. The proposed bivariate model was implemented in the SAS procedure PROC NLMIXED by programming appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by…

  2. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    Science.gov (United States)

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and the excess zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using a Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Finally, the double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
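A univariate zero-inflated Poisson sketch shows the mechanism the abstract describes: extra probability mass π at zero on top of a Poisson component. The parameter values below are invented, not estimates from the blood-donation data:

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: extra mass pi at zero, else Poisson(lam)."""
    return (1.0 - pi) * poisson.pmf(k, lam) + pi * (k == 0)

# A never-returning-donor share of 40% inflates P(zero) well beyond Poisson(2)
print(round(zip_pmf(0, lam=2.0, pi=0.4), 4), round(poisson.pmf(0, 2.0), 4))
# → 0.4812 0.1353
```

The bivariate version in the paper couples two such margins through a shared correlation parameter and estimates (λ, π, correlation) by MCMC.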

  3. The Effects of Selection Strategies for Bivariate Loglinear Smoothing Models on NEAT Equating Functions

    Science.gov (United States)

    Moses, Tim; Holland, Paul W.

    2010-01-01

    In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…

  4. Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Karlson, Kristian Bernt; Holm, Anders

    2018-01-01

    …guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show…

  5. Bivariate functional data clustering: grouping streams based on a varying coefficient model of the stream water and air temperature relationship

    Science.gov (United States)

    H. Li; X. Deng; Andy Dolloff; E. P. Smith

    2015-01-01

    A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...

  6. Comparison of Weibull and Probit Analysis in Toxicity Testing of ...

    African Journals Online (AJOL)

    HP

    relationships to assess the toxic effects of chemical substances. These models range from very simple models to extremely complicated models for which the eventual functional forms cannot be easily expressed as single equations. Specifically, these models are (i) tolerance distribution models: log-probit, probit, Weibull, ...

  7. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
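Of the three BSA techniques mentioned, the frequency ratio is the simplest: the share of hazard pixels in a thematic class divided by the share of all pixels in that class. A toy sketch of that calculation (the arrays are invented; this is not the tool's actual code):

```python
import numpy as np

# Invented raster: a thematic class id per pixel and a binary hazard inventory
classes = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
hazard  = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])

def frequency_ratio(classes, hazard):
    """FR = (class share of hazard pixels) / (class share of all pixels);
    FR > 1 marks classes over-represented among hazard locations."""
    return {int(c): (hazard[classes == c].sum() / hazard.sum())
                    / np.mean(classes == c)
            for c in np.unique(classes)}

print(frequency_ratio(classes, hazard))  # class 1 is hazard-prone, class 2 is not
```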

  8. A probit- log- skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract. Background: A zero-inflated continuous outcome is characterized by the occurrence of "excess" zeros, beyond what a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data; moreover, for repeated measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods: The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results: Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week he/she was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was…

  9. A bivariate measurement error model for semicontinuous and continuous variables: Application to nutritional epidemiology.

    Science.gov (United States)

    Kipnis, Victor; Freedman, Laurence S; Carroll, Raymond J; Midthune, Douglas

    2016-03-01

    Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.

  10. Probit regression in prediction analysis | Nja | Global Journal of ...

    African Journals Online (AJOL)

    ... having nodal involvement as response variable. Within the framework of the probit regression model, the level of nodal involvement is predicted and the probability of nodal involvement obtained. KEYWORDS: Probit model, Nodal involvement, Standard cumulative normal distribution, Latent variable, Logistic regression.

  11. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.

  12. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing it within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to top of the coal bed should be designed based on the potential gas sources and their locations, as well as the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
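The conditional probabilities described here follow from the standard bivariate normal result: given one variable, the other is again normal with a shifted mean and reduced variance. A sketch with invented depth-displacement parameters (not the borehole estimates):

```python
import numpy as np
from scipy.stats import norm

# Invented parameters (means, sds, correlation) for illustration only:
# distance above the coal bed (m) and strata displacement (m)
mu = np.array([50.0, 0.20])
sd = np.array([20.0, 0.08])
rho = -0.6

# Bivariate normal conditioning: displacement | depth = d is normal with
d = 30.0
m_cond = mu[1] + rho * sd[1] / sd[0] * (d - mu[0])   # shifted mean
s_cond = sd[1] * np.sqrt(1.0 - rho ** 2)             # reduced sd

# Tail probability P(displacement > 0.25 m | depth = 30 m)
p = norm.sf(0.25, loc=m_cond, scale=s_cond)
```

Tail probabilities of this kind, evaluated at various depths, are what the paper combines with venthole flow measurements to locate the slotted sections.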

  13. Using hierarchical Bayesian binary probit models to analyze crash injury severity on high speed facilities with real-time traffic data.

    Science.gov (United States)

    Yu, Rongjie; Abdel-Aty, Mohamed

    2014-01-01

    Severe crashes cause serious social and economic loss, and because of this, reducing crash injury severity has become one of the key objectives of high speed facility (freeway and expressway) management. Traditional crash injury severity analysis utilized data mainly from crash reports concerning the crash occurrence information, drivers' characteristics and roadway geometric related variables. In this study, real-time traffic and weather data were introduced to analyze crash injury severity. The space mean speeds captured by the Automatic Vehicle Identification (AVI) system on the two roadways were used as explanatory variables; data from a mountainous freeway (I-70 in Colorado) and an urban expressway (State Road 408 in Orlando) were used to check the consistency of the analysis results. Binary probit (BP) models were estimated to classify the non-severe (property damage only) crashes and severe (injury and fatality) crashes. First, Bayesian BP models' results were compared to the results from Maximum Likelihood Estimation BP models, and it was concluded that Bayesian inference was superior, with more significant variables. Then different levels of hierarchical Bayesian BP models were developed, with random effects accounting for the unobserved heterogeneity at the segment level and the crash individual level, respectively. Modeling results from both studied locations demonstrate that large variations of speed prior to the crash occurrence would increase the likelihood of severe crash occurrence. Moreover, when unobserved heterogeneity was considered in the Bayesian BP models, the model goodness-of-fit improved substantially. Finally, possible future applications of the model results and the hierarchical Bayesian probit models are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
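A binary probit model of the kind estimated here can be fit by maximizing its likelihood directly. The sketch below simulates severity outcomes from an assumed speed-variation covariate (names and coefficients are hypothetical) and recovers the coefficients, i.e. the Maximum Likelihood Estimation baseline the abstract compares against:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 5_000
speed_var = rng.standard_normal(n)        # hypothetical pre-crash speed variation
X = np.column_stack([np.ones(n), speed_var])
beta_true = np.array([-1.0, 0.8])         # invented coefficients
severe = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    eta = X @ beta
    # log Phi(eta) for severe crashes, log Phi(-eta) for non-severe ones
    return -np.sum(severe * norm.logcdf(eta) + (1.0 - severe) * norm.logcdf(-eta))

beta_hat = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x
print(np.round(beta_hat, 2))  # close to the true (-1.0, 0.8)
```

The hierarchical Bayesian versions in the paper extend this likelihood with segment-level and crash-level random effects and sample the posterior instead of maximizing.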

  14. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    Science.gov (United States)

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.

  15. Bivariate spatial analysis of temperature and precipitation from general circulation models and observation proxies

    KAUST Repository

    Philbin, R.

    2015-05-22

    This study validates the near-surface temperature and precipitation output from decadal runs of eight atmospheric ocean general circulation models (AOGCMs) against observational proxy data from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis temperatures and Global Precipitation Climatology Project (GPCP) precipitation data. We model the joint distribution of these two fields with a parsimonious bivariate Matérn spatial covariance model, accounting for the two fields' spatial cross-correlation as well as their own smoothnesses. We fit output from each AOGCM (30-year seasonal averages from 1981 to 2010) to a statistical model on each of 21 land regions. Both variance and smoothness values agree for both fields over all latitude bands except southern mid-latitudes. Our results imply that temperature fields have smaller smoothness coefficients than precipitation fields, while both have decreasing smoothness coefficients with increasing latitude. Models predict fields with smaller smoothness coefficients than observational proxy data for the tropics. The estimated spatial cross-correlations of these two fields, however, are quite different for most GCMs in mid-latitudes. Model correlation estimates agree well with those for observational proxy data for Australia, at high northern latitudes across North America, Europe and Asia, as well as across the Sahara, India, and Southeast Asia, but elsewhere, little consistent agreement exists.
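The building block of the bivariate Matérn model is the univariate Matérn covariance, whose smoothness parameter ν is the quantity compared across fields above. A sketch (the σ², ν, and range values are illustrative):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2, nu, ell):
    """Isotropic Matern covariance at distance h: variance sigma2,
    smoothness nu, range parameter ell."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    out = np.full(h.shape, sigma2)
    pos = h > 0
    u = np.sqrt(2.0 * nu) * h[pos] / ell
    out[pos] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * u ** nu * kv(nu, u)
    return out

# Larger nu gives a smoother field; the temperature and precipitation fields in
# the study differ in exactly this parameter. nu = 0.5 recovers the
# exponential covariance model.
c_smooth = matern(0.5, 1.0, nu=1.5, ell=1.0)
c_rough = matern(0.5, 1.0, nu=0.5, ell=1.0)
```

The bivariate Matérn adds a cross-covariance of the same functional form with its own smoothness and a cross-correlation coefficient, which is the parameter whose model-versus-proxy disagreement the abstract reports.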

  16. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    Science.gov (United States)

    DelMarco, Stephen

    2016-05-01

Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise is generally spread out while the wavelet transform compactly captures the high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which lead to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution, which we use for numerical result generation, and a second, integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate the denoising performance improvement obtained when using the enhanced modeling over the baseline SSM.

  17. Bivariate Gaussian bridges: directional factorization of diffusion in Brownian bridge models.

    Science.gov (United States)

    Kranstauber, Bart; Safi, Kamran; Bartumeus, Frederic

    2014-01-01

In recent years, high-resolution animal tracking data have become the standard in movement ecology. The Brownian Bridge Movement Model (BBMM) is a widely adopted approach for describing animal space use from such high-resolution tracks. One of the underlying assumptions of the BBMM is isotropic diffusive motion between consecutive locations, i.e., diffusion that is invariant with respect to direction. Here we propose to relax this often unrealistic assumption by separating the Brownian motion variance into two directional components, one parallel and one orthogonal to the direction of motion. Our new model, the bivariate Gaussian bridge (BGB), tracks movement heterogeneity across time. Using the BGB to identify directed and non-directed movement within a trajectory resulted in more accurate utilisation distributions compared to dynamic Brownian bridges, especially for trajectories with non-isotropic diffusion, such as directed or Lévy-like movements. We evaluated our model with simulated trajectories and observed tracks, demonstrating that the improvement of our model scales with the directional correlation of a correlated random walk. We find that many animal trajectories do not adhere to the assumptions of the BBMM. The proposed model improves accuracy when describing space use in both simulated correlated random walks and observed animal tracks. Our novel approach is implemented and available within the "move" package for R.
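The core idea of factorizing diffusion into directional components can be illustrated with bridge residuals from a discrete track. This is a simplified sketch, not the "move" package implementation; `directional_variances` is a hypothetical helper:

```python
import numpy as np

def directional_variances(track):
    """For each interior fix, take the residual from the Brownian-bridge
    mean (the midpoint of its neighbors) and split its squared length
    into components parallel and orthogonal to the direction of motion.
    Returns the two mean squared components."""
    track = np.asarray(track, dtype=float)
    par, orth = [], []
    for i in range(1, len(track) - 1):
        a, b, c = track[i - 1], track[i], track[i + 1]
        step = c - a
        norm = np.linalg.norm(step)
        if norm == 0.0:
            continue                      # no defined direction of motion
        u = step / norm                   # unit vector along the motion
        r = b - 0.5 * (a + c)             # residual from the bridge mean
        par.append(np.dot(r, u) ** 2)
        orth.append(np.dot(r, [-u[1], u[0]]) ** 2)
    return np.mean(par), np.mean(orth)
```

A zig-zag track moving steadily along one axis has all of its bridge variance in the orthogonal component, the anisotropy the BGB is designed to capture.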

  18. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2007-07-01

Full Text Available Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which provide a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.

  19. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate poisson regression

    Science.gov (United States)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy is an important public health issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 people (8.7% of the world total). This places Indonesia as the country with the highest leprosy morbidity among the ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of multibacillary and pausibacillary leprosy patients jointly as response variables. These responses are count variables, so modeling is conducted using bivariate Poisson regression. The observational units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
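The bivariate Poisson distribution behind such a regression is commonly built by trivariate reduction: two correlated counts share a common Poisson component. A minimal sketch of that generic construction (not the authors' fitted model):

```python
from math import exp, factorial

def bivariate_poisson_pmf(x, y, lam1, lam2, lam0):
    """P(X=x, Y=y) for X = U + W, Y = V + W with independent
    U~Poi(lam1), V~Poi(lam2), W~Poi(lam0). The shared component W
    induces Cov(X, Y) = lam0, while E[X] = lam1 + lam0."""
    def poi(k, lam):
        return exp(-lam) * lam ** k / factorial(k)
    return sum(poi(x - w, lam1) * poi(y - w, lam2) * poi(w, lam0)
               for w in range(min(x, y) + 1))
```

In a regression setting, each of lam1, lam2 (and possibly lam0) would be linked to covariates through a log link.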

  20. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    Science.gov (United States)

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study comparing three copula models and two implementations of the standard model, the Plackett and the Gauss copulas rarely perform worse, and frequently perform better, than the standard model. For illustration, we use an example from a meta-analysis judging the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer. Copyright © 2013 John Wiley & Sons, Ltd.
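The Plackett copula named above has a simple closed form, which is what makes the paper's closed likelihood possible. A sketch of just the copula CDF (the beta-binomial margins it couples are omitted here):

```python
from math import sqrt

def plackett_copula(u, v, theta):
    """Plackett copula CDF C(u, v; theta), theta > 0.
    theta = 1 gives independence (C = u*v); theta > 1 gives positive
    dependence between the two margins."""
    if abs(theta - 1.0) < 1e-12:
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
    return (s - sqrt(disc)) / (2.0 * (theta - 1.0))
```

Any pair of marginal CDFs F and G yields a valid joint CDF via C(F(x), G(y); theta), which is how sensitivity and specificity margins are linked.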

  1. Predicting Lung Radiotherapy-Induced Pneumonitis Using a Model Combining Parametric Lyman Probit With Nonparametric Decision Trees

    International Nuclear Information System (INIS)

    Das, Shiva K.; Zhou Sumin; Zhang, Junan; Yin, F.-F.; Dewhirst, Mark W.; Marks, Lawrence B.

    2007-01-01

Purpose: To develop and test a model to predict for lung radiation-induced Grade 2+ pneumonitis. Methods and Materials: The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a 'boosting' process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. Results: The area under the model receiver operating characteristics curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, and female gender, and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Conclusions: Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation.

  2. Predicting lung radiotherapy-induced pneumonitis using a model combining parametric Lyman probit with nonparametric decision trees.

    Science.gov (United States)

    Das, Shiva K; Zhou, Sumin; Zhang, Junan; Yin, Fang-Fang; Dewhirst, Mark W; Marks, Lawrence B

    2007-07-15

To develop and test a model to predict for lung radiation-induced Grade 2+ pneumonitis. The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a "boosting" process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. The area under the model receiver operating characteristics curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, and female gender, and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation.
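The parametric LNTCP component in both records above is the classical Lyman probit: a Gaussian CDF applied to a normalized (generalized equivalent uniform) dose. A generic sketch, with illustrative rather than fitted parameter values:

```python
from math import erf, sqrt

def lyman_ntcp(eud, td50, m):
    """Lyman normal tissue complication probability.
    eud:  (generalized) equivalent uniform dose for the plan
    td50: dose giving 50% complication probability
    m:    slope parameter (smaller m = steeper dose response)
    Parameters here are placeholders, not the paper's fitted values."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

The probit shape means the NTCP passes through exactly 0.5 at eud = td50 and rises monotonically with dose, which is the behavior the boosted decision trees then modulate with nondose factors.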

  3. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Directory of Open Access Journals (Sweden)

    Jose Javier Gorgoso-Varela

    2016-04-01

Full Text Available Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass.

  4. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)

  5. Bivariate Extension of the Quadrature Method of Moments for Modeling Simultaneous Coagulation and Sintering of Particle Populations.

    Science.gov (United States)

    Wright, Douglas L.; McGraw, Robert; Rosner, Daniel E.

    2001-04-15

We extend the application of moment methods to multivariate suspended particle population problems: those for which size alone is insufficient to specify the state of a particle in the population. Specifically, a bivariate extension of the quadrature method of moments (QMOM) (R. McGraw, Aerosol Sci. Technol. 27, 255 (1997)) is presented for efficiently modeling the dynamics of a population of inorganic nanoparticles undergoing simultaneous coagulation and particle sintering. Continuum regime calculations are presented for the Koch-Friedlander-Tandon-Rosner model, which includes coagulation by Brownian diffusion (evaluated for particle fractal dimensions, D(f), in the range 1.8-3) and simultaneous sintering of the resulting aggregates (P. Tandon and D. E. Rosner, J. Colloid Interface Sci. 213, 273 (1999)). For evaluation purposes, and to demonstrate the computational efficiency of the bivariate QMOM, benchmark calculations are carried out using a high-resolution discrete method to evolve the particle distribution function n(nu, a) for short to intermediate times (where nu and a are particle volume and surface area, respectively). Time evolution of a selected set of 36 low-order mixed moments is obtained by integration of the full bivariate distribution and compared with the corresponding moments obtained directly using two different extensions of the QMOM. With the more extensive treatment, errors of less than 1% are obtained over substantial aerosol evolution, while requiring only a few minutes (rather than days) of CPU time. Longer time QMOM simulations lend support to the earlier finding of a self-preserving limit for the dimensionless joint (nu, a) particle distribution function under simultaneous coagulation and sintering (Tandon and Rosner, 1999; D. E. Rosner and S. Yu, AIChE J., 47 (2001)). We demonstrate that, even in the bivariate case, it is possible to use the QMOM to rapidly model the approach to asymptotic behavior, allowing an immediate assessment of

  6. Inland dissolved salt chemistry: statistical evaluation of bivariate and ternary diagram models for surface and subsurface waters

    Directory of Open Access Journals (Sweden)

    Stephen T. THRELKELD

    2000-08-01

Full Text Available We compared the use of ternary and bivariate diagrams to distinguish the effects of atmospheric precipitation, rock weathering, and evaporation on inland surface and subsurface water chemistry. The three processes could not be statistically differentiated using bivariate models even if large water bodies were evaluated separately from small water bodies. Atmospheric precipitation effects were identified using ternary diagrams in water with total dissolved salts (TDS 1000 mg l-1. A principal components analysis showed that the variability in the relative proportions of the major ions was related to atmospheric precipitation, weathering, and evaporation. About half of the variation in the distribution of inorganic ions was related to rock weathering. By considering most of the important inorganic ions, ternary diagrams are able to distinguish the contributions of atmospheric precipitation, rock weathering, and evaporation to inland water chemistry.
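Plotting a ternary diagram amounts to mapping three relative proportions onto a point in an equilateral triangle. A minimal sketch of that coordinate transform (`ternary_xy` is a hypothetical helper, not code from the paper):

```python
def ternary_xy(a, b, c):
    """Map three non-negative components (e.g. relative proportions of
    three major-ion groups) to 2D coordinates in an equilateral
    triangle with vertices at (0,0), (1,0) and (0.5, sqrt(3)/2)."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total   # normalize to fractions
    x = 0.5 * (2.0 * b + c)                     # horizontal position
    y = (3.0 ** 0.5 / 2.0) * c                  # height from third component
    return x, y
```

Each vertex corresponds to a water sample dominated by a single component, so clusters near a vertex indicate the dominance of one process-related ion group.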

  7. Ordinal Bivariate Inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    2016-01-01

This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.

  8. Ordinal bivariate inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.
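The correlation-increasing switch at the heart of both records above is easy to make concrete in the 2 × 2 case: mass moves from the discordant cells to the concordant cells while both marginals stay fixed. This is an illustrative operation only, not the authors' full checking procedure:

```python
def correlation_increasing_switch(p, eps):
    """Apply a correlation-increasing switch to a 2x2 joint
    distribution p[i][j]: move mass eps from the discordant cells
    (0,1) and (1,0) to the concordant cells (0,0) and (1,1).
    Row and column marginals are unchanged, so the result differs
    from p only in the dependence between the two indicators."""
    q = [row[:] for row in p]
    q[0][1] -= eps
    q[1][0] -= eps
    q[0][0] += eps
    q[1][1] += eps
    return q
```

In the paper's ordering, the switched distribution is (weakly) more unequal than the original, since deprivation in the two dimensions now co-occurs more often.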

  9. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  10. Improving the modelling of redshift-space distortions - I. A bivariate Gaussian description for the galaxy pairwise velocity distributions

    Science.gov (United States)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2015-01-01

As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation r, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and dispersion σ. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and non-linear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of RSD is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. We also show how this description naturally allows for the Taylor expansion of 1 + ξS(s) around 1 + ξR(r), which leads to the Kaiser linear formula when truncated to second order, explicating its connection with the moments of the velocity distribution functions. More work is needed, but these results indicate a very promising path to make definitive progress in our programme to improve RSD estimators.

  11. Modeling Bivariate Change in Individual Differences: Prospective Associations Between Personality and Life Satisfaction.

    Science.gov (United States)

    Hounkpatin, Hilda Osafo; Boyce, Christopher J; Dunn, Graham; Wood, Alex M

    2017-09-18

A number of structural equation models have been developed to examine change in 1 variable or the longitudinal association between 2 variables. The most common of these are the latent growth model, the autoregressive cross-lagged model, the autoregressive latent trajectory model, and the latent change score model. The authors first overview each of these models, evaluating their different assumptions about the nature of change and how these assumptions may result in different interpretations of the data. To elucidate these issues in an empirical example, they then examine the longitudinal association between personality traits and life satisfaction. In a representative Dutch sample (N = 8,320), with participants providing data on both personality and life satisfaction measures every 2 years over an 8-year period, the authors reproduce findings from previous research. However, some of the structural equation models overviewed have not previously been applied to the personality-life satisfaction relation. The extended empirical examination suggests intraindividual changes in life satisfaction predict subsequent intraindividual changes in personality traits. The availability of data sets with 3 or more assessment waves allows the application of more advanced structural equation models, such as the autoregressive latent trajectory or the extended latent change score model, which account for the complex dynamic nature of change processes and allow stronger inferences on the nature of the association between variables. However, the choice of model should be determined by theories of change processes in the variables being studied. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Bivariate modelling of the financial development-fossil fuel consumption nexus in Ghana

    OpenAIRE

    Yeboah Asuamah, Samuel

    2017-01-01

The present paper modelled the relationship between financial development and fossil fuel energy consumption in Ghana for the period 1970-2011 by applying the Autoregressive Distributed Lag (ARDL) model. The findings of the paper on the cointegration test indicate significant evidence of cointegration between fossil fuel consumption and financial development. The findings seem to suggest that financial development is an explanatory variable in fossil fuel consumption management in achieving sust...

  13. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole grains.

  14. Testing for a Common Volatility Process and Information Spillovers in Bivariate Financial Time Series Models

    NARCIS (Netherlands)

    J. Chen (Jinghui); M. Kobayashi (Masahito); M.J. McAleer (Michael)

    2016-01-01

    textabstractThe paper considers the problem as to whether financial returns have a common volatility process in the framework of stochastic volatility models that were suggested by Harvey et al. (1994). We propose a stochastic volatility version of the ARCH test proposed by Engle and Susmel (1993),

  15. Application of Multinomial Probit Model in Analyzing Factors Affecting the Occupation of Graduated Students from the University of Agricultural Applied-Science

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2016-03-01

Full Text Available Introduction: Scientific and practical training, with an emphasis on the operation and application of what is taught and an empirical approach to education, is a more suitable approach for creating jobs. Meeting the educational needs of the agricultural sector through scientific and practical training, and providing employment consistent with education and skills, is one of the most important programs for achieving the objectives of comprehensive development of the country. An imbalance seems to exist between the processes and materials in university courses and the skills and abilities needed by the labor market, and this is the most important reason for the failure of university graduates in finding employment. This study was conducted to understand the types of jobs held by agricultural graduates of the training center of Jihad-e-Keshavarzi in Mashhad and the factors affecting their employment. Materials and Methods: This study is applied research, and the statistical population of 167 includes all students who had earned a Bachelor's degree and came to receive their graduation certificates in 2011. The dependent variable is the type of job, which includes five categories: employment in the public sector related to education, employment unrelated to the government, employment related to the private sector, and the unemployed who were seeking work in the private sector. Independent variables include gender, quota in university admissions, level of interest in the field of study, satisfaction with the discipline, graduates' evaluation of the vocational skills acquired in college, graduates' assessment of the work culture in the society, and evaluation of lack of capital as a factor preventing employment in the academic field. Information was collected through questionnaires, and the multinomial probit model was used. Results and discussion: The results of the survey show that the jobs of graduates are divided into four categories including: related to the field of study and

  16. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Li, F; Park, J; Barraclough, B; Lu, B; Li, J; Liu, C; Yan, G [University Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. 
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
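The final dose step above, convolving the cone-modified in-air fluence with a kernel built from a sum of Gaussians, can be sketched in a few lines. The field size, cone polynomial coefficients, and the three Gaussian weights and widths below are illustrative stand-ins, not the commissioned Elekta values:

```python
import numpy as np

# Toy version of the pipeline on a 65x65 grid with 2 mm spacing: square-field
# in-air fluence, an assumed radial polynomial for the FFF cone shape, and a
# dose deposition kernel made of three 2D Gaussians (all parameters assumed).
N, spacing = 65, 2.0
ax = (np.arange(N) - N // 2) * spacing    # mm, centered at index 32
X, Y = np.meshgrid(ax, ax)
R = np.hypot(X, Y)

fluence = ((np.abs(X) <= 20) & (np.abs(Y) <= 20)).astype(float)  # 4x4 cm field
cone = np.clip(1.0 - 2e-4 * R**2 + 1e-9 * R**4, 0.0, None)       # assumed polynomial
fluence *= cone

def gauss2d(r, sigma):
    """Normalized isotropic 2D Gaussian evaluated at radius r."""
    return np.exp(-0.5 * (r / sigma) ** 2) / (2 * np.pi * sigma**2)

ddk = 0.7 * gauss2d(R, 2.0) + 0.25 * gauss2d(R, 6.0) + 0.05 * gauss2d(R, 20.0)
ddk /= ddk.sum()                          # unit-sum kernel preserves total fluence

# FFT-based circular convolution; ifftshift moves the kernel peak to [0, 0].
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(ddk))))
```

In the actual algorithm the kernel weights and widths are commissioned against measured cross-beam profiles, and the cone polynomial against FFF off-axis ratios.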

  17. A Hybrid ANN-GA Model to Prediction of Bivariate Binary Responses: Application to Joint Prediction of Occurrence of Heart Block and Death in Patients with Myocardial Infarction.

    Science.gov (United States)

    Mirian, Negin-Sadat; Sedehi, Morteza; Kheiri, Soleiman; Ahmadi, Ali

    2016-01-01

In medical studies, when a joint prediction about the occurrence of two events is needed, a statistical bivariate model is used. Because of the limitations of the usual statistical models, other methods such as Artificial Neural Networks (ANN) and hybrid models can be used. In this paper, we propose a hybrid Artificial Neural Network-Genetic Algorithm (ANN-GA) model to predict the occurrence of heart block and death in myocardial infarction (MI) patients simultaneously. For fitting and comparing the models, 263 new patients with a definite diagnosis of MI hospitalized in the Cardiology Ward of Hajar Hospital, Shahrekord, Iran, from March 2014 to March 2016 were enrolled. Occurrence of heart block and death were employed as bivariate binary outcomes. Bivariate Logistic Regression (BLR), ANN and hybrid ANN-GA models were fitted to the data, and prediction accuracy was used to compare the models. The code was written in MATLAB 2013a and the Zelig package in R 3.2.2. The prediction accuracy of the BLR, ANN and hybrid ANN-GA models was 77.7%, 83.69% and 93.85% for the training data and 78.48%, 84.81% and 96.2% for the test data, respectively. In both the training and test data sets, the hybrid ANN-GA model had better accuracy. An ANN model can be a suitable alternative for modeling and predicting bivariate binary responses when the presuppositions of statistical models are not met in actual data. In addition, using optimization methods, such as the hybrid ANN-GA model, can improve the precision of the ANN model.

  18. Joint association analysis of bivariate quantitative and qualitative traits.

    Science.gov (United States)

    Yuan, Mengdie; Diao, Guoqing

    2011-11-29

    Univariate genome-wide association analysis of quantitative and qualitative traits has been investigated extensively in the literature. In the presence of correlated phenotypes, it is more intuitive to analyze all phenotypes simultaneously. We describe an efficient likelihood-based approach for the joint association analysis of quantitative and qualitative traits in unrelated individuals. We assume a probit model for the qualitative trait, under which an unobserved latent variable and a prespecified threshold determine the value of the qualitative trait. To jointly model the quantitative and qualitative traits, we assume that the quantitative trait and the latent variable follow a bivariate normal distribution. The latent variable is allowed to be correlated with the quantitative phenotype. Simultaneous modeling of the quantitative and qualitative traits allows us to make more precise inference on the pleiotropic genetic effects. We derive likelihood ratio tests for the testing of genetic effects. An application to the Genetic Analysis Workshop 17 data is provided. The new method yields reasonable power and meaningful results for the joint association analysis of the quantitative trait Q1 and the qualitative trait disease status at SNPs with not too small MAF.
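The joint likelihood described above (a marginal normal for the quantitative trait and a probit for the qualitative trait via a latent bivariate normal) can be sketched as follows. The parameter values, the zero threshold, and the absence of genotype covariates are simplifying assumptions of this toy version:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Quantitative trait Q and latent variable L are bivariate normal (unit
# variances assumed); the qualitative trait is D = 1{L > 0}.
mu_q, mu_l, rho, n = 0.3, -0.2, 0.5, 4000
q = rng.normal(mu_q, 1.0, n)
lat = mu_l + rho * (q - mu_q) + np.sqrt(1 - rho**2) * rng.normal(size=n)
d = (lat > 0).astype(float)

def loglik(mu_q, mu_l, rho, q, d):
    """Joint log-likelihood: marginal normal for Q times Bernoulli for D | Q."""
    cond_mean = mu_l + rho * (q - mu_q)      # E[L | Q = q]
    cond_sd = np.sqrt(1 - rho**2)
    p = norm.cdf(cond_mean / cond_sd)        # P(D = 1 | Q = q)
    return (norm.logpdf(q, mu_q, 1.0)
            + d * np.log(p) + (1 - d) * np.log1p(-p)).sum()

ll_true = loglik(mu_q, mu_l, rho, q, d)
ll_wrong = loglik(mu_q, mu_l, -rho, q, d)    # correlation sign flipped
```

Maximizing this likelihood jointly over both traits, rather than separately, is what sharpens inference on pleiotropic effects; the likelihood ratio tests in the paper compare such fits with and without the genetic effect.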

  19. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational tr...

  20. The use of bivariate spatial modeling of questionnaire and parasitology data to predict the distribution of Schistosoma haematobium in Coastal Kenya.

    Directory of Open Access Journals (Sweden)

    Hugh J W Sturrock

Full Text Available Questionnaires of reported blood in urine (BIU) distributed through the existing school system provide a rapid and reliable method to classify schools according to the prevalence of Schistosoma haematobium, thereby helping in the targeting of schistosomiasis control. However, not all schools return questionnaires, and it is unclear whether treatment is warranted in such schools. This study investigates the use of bivariate spatial modelling of available and multiple data sources to predict the prevalence of S. haematobium at every school along the Kenyan coast. Data from a questionnaire survey conducted by the Kenya Ministry of Education in Coast Province in 2009 were combined with available parasitological and environmental data in a Bayesian bivariate spatial model. This modelled the relationship between BIU data and environmental covariates, as well as the relationship between BIU and S. haematobium infection prevalence, to predict S. haematobium infection prevalence at all schools in the study region. Validation procedures were implemented to assess the predictive accuracy of endemicity classification. The prevalence of BIU was negatively correlated with distance to the nearest river, and there was considerable residual spatial correlation at small (~15 km) spatial scales. There was a predictable relationship between the prevalence of reported BIU and S. haematobium infection. The final model exhibited excellent sensitivity (0.94) but moderate specificity (0.69) in identifying low (<10%) prevalence schools, and had poor performance in differentiating between moderate and high prevalence schools (sensitivity 0.5, specificity 1). Schistosomiasis is highly focal, and there is a need to target treatment on a school-by-school basis. The use of bivariate spatial modelling can supplement questionnaire data to identify schools requiring mass treatment, but is unable to distinguish between moderate and high prevalence schools.

  1. Socio-Economic Factors Affecting Adoption of Modern Information and Communication Technology by Farmers in India: Analysis Using Multivariate Probit Model

    Science.gov (United States)

    Mittal, Surabhi; Mehar, Mamta

    2016-01-01

    Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmers use, with the empirical model to analyze the factors that affect the…

  2. A Hybrid Forecasting Model Based on Bivariate Division and a Backpropagation Artificial Neural Network Optimized by Chaos Particle Swarm Optimization for Day-Ahead Electricity Price

    Directory of Open Access Journals (Sweden)

    Zhilong Wang

    2014-01-01

Full Text Available In the electricity market, the electricity price plays a pivotal role. Nevertheless, accurate price forecasting, a vital factor affecting both government regulatory agencies and public power companies, remains a huge challenge and a critical problem. Determining how to address the accurate forecasting problem becomes an even more significant task in an era in which electricity is increasingly important. Based on the chaos particle swarm optimization (CPSO), the backpropagation artificial neural network (BPANN), and the idea of bivariate division, this paper proposes a bivariate division BPANN (BD-BPANN) method and the CPSO-BD-BPANN method for forecasting electricity price. The former method creatively transforms the electricity demand and price into a new variable, named DV, which is calculated using the division principle, and forecasts the day-ahead electricity price by multiplying the forecasted values of the DVs and the forecasted values of the demand. Next, to improve the accuracy of BD-BPANN, chaos particle swarm optimization and BD-BPANN are synthesized to form a novel model, CPSO-BD-BPANN. In this study, CPSO is utilized to optimize the initial parameters of BD-BPANN to make its output more stable than that of the original model. Finally, two forecasting strategies are proposed regarding different situations.
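The bivariate-division idea can be illustrated without the neural network machinery: form DV = price / demand, forecast DV and demand separately, and multiply the forecasts back together. In this sketch the series are made-up numbers and naive persistence forecasts stand in for the BPANN forecasters:

```python
import numpy as np

# Sketch of the "bivariate division" principle: DV_t = price_t / demand_t,
# so price_t = DV_t * demand_t. All values below are illustrative, not market data.
price = np.array([30.0, 32.5, 41.0, 38.2, 35.6, 33.1])    # EUR/MWh (assumed)
demand = np.array([510., 540., 620., 600., 570., 545.])   # MW (assumed)

dv = price / demand                      # the derived variable DV

# Day-ahead forecast: predict DV and demand separately, then multiply.
dv_hat = dv[-1]                          # persistence stand-in for the BD-BPANN forecast
demand_hat = demand[-1]                  # persistence stand-in for the demand forecast
price_hat = dv_hat * demand_hat
```

In the paper the two persistence forecasts are replaced by trained BPANNs, with CPSO tuning their initial weights.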

  3. Flash flood susceptibility analysis and its mapping using different bivariate models in Iran: a comparison between Shannon's entropy, statistical index, and weighting factor models.

    Science.gov (United States)

    Khosravi, Khabat; Pourghasemi, Hamid Reza; Chapi, Kamran; Bahri, Masoumeh

    2016-12-01

Flooding is a very common worldwide natural hazard causing large-scale casualties every year, and Iran is not immune to this threat either. Comprehensive flood susceptibility mapping is very important to reduce losses of lives and property. Thus, the aim of this study is to map susceptibility to flooding by different bivariate statistical methods, including Shannon's entropy (SE), statistical index (SI), and weighting factor (Wf). In this regard, model performance evaluation is also carried out in the Haraz Watershed, Mazandaran Province, Iran. In the first step, 211 flood locations were identified from documentary sources and field inventories, of which 70% (151 positions) were used for flood susceptibility modeling and 30% (60 positions) for evaluation and verification of the models. In the second step, ten factors influencing flooding were chosen, namely slope angle, plan curvature, altitude, topographic wetness index (TWI), stream power index (SPI), distance from river, rainfall, geology, land use, and normalized difference vegetation index (NDVI). In the next step, flood susceptibility maps were prepared by these methods in ArcGIS. As the last step, the receiver operating characteristic (ROC) curve was drawn and the area under the curve (AUC) was calculated for quantitative assessment of each model. The results showed that the best model to estimate susceptibility to flooding in the Haraz Watershed was the SI model, with prediction and success rates of 99.71 and 98.72%, respectively, followed by the Wf and SE models with AUC values of 98.1 and 96.57% for the success rate, and 97.6 and 92.42% for the prediction rate, respectively. In the SI and Wf models, the most and least important parameters were the distance from river and geology, respectively. Flood susceptibility maps are informative for managers and decision makers in the Haraz Watershed in order to contemplate measures to reduce human and financial losses.
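As a minimal sketch of one of these bivariate methods, the statistical index (SI) for a single conditioning factor is the log-ratio of each class's flood density to the overall flood density; classes denser in flood cells than the map average get positive weights. The counts below are illustrative, not the Haraz Watershed data:

```python
import numpy as np

# Statistical index: SI_i = ln(class flood density / overall flood density).
# Example factor: four distance-to-river classes with illustrative counts.
class_pixels = np.array([4000, 3000, 2000, 1000])   # total cells per class
class_floods = np.array([120, 40, 15, 5])           # flood cells per class

overall_density = class_floods.sum() / class_pixels.sum()
si = np.log((class_floods / class_pixels) / overall_density)
```

Summing the SI layers of all ten factors cell by cell yields the susceptibility map that is then assessed with the ROC curve.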

  4. Monitoring bivariate process

    Directory of Open Access Journals (Sweden)

    Marcela A. G. Machado

    2009-12-01

Full Text Available The T² chart and the generalized variance |S| chart are the usual tools for monitoring the mean vector and the covariance matrix of multivariate processes. The main drawback of these charts is the difficulty of obtaining and interpreting the values of their monitoring statistics. In this paper, we study control charts for monitoring bivariate processes that require only the computation of sample means (the ZMAX chart) for monitoring the mean vector, sample variances (the VMAX chart) for monitoring the covariance matrix, or both sample means and sample variances (the MCMAX chart) in the case of the joint control of the mean vector and the covariance matrix.
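A minimal sketch of the ZMAX and VMAX monitoring statistics, assuming known in-control means and variances: ZMAX is the larger absolute standardized sample mean of the two quality characteristics, and VMAX is the larger sample variance scaled by its in-control value. The data and the 3.0 comparison limit in the test are placeholders, not the tabulated chart constants:

```python
import numpy as np

def zmax_vmax(sample, mu0, sigma0):
    """ZMAX = max |standardized sample mean|; VMAX = max scaled sample variance.

    sample: array of shape (n, 2); mu0, sigma0: in-control means and SDs."""
    sample = np.asarray(sample, dtype=float)
    n = sample.shape[0]
    z = (sample.mean(axis=0) - mu0) / (sigma0 / np.sqrt(n))
    v = sample.var(axis=0, ddof=1) / sigma0**2
    return np.abs(z).max(), v.max()

mu0 = np.array([0.0, 0.0])
sigma0 = np.array([1.0, 1.0])

in_control = np.array([[0.1, -0.2], [-0.3, 0.4], [0.2, 0.1], [0.0, -0.1]])
shifted = in_control + np.array([3.0, 0.0])   # 3-sigma shift in the first mean

z_in, v_in = zmax_vmax(in_control, mu0, sigma0)
z_out, v_out = zmax_vmax(shifted, mu0, sigma0)
```

Because each statistic is just a maximum of familiar univariate quantities, an out-of-control signal immediately identifies which variable (and which of mean or variance) moved.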

  5. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Science.gov (United States)

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models, analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury at all to the at-fault driver. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While the not-at-fault vehicle's speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of the at-fault vehicle's speed on the at-fault injury outcome. By contrast, and importantly, the effect of the at-fault vehicle's speed on the injury severity of the not-at-fault driver is almost equal to the effect of the not-at-fault vehicle's speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability

  6. The intermediate endpoint effect in logistic and probit regression

    Science.gov (United States)

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on difference in coefficient methods can lead to distorted
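The recommended product-of-coefficients estimator can be sketched directly: the mediated effect is the OLS slope of the mediator on the treatment (a) times the probit coefficient of the mediator in the outcome model (b). The sample size, effect sizes, and standard-normal latent error below are assumptions of this toy setup, not the article's data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulate X -> M -> Y* with Y = 1{Y* > 0}; the latent error is N(0, 1),
# so the probit coefficient of M estimates b_true directly (assumed setup).
n, a_true, b_true, c_true = 6000, 0.6, 0.8, 0.3
x = rng.normal(size=n)
m = a_true * x + rng.normal(size=n)
y = (b_true * m + c_true * x + rng.normal(size=n) > 0).astype(float)

a_hat = np.polyfit(x, m, 1)[0]                   # OLS slope of M on X

def neg_loglik(beta, X, y):
    """Negative probit log-likelihood for design matrix X."""
    eta = X @ beta
    return -(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta)).sum()

X = np.column_stack([np.ones(n), m, x])
fit = minimize(neg_loglik, np.zeros(3), args=(X, y), method="BFGS")
b_hat = fit.x[1]                                 # probit coefficient of M

mediated = a_hat * b_hat                         # product-of-coefficients estimate
```

The difference-in-coefficients alternative the article warns about would instead compare the X coefficient across probit fits with and without M, where the rescaling of probit coefficients across models produces the distortion.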

  7. Heterogeneous Impact of the “Seguro Popular” Program on the Utilization of Obstetrical Services in Mexico, 2001–2006: A Multinomial Probit Model with a Discrete Endogenous Variable

    Science.gov (United States)

    Sosa-Rubi, Sandra G.; Galárraga, Omar

    2009-01-01

    Objective We evaluated the impact of Seguro Popular (SP), a program introduced in 2001 in Mexico primarily to finance health care for the poor. We focused on the effect of household enrollment in SP on pregnant women’s access to obstetrical services, an important outcome measure of both maternal and infant health. Data We relied upon data from the cross-sectional 2006 National Health and Nutrition Survey (ENSANUT) in Mexico. We analyzed the responses of 3,890 women who delivered babies during 2001–2006 and whose households lacked employer-based health care coverage. Methods We formulated a multinomial probit model that distinguished between three mutually exclusive sites for delivering a baby: a health unit specifically accredited by SP; a non-SP-accredited clinic run by the Department of Health (Secretaría de Salud, or SSA); and private obstetrical care. Our model accounted for the endogeneity of the household’s binary decision to enroll in the SP program. Results Women in households that participated in the SP program had a much stronger preference for having a baby in a SP-sponsored unit rather than paying out of pocket for a private delivery. At the same time, participation in SP was associated with a stronger preference for delivering in the private sector rather than at a state-run SSA clinic. On balance, the Seguro Popular program reduced pregnant women’s attendance at an SSA clinic much more than it reduced the probability of delivering a baby in the private sector. The quantitative impact of the SP program varied with the woman’s education and health, as well as the assets and location (rural versus urban) of the household. Conclusions The SP program had a robust, significantly positive impact on access to obstetrical services. Our finding that women enrolled in SP switched from non-SP state-run facilities, rather than from out-of-pocket private services, is important for public policy and requires further exploration. PMID:18824268

  8. Using probit regression to disclose the analytical performance of qualitative and semi-quantitative tests.

    Science.gov (United States)

    Åsberg, Arne; Johnsen, Harald; Mikkelsen, Gustav; Hov, Gunhild Garmo

    2016-11-01

    The analytical performance of qualitative and semi-quantitative tests is usually studied by calculating the fraction of positive results after replicate testing of a few specimens with known concentrations of the analyte. We propose using probit regression to model the probability of positive results as a function of the analyte concentration, based on testing many specimens once with a qualitative and a quantitative test. We collected laboratory data where urine specimens had been analyzed by both a urine albumin ('protein') dipstick test (Combur-Test strips) and a quantitative test (BN ProSpec System). For each dipstick cut-off level probit regression was used to estimate the probability of positive results as a function of urine albumin concentration. We also used probit regression to estimate the standard deviation of the continuous measurement signal that lies behind the binary test response. Finally, we used probit regression to estimate the probability of reading a specific semi-quantitative dipstick result as a function of urine albumin concentration. Based on analyses of 3259 specimens, the concentration of urine albumin with a 0.5 (50%) probability of positive result was 57 mg/L at the lowest possible cut-off limit, and 246 and 750 mg/L at the next (higher) levels. The corresponding standard deviations were 29, 83, and 217 mg/L, respectively. Semi-quantitatively, the maximum probability of these three readings occurred at a u-albumin of 117, 420, and 1200 mg/L, respectively. Probit regression is a useful tool to study the analytical performance of qualitative and semi-quantitative tests.
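A sketch of the core computation, assuming the simple probit model P(positive) = Φ(β0 + β1·c) in analyte concentration c: the 50%-probability concentration is −β0/β1 and the standard deviation of the latent measurement signal is 1/β1. The simulated data mimic the paper's lowest cut-off (57 and 29 mg/L) but are not the actual specimens:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated qualitative test: the dipstick reads positive when a noisy latent
# signal (centered at the true concentration) exceeds the cut-off (assumed values).
c50_true, sigma_true, n = 57.0, 29.0, 8000
conc = rng.uniform(0, 300, n)                       # urine albumin, mg/L
positive = (rng.normal(conc, sigma_true) > c50_true).astype(float)

def neg_loglik(theta):
    """Negative probit log-likelihood of the binary readings."""
    b0, b1 = theta
    eta = b0 + b1 * conc
    return -(positive * norm.logcdf(eta)
             + (1 - positive) * norm.logcdf(-eta)).sum()

fit = minimize(neg_loglik, np.array([-1.0, 0.02]), method="Nelder-Mead")
b0, b1 = fit.x
c50_hat = -b0 / b1                                  # concentration with 50% positives
sigma_hat = 1.0 / b1                                # SD of the latent signal
```

This is the same logic the authors apply at each dipstick cut-off level, with one probit fit per level.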

  9. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  10. Financial Applications of Bivariate Markov Processes

    OpenAIRE

    Ortobelli Lozza, Sergio; Angelelli, Enrico; Bianchi, Annamaria

    2011-01-01

    This paper describes a methodology to approximate a bivariate Markov process by means of a proper Markov chain and presents possible financial applications in portfolio theory, option pricing and risk management. In particular, we first show how to model the joint distribution between market stochastic bounds and future wealth and propose an application to large-scale portfolio problems. Secondly, we examine an application to VaR estimation. Finally, we propose a methodology...

  11. Exploring Driver Injury Severity at Intersection: An Ordered Probit Analysis

    Directory of Open Access Journals (Sweden)

    Yaping Zhang

    2015-02-01

Full Text Available It is well known that intersections are among the most hazardous locations; however, little is known about driver injury severity in intersection crashes. Hence, the main goal of this study was to further examine the different factors contributing to driver injury severity in fatal crashes at intersections. Data used for the present analysis were from the US DOT Fatality Analysis Reporting System (FARS) crash database for the year 2011. An ordered probit model was employed to fit the fatal crash data and analyze the factors affecting each injury severity level. The analysis showed that driver injury severity is significantly affected by many factors, including driver age, gender and ethnicity, vehicle type and age (years of use), crash type, drunk driving, speeding, stop sign violation, cognitively distracted driving, and seat belt usage. These findings form a solid basis for adopting corresponding measures to effectively reduce injury severity in intersection crashes. More insight into the effects of risk factors on driver injury severity could be acquired using more advanced statistical models.
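An ordered probit like the one used here can be fitted by maximizing the likelihood of a latent propensity β·x crossed against ordered cutpoints. The three-level severity outcome and all parameter values below are simulated assumptions, not the FARS data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Latent propensity beta * x + e (e ~ N(0, 1)) with cutpoints t1 < t2 giving
# three ordered severity levels 0, 1, 2 (all values assumed for illustration).
beta_true, t1, t2, n = 1.0, -0.5, 0.8, 5000
x = rng.normal(size=n)
latent = beta_true * x + rng.normal(size=n)
y = (latent > t1).astype(int) + (latent > t2).astype(int)

def neg_loglik(theta):
    """Negative ordered-probit log-likelihood; gap-parameterization keeps c1 < c2."""
    beta, c1, gap = theta
    c2 = c1 + np.exp(gap)
    eta = beta * x
    p0 = norm.cdf(c1 - eta)
    p1 = norm.cdf(c2 - eta) - p0
    p2 = 1.0 - p0 - p1
    probs = np.choose(y, [p0, p1, p2])
    return -np.log(np.clip(probs, 1e-12, None)).sum()

fit = minimize(neg_loglik, np.array([0.5, -1.0, 0.0]),
               method="Nelder-Mead", options={"maxiter": 2000})
beta_hat = fit.x[0]
```

The sign of each covariate's coefficient indicates whether it pushes crashes toward higher severity levels, which is how the factor effects in the abstract are read off.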

  12. Reliability for some bivariate beta distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

Full Text Available In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y). Here we consider the case where X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate beta. The calculations involve the use of special functions.

  13. Reliability for some bivariate gamma distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

Full Text Available In the area of stress-strength models, there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y). Here we consider the case where X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate gamma. The calculations involve the use of special functions.

  14. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.

  15. Analysing risk factors of co-occurrence of schistosomiasis haematobium and hookworm using bivariate regression models: Case study of Chikwawa, Malawi

    Directory of Open Access Journals (Sweden)

    Bruce B.W. Phiri

    2016-06-01

Full Text Available Schistosomiasis and soil-transmitted helminth (STH) infections constitute a major public health problem in many parts of sub-Saharan Africa. In areas where the prevalence of geo-helminths and schistosomes is high, co-infection with multiple parasite species is common, resulting in a disproportionately elevated burden compared with single infections. Determining the risk factors of co-infection intensity is important for the better design of targeted interventions. In this paper, we examined risk factors of hookworm and S. haematobium co-infection intensity in Chikwawa district, southern Malawi, in 2005, using bivariate count models. Results show that hookworm and S. haematobium infections were highly localised, with a small proportion of individuals harbouring most parasites, especially among school-aged children. The risk of co-intensity of hookworm and S. haematobium was high for all ages, although it diminished with increasing age and increased with fishing (hookworm: coef. = 12.29; 95% CI = 11.50–13.09; S. haematobium: coef. = 0.040; 95% CI = 0.0037, 3.832). Both infections were more abundant in those with primary education (hookworm: coef. = 0.072; 95% CI = 0.056, 0.401; S. haematobium: coef. = 0.286; 95% CI = 0.034, 0.538). However, a much lower risk was observed for those who were farmers (hookworm: coef. = −0.349; 95% CI = −0.547, −0.150; S. haematobium: coef. = −0.239; 95% CI = −0.406, −0.072). In conclusion, our findings suggest that efforts to control helminth infections should be integrated, and health promotion campaigns should be aimed at school-going children and adults who are in constant contact with water.

  16. Bivariate value-at-risk

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2007-10-01

Full Text Available In this paper we extend the concept of value-at-risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.

  17. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela

    2016-05-11

    We introduce a density regression model for the spectral density of a bivariate extreme value distribution, that allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean constrained densities on the unit interval. Numerical experiments with the methods illustrate their resilience in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg
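The double kernel estimator described above extends the Nadaraya–Watson estimator, whose basic form, a kernel-weighted local average of responses, is sketched below on toy data with an assumed Gaussian kernel and bandwidth:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Gaussian-kernel Nadaraya-Watson estimate of E[Y | X = x0]."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)   # kernel weights
    return (w * y).sum() / w.sum()           # weighted local average

rng = np.random.default_rng(4)
x = rng.uniform(0, 2 * np.pi, 800)
y = np.sin(x) + 0.1 * rng.normal(size=800)   # noisy regression data (toy)

m_hat = nadaraya_watson(np.pi / 2, x, y, h=0.3)   # true regression value is 1
```

In the paper's estimator the scalar responses y are replaced by mean-constrained densities on the unit interval, so the same weighting scheme averages whole spectral densities across the covariate.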

  18. Probit vs. semi-nonparametric estimation: examining the role of disability on institutional entry for older adults.

    Science.gov (United States)

    Sharma, Andy

    2017-06-01

The purpose of this study was to showcase an advanced methodological approach to model disability and institutional entry. Both of these are important areas to investigate given the ongoing aging of the United States population. By 2020, approximately 15% of the population will be 65 years and older. Many of these older adults will experience disability and require formal care. A probit analysis was employed to determine which disabilities were associated with admission into an institution (i.e. long-term care). Since this framework imposes strong distributional assumptions, misspecification leads to inconsistent estimators. To overcome such a shortcoming, this analysis extended the probit framework by employing an advanced semi-nonparametric maximum likelihood estimation utilizing Hermite polynomial expansions. Specification tests show that semi-nonparametric estimation is preferred over probit. In terms of the estimates, semi-nonparametric ratios equal 42 for cognitive difficulty, 64 for independent living, and 111 for self-care disability, while probit yields much smaller estimates of 19, 30, and 44, respectively. Public health professionals can use these results to better understand why certain interventions have not shown promise. Equally important, healthcare workers can use this research to evaluate which types of treatment plans may delay institutionalization and improve the quality of life for older adults. Implications for rehabilitation: With ongoing global aging, understanding the association between disability and institutional entry is important in devising successful rehabilitation interventions. Semi-nonparametric estimation is preferred to probit and shows that ambulatory and cognitive impairments present a high risk for institutional entry (long-term care). Informal caregiving and home-based care require further examination as forms of rehabilitation/therapy for certain types of disabilities.

  19. Comparison between two bivariate Poisson distributions through the ...

    African Journals Online (AJOL)

    To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution that allows the correlation to be negative, zero, or positive. In this paper, we show that these models are nearly everywhere asymptotically equal. From this study, since the ø-divergence converges toward zero, both models are ...
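The conditional construction used in models of the Berkhout-Plug type can be sketched as follows: the second count's rate depends on the first count, so the correlation can take either sign. All parameter values here are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
y1 = rng.poisson(2.0, size=n)
# a conditional rate decreasing in y1 yields a negative correlation,
# which the classical shared-component bivariate Poisson cannot produce
lam2 = np.exp(0.5 - 0.4 * y1)
y2 = rng.poisson(lam2)
r = np.corrcoef(y1, y2)[0, 1]
```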

  20. An assessment on the use of bivariate, multivariate and soft ...

    Indian Academy of Sciences (India)

    Conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN) models representing the bivariate, multivariate and soft computing techniques were used in GIS based collapse susceptibility mapping in an area from Sivas basin (Turkey). Collapse-related factors, directly or indirectly related to the ...

  1. An assessment on the use of bivariate, multivariate and soft ...

    Indian Academy of Sciences (India)

    The paper presented herein compares and discusses the use of bivariate, multivariate and soft computing techniques for ... map is a useful tool in urban planning. ...

  2. Bivariate hard thresholding in wavelet function estimation

    OpenAIRE

    Piotr Fryzlewicz

    2007-01-01

    We propose a generic bivariate hard thresholding estimator of the discrete wavelet coefficients of a function contaminated with i.i.d. Gaussian noise. We demonstrate its good risk properties in a motivating example, and derive upper bounds for its mean-square error. Motivated by the clustering of large wavelet coefficients in real-life signals, we propose two wavelet denoising algorithms, both of which use specific instances of our bivariate estimator. The BABTE algorithm uses basis averaging...
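A rough sketch of the bivariate hard-thresholding idea (a simplified stand-in for the paper's estimator; the signal positions, noise level, and the neighbour-pairing rule are all assumed for illustration): a noisy coefficient survives only if it is jointly large with its right neighbour.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 64
true = np.zeros(m)
true[10:14] = 5.0                          # a cluster of large coefficients
d = true + rng.normal(size=m)              # noisy wavelet coefficients, sigma = 1
lam = np.sqrt(2 * np.log(m))               # universal threshold for sigma = 1
neighbor = np.roll(d, -1)
# keep d[i] only if the pair (d[i], d[i+1]) is jointly large
keep = np.sqrt(d**2 + neighbor**2) > np.sqrt(2) * lam
est = np.where(keep, d, 0.0)
```

Because large coefficients cluster in real signals, the pairwise rule retains grouped signal coefficients while zeroing isolated noise.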

  3. How Much Math Do Students Need to Succeed in Business and Economics Statistics? An Ordered Probit Analysis

    OpenAIRE

    Jeffrey J. Green; Courtenay C. Stone; Abera Zegeye; Thomas A. Charles

    2008-01-01

    Because statistical analysis requires both familiarity with and the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, students find it extremely difficult to learn business statistics. In this study, we use an ordered probit model to examine the effect of alternative prerequisite math course sequences on the grade performance of 1,684 busines...

  4. Bivariate extreme value with application to PM10 concentration analysis

    Science.gov (United States)

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin

    2015-05-01

    This study focuses on bivariate extremes of renormalized componentwise maxima with generalized extreme value distributions as marginal functions. The limiting joint distributions of several parametric models are presented. Maximum likelihood estimation is employed for parameter estimation and the best model is selected based on the Akaike Information Criterion. The weekly and monthly componentwise maxima series are extracted from the original observations of daily maxima PM10 data for two air quality monitoring stations located in Pasir Gudang and Johor Bahru. Ten years of data, from 2001 to 2010, are considered for both stations. The asymmetric negative logistic model is found to be the best-fitting bivariate extreme model for both the weekly and monthly componentwise maxima series. However, the dependence parameters show that the variables in the weekly maxima series are more dependent on each other than those in the monthly maxima series.
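The block-maxima step can be sketched for one station: extract componentwise weekly maxima, fit a GEV margin by maximum likelihood, and score the fit by AIC. The simulated "daily PM10" series and its Gumbel parameters are assumed stand-ins for the real data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
daily = rng.gumbel(loc=50, scale=10, size=(520, 7))   # mock daily maxima, 520 weeks
weekly_max = daily.max(axis=1)                        # componentwise weekly maxima
c, loc, scale = genextreme.fit(weekly_max)            # GEV marginal fit by ML
k = 3                                                 # fitted parameters: shape, loc, scale
aic = 2 * k - 2 * genextreme.logpdf(weekly_max, c, loc, scale).sum()
```

In the paper the same AIC comparison is applied across the logistic-family dependence models as well, not only the margins.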

  5. A bivariate measurement error model for nitrogen and potassium intakes to evaluate the performance of regression calibration in the European Prospective Investigation into Cancer and Nutrition study

    NARCIS (Netherlands)

    Ferrari, P.; Roddam, A.; Fahey, M. T.; Jenab, M.; Bamia, C.; Ocke, M.; Amiano, P.; Hjartaker, A.; Biessy, C.; Rinaldi, S.; Huybrechts, I.; Tjonneland, A.; Dethlefsen, C.; Niravong, M.; Clavel-Chapelon, F.; Linseisen, J.; Boeing, H.; Oikonomou, E.; Orfanos, P.; Palli, D.; de Magistris, M. Santucci; Bueno-de-Mesquita, H. B.; Peeters, P. H. M.; Parr, C. L.; Braaten, T.; Dorronsoro, M.; Berenguer, T.; Gullberg, B.; Johansson, I.; Welch, A. A.; Riboli, E.; Bingham, S.; Slimani, N.

    2009-01-01

    Objectives: Within the European Prospective Investigation into Cancer and Nutrition (EPIC) study, the performance of 24-h dietary recall (24-HDR) measurements as reference measurements in a linear regression calibration model is evaluated critically at the individual (within-centre) and aggregate

  6. Spatial prediction of flood susceptible areas using rule based decision tree (DT) and a novel ensemble bivariate and multivariate statistical models in GIS

    Science.gov (United States)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2013-11-01

    A decision tree (DT) machine learning algorithm was used to map the flood susceptible areas in Kelantan, Malaysia. We used an ensemble frequency ratio (FR) and logistic regression (LR) model in order to overcome the weak points of LR. The combined method of FR and LR was used to map the susceptible areas in Kelantan, Malaysia. The results of both methods were compared and their efficiency was assessed. The conditioning factors most influential on flooding were identified.

  7. Stereology of extremes; bivariate models and computation

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Hlubinka, D.

    2003-01-01

    Roč. 5, č. 3 (2003), s. 289-308 ISSN 1387-5841 R&D Projects: GA AV ČR IAA1075201; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z1075907 Keywords: sample extremes * domain of attraction * normalizing constants Subject RIV: BA - General Mathematics

  8. A review of R-packages for random-intercept probit regression in small clusters

    Directory of Open Access Journals (Sweden)

    Haeike Josephy

    2016-10-01

    Full Text Available Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random effects distributions, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R-packages for random-intercept probit regression relying on: the Laplace approximation, adaptive Gaussian quadrature (AGQ), penalized quasi-likelihood, an MCMC implementation, and integrated nested Laplace approximation within the GLMM framework, and robust diagonally weighted least squares estimation within the SEM framework. In terms of bias for the fixed and random effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
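The Gaussian-quadrature idea used by several of the compared packages can be sketched for a single cluster: the normal random intercept is integrated out at Gauss-Hermite nodes. The cluster data, linear predictor, and variance parameter below are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

nodes, weights = np.polynomial.hermite.hermgauss(15)   # physicists' Gauss-Hermite rule

def cluster_loglik(y, eta, sigma):
    """Marginal log-likelihood of one cluster's binary outcomes y under a
    random-intercept probit: the N(0, sigma^2) intercept is integrated out
    by 15-point Gauss-Hermite quadrature."""
    u = np.sqrt(2.0) * sigma * nodes                   # change of variables for N(0, sigma^2)
    s = 2.0 * y[:, None] - 1.0                         # +/-1 coding of the outcomes
    probs = norm.cdf(s * (eta[:, None] + u[None, :])).prod(axis=0)
    return np.log((weights * probs).sum() / np.sqrt(np.pi))

# sanity check: for eta = 0, sigma = 1 the exact marginal probability of the
# pattern (1, 0) is E[V(1 - V)] with V uniform on (0, 1), i.e. 1/6
ll = cluster_loglik(np.array([1.0, 0.0]), eta=np.zeros(2), sigma=1.0)
```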

  9. Computational approach to Thornley's problem by bivariate operational calculus

    Science.gov (United States)

    Bazhlekova, E.; Dimovski, I.

    2012-10-01

    Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.

  10. Bivariate generalized Pareto distribution for extreme atmospheric particulate matter

    Science.gov (United States)

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin

    2015-02-01

    The high particulate matter (PM10) level is a prominent issue causing various impacts on human health and seriously affecting the economy. The asymptotic theory of extreme values is applied to analyze the relation of extreme PM10 data from two nearby air quality monitoring stations. The series of daily maxima PM10 for the Johor Bahru and Pasir Gudang stations are considered for the 2001 to 2010 databases. The 85% and 95% marginal quantiles are applied to determine the threshold values and hence construct the series of exceedances over the chosen threshold. The logistic, asymmetric logistic, negative logistic and asymmetric negative logistic models are considered as the dependence function of the joint distribution of a bivariate observation. Maximum likelihood estimation is employed for parameter estimation. The best fitted model is chosen based on the Akaike Information Criterion and the quantile plots. It is found that the asymmetric logistic model gives the best fit for the bivariate extreme PM10 data and shows weak dependence between the two stations.
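The marginal threshold step can be sketched with simulated data standing in for a daily maxima series (the lognormal parameters are assumed): take the 95% quantile as the threshold and fit a generalized Pareto distribution to the exceedances.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
pm10 = rng.lognormal(mean=4.0, sigma=0.4, size=3650)   # mock 10 years of daily maxima
u = np.quantile(pm10, 0.95)                            # 95% marginal quantile threshold
exc = pm10[pm10 > u] - u                               # exceedances over the threshold
shape, _, scale = genpareto.fit(exc, floc=0.0)         # GPD fit with location fixed at 0
```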

  11. Solving Bivariate Polynomial Systems on a GPU

    International Nuclear Information System (INIS)

    Moreno Maza, Marc; Pan Wei

    2012-01-01

    We present a CUDA implementation of dense multivariate polynomial arithmetic based on Fast Fourier Transforms over finite fields. Our core routine computes on the device (GPU) the subresultant chain of two polynomials with respect to a given variable. This subresultant chain is encoded by values on a FFT grid and is manipulated from the host (CPU) in higher-level procedures. We have realized a bivariate polynomial system solver supported by our GPU code. Our experimental results (including detailed profiling information and benchmarks against a serial polynomial system solver implementing the same algorithm) demonstrate that our strategy is well suited for GPU implementation and provides large speedup factors with respect to pure CPU code.
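The core encoding, polynomial arithmetic via pointwise multiplication of DFT values over a finite field, can be shown in miniature. This is a naive O(n²) modular DFT in plain Python rather than a CUDA FFT; the prime 17 and the 8th root of unity 9 are chosen only for illustration.

```python
P, N, OMEGA = 17, 8, 9            # 9 has multiplicative order 8 modulo 17

def dft(a, w):
    """Discrete Fourier transform over GF(P) at the powers of w."""
    return [sum(a[j] * pow(w, j * k, P) for j in range(N)) % P for k in range(N)]

def poly_mul(a, b):
    """Multiply two polynomials (degree < N/2) modulo P via DFT values."""
    fc = [(x * y) % P for x, y in zip(dft(a, OMEGA), dft(b, OMEGA))]
    w_inv = pow(OMEGA, P - 2, P)  # inverse root of unity (Fermat inverse)
    n_inv = pow(N, P - 2, P)      # inverse of N modulo P
    return [(v * n_inv) % P for v in dft(fc, w_inv)]

a = [1, 2, 0, 3, 0, 0, 0, 0]      # 1 + 2x + 3x^3
b = [4, 0, 5, 0, 0, 0, 0, 0]      # 4 + 5x^2
prod = poly_mul(a, b)             # coefficients of the product mod 17
```

The GPU implementation replaces this quadratic DFT with a true FFT over a large FFT-friendly prime and evaluates bivariate polynomials on a two-dimensional grid.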

  12. Bivariate Rayleigh Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2007-01-01

    Full Text Available Rayleigh (1880) observed that sea waves follow no law because of the complexities of the sea, but it has been seen that the probability distributions of wave heights, wave lengths, wave-induced pitch, and the wave and heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the "significant waves" (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. At present, the shipbuilding industry knows less than any other construction industry about the service conditions under which it must operate. Only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem caused by the extensive variability of the sea and the corresponding response of ships. Despite this complexity, it appears feasible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources. This distribution is also connected with one or two dimensions and is sometimes referred to as the "random walk" frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and random with equal variances. We construct a bivariate Rayleigh distribution with marginal Rayleigh distribution functions and discuss its fundamental properties.
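The derivation mentioned at the end of the abstract is easy to check numerically: the radius of two independent, equal-variance, zero-mean normals is Rayleigh distributed, with mean σ√(π/2). The value of σ is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 2.0
x = rng.normal(0.0, sigma, size=200_000)
y = rng.normal(0.0, sigma, size=200_000)
r = np.hypot(x, y)                          # Rayleigh(sigma) distributed radii
mean_theory = sigma * np.sqrt(np.pi / 2.0)  # Rayleigh mean
```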

  13. A Comparison of Alternative Specifications of the College Attendance Equation with an Extension to two-stage Selectivity-Correction Models.

    Science.gov (United States)

    Hilmer, Michael J.

    2001-01-01

    Estimates a college-attendance equation for a common set of students (from the High School and Beyond Survey) using three popular econometric specifications: the multinomial logit, the ordered probit, and the bivariate probit. Estimated marginal effects do not differ significantly across the three specifications. Choice of specification may not…

  14. Multiple imputation methods for bivariate outcomes in cluster randomised trials.

    Science.gov (United States)

    DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R

    2016-09-10

    Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
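A minimal single-level multiple-imputation sketch (the simplest of the approaches contrasted in the paper, here without any clustering) showing stochastic regression imputation and pooling by Rubin's rules; the data-generating model and missingness rate are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)   # ~30% missing completely at random
obs = ~np.isnan(y_obs)

# imputation model fitted on complete cases
X = np.column_stack([np.ones(obs.sum()), x[obs]])
beta, rss = np.linalg.lstsq(X, y_obs[obs], rcond=None)[:2]
sigma = np.sqrt(rss[0] / (obs.sum() - 2))

M = 20
estimates, variances = [], []
for _ in range(M):
    y_imp = y_obs.copy()
    # draw imputations from the estimated predictive distribution
    y_imp[~obs] = beta[0] + beta[1] * x[~obs] + rng.normal(0.0, sigma, (~obs).sum())
    estimates.append(y_imp.mean())                 # analysis model: mean of y
    variances.append(y_imp.var(ddof=1) / n)

qbar = np.mean(estimates)                          # pooled estimate
B = np.var(estimates, ddof=1)                      # between-imputation variance
W = np.mean(variances)                             # within-imputation variance
T = W + (1 + 1 / M) * B                            # Rubin's total variance
```

A fully proper implementation would also draw the imputation-model parameters from their posterior for each imputation rather than reusing point estimates.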

  15. Bell-Type Inequalities for Bivariate Maps on Orthomodular Lattices

    Science.gov (United States)

    Pykacz, Jarosław; Valášková, L'ubica; Nánásiová, Ol'ga

    2015-08-01

    Bell-type inequalities on orthomodular lattices, in which conjunctions of propositions are not modeled by meets but by maps for simultaneous measurements (-maps), are studied. It is shown, that the most simple of these inequalities, that involves only two propositions, is always satisfied, contrary to what happens in the case of traditional version of this inequality in which conjunctions of propositions are modeled by meets. Equivalence of various Bell-type inequalities formulated with the aid of bivariate maps on orthomodular lattices is studied. Our investigations shed new light on the interpretation of various multivariate maps defined on orthomodular lattices already studied in the literature. The paper is concluded by showing the possibility of using -maps and -maps to represent counterfactual conjunctions and disjunctions of non-compatible propositions about quantum systems.

  16. Appropriateness of Probit-9 in development of quarantine treatments for timber and timber commodities

    Science.gov (United States)

    Marcus Schortemeyer; Ken Thomas; Robert A. Haack; Adnan Uzunovic; Kelli Hoover; Jack A. Simpson; Cheryl A. Grgurinovic

    2011-01-01

    Following the increasing international phasing out of methyl bromide for quarantine purposes, the development of alternative treatments for timber pests becomes imperative. The international accreditation of new quarantine treatments requires verification standards that give confidence in the effectiveness of a treatment. Probit-9 mortality is a standard for treatment...

  17. Identifying the Factors Influence Turkish Deposit Banks to Join Corporate Social Responsibility Activities by Using Panel Probit Method

    Directory of Open Access Journals (Sweden)

    Serhat Yuksel

    2017-02-01

    Full Text Available This study aims to determine the influencing factors of the banks to join corporate social responsibility activities. Within this scope, annual data of 23 deposit banks in Turkey for the periods between 2005 and 2015 was taken into the consideration. In addition to this situation, panel probit model was used in the analysis so as to achieve this objective. According to the results of the analysis, it was determined that there is a negative relationship between CSR activities and nonperforming loans ratio. This situation shows that banks do not prefer to make social responsibility activities in case of higher financial losses. In addition to this situation, it was also identified that there is a positive relationship between return on asset and corporate social responsibility activities of the banks. In other words, it can be understood that Turkish deposit banks, which have higher profitability, joint more CSR activities in comparison with others.

  18. Stress-strength reliability for general bivariate distributions

    Directory of Open Access Journals (Sweden)

    Alaa H. Abdel-Hamid

    2016-10-01

    Full Text Available An expression for the stress-strength reliability R = P(X1 < X2) is obtained when (X1, X2) follows a general bivariate compound distribution. Such distributions include the bivariate compound Weibull, bivariate compound Gompertz and bivariate compound Pareto, among others. In the parametric case, the maximum likelihood estimates of the parameters and the reliability function R are obtained. In the non-parametric case, point and interval estimates of R are developed using Govindarajulu's asymptotic distribution-free method when X1 and X2 are dependent. An example is given when the population distribution is bivariate compound Weibull. Simulation is performed, based on different sample sizes, to study the performance of the estimates.
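The non-parametric point estimate of R has a one-line form: the proportion of sample pairs with x1 < x2. Independent Weibull stress and strength variables are assumed here for illustration (the paper treats dependent pairs), so the estimate can be checked against a closed form.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
x1 = 1.0 * rng.weibull(1.5, size=n)        # stress, scale 1, shape 1.5
x2 = 2.0 * rng.weibull(1.5, size=n)        # strength, scale 2, shape 1.5
r_hat = np.mean(x1 < x2)                   # nonparametric estimate of R = P(X1 < X2)
# for independent Weibulls with equal shape k and scales s1, s2:
# R = s2**k / (s1**k + s2**k)
r_exact = 2.0**1.5 / (1.0 + 2.0**1.5)
```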

  19. Ordered Probit Analysis of Consumers’ Preferences for Milk and Meat Quality Attributes in the Emerging Cities of Southern India

    Directory of Open Access Journals (Sweden)

    S. PRIYADHARSINI

    2017-09-01

    Full Text Available In order to assess consumer preferences for milk and meat quality attributes, a study was carried out in two second-tier cities of Tamil Nadu. Personal interviews were conducted to collect data from 160 respondents chosen through a multistage sampling procedure in each of the two cities selected for this study. The ordered probit model fitted for the attributes of milk showed that family size had a significant positive association with preference for the texture, low fat and low price of milk; educated consumers paid greater attention to the taste, safety, flavour, packaging and low-fat attributes of milk; and low-income consumers attached less importance to most of the attributes of milk. The ordered probit model for meat revealed that as family size increased, consumers were likely to give more importance to ageing and tenderness and less importance to the leanness of meat. Male consumers paid greater attention to colour, while females were not concerned with tenderness, cooking quality and price. As the education level increased, consumers became more and more quality and price conscious. Households with children attached more importance to the tenderness and taste attributes of meat, whereas households with aged people opted for the colour, taste, tenderness, cooking quality, leanness and price attributes. Low-income consumers attached less importance to quality attributes, and respondents performing more physical activity paid less attention to leanness and more to the price of meat. This suggests the need to enhance the production of quality livestock products, together with developing a well-organized distribution system.
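An ordered probit of the kind fitted in this study can be sketched from first principles: category probabilities are differences of normal CDFs between cutpoints, maximized over the slope and the cutpoints. The latent model and all parameter values below are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 4000
x = rng.normal(size=n)
z = 1.2 * x + rng.normal(size=n)           # latent preference score
y = np.digitize(z, [-0.8, 0.7])            # three ordered categories: 0, 1, 2

def neg_loglik(theta):
    b, c1, log_gap = theta
    # exp(log_gap) keeps the second cutpoint above the first
    cuts = np.array([-np.inf, c1, c1 + np.exp(log_gap), np.inf])
    cdf = norm.cdf(cuts[:, None] - b * x)
    p = np.diff(cdf, axis=0)[y, np.arange(n)]      # P(y_i = observed category)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(neg_loglik, x0=np.array([0.0, -1.0, 0.0]), method="Nelder-Mead")
```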

  20. STUDI PERBANDINGAN ANTARA ALGORITMA BIVARIATE MARGINAL DISTRIBUTION DENGAN ALGORITMA GENETIKA

    Directory of Open Access Journals (Sweden)

    Chastine Fatichah

    2006-01-01

    Full Text Available The Bivariate Marginal Distribution Algorithm is an extension of the Estimation of Distribution Algorithm. This heuristic algorithm introduces a new approach to recombination for generating new individuals, without the crossover and mutation processes of a genetic algorithm. The Bivariate Marginal Distribution Algorithm uses the connectivity between pairs of gene variables in recombination to generate new individuals, and this connectivity is discovered during the optimization process. In this research, the performance of a genetic algorithm with one-point crossover is compared with that of the Bivariate Marginal Distribution Algorithm on the Onemax problem, the De Jong F2 function, and the Traveling Salesman Problem. The experimental results show that the performance of both algorithms depends on their respective parameters and on the population size used. For Onemax with a small problem size, the genetic algorithm performs better, requiring fewer iterations and reaching the optimum faster, whereas the Bivariate Marginal Distribution Algorithm yields better optimization results for Onemax with a large problem size. For the De Jong F2 function, the genetic algorithm outperforms the Bivariate Marginal Distribution Algorithm in number of iterations and time. For the Traveling Salesman Problem, the Bivariate Marginal Distribution Algorithm produces better optimization results than the genetic algorithm.

  1. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
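A BF copula itself is easy to sample: draw from the comonotone, independent, or countermonotone piece according to the three mixture weights. The weights below are assumed for illustration; for this family, Spearman's rho equals the comonotone weight minus the countermonotone weight.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
alpha, beta, gamma = 0.5, 0.3, 0.2        # weights of M, Pi, W (sum to 1)
u = rng.random(n)
comp = rng.choice(3, size=n, p=[alpha, beta, gamma])
v = np.where(comp == 0, u,                 # comonotone piece: v = u
    np.where(comp == 1, rng.random(n),     # independence piece: fresh uniform
             1.0 - u))                     # countermonotone piece: v = 1 - u
rho = np.corrcoef(u, v)[0, 1]              # Spearman's rho = alpha - gamma
```

The patched scheme in the paper applies this three-weight structure locally on sub-rectangles of the unit square instead of globally.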

  2. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews

    NARCIS (Netherlands)

    Reitsma, Johannes B.; Glas, Afina S.; Rutjes, Anne W. S.; Scholten, Rob J. P. M.; Bossuyt, Patrick M.; Zwinderman, Aeilko H.

    2005-01-01

    Background and Objectives: Studies of diagnostic accuracy most often report pairs of sensitivity and specificity. We demonstrate the advantage of using bivariate meta-regression models to analyze such data. Methods: We discuss the methodology of both the summary Receiver Operating Characteristic

  3. A bivariate optimal replacement policy for a multistate repairable system

    International Nuclear Information System (INIS)

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.

    2007-01-01

    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results

  4. A comparison of bivariate and univariate QTL mapping in livestock populations

    Directory of Open Access Journals (Sweden)

    Sorensen Daniel

    2003-11-01

    Full Text Available Abstract This study presents a multivariate, variance component-based QTL mapping model implemented via restricted maximum likelihood (REML. The method was applied to investigate bivariate and univariate QTL mapping analyses, using simulated data. Specifically, we report results on the statistical power to detect a QTL and on the precision of parameter estimates using univariate and bivariate approaches. The model and methodology were also applied to study the effectiveness of partitioning the overall genetic correlation between two traits into a component due to many genes of small effect, and one due to the QTL. It is shown that when the QTL has a pleiotropic effect on two traits, a bivariate analysis leads to a higher statistical power of detecting the QTL and to a more precise estimate of the QTL's map position, in particular in the case when the QTL has a small effect on the trait. The increase in power is most marked in cases where the contributions of the QTL and of the polygenic components to the genetic correlation have opposite signs. The bivariate REML analysis can successfully partition the two components contributing to the genetic correlation between traits.

  5. Bivariate Developmental Relations between Calculations and Word Problems: A Latent Change Approach.

    Science.gov (United States)

    Gilbert, Jennifer K; Fuchs, Lynn S

    2017-10-01

    The relation between 2 forms of mathematical cognition, calculations and word problems, was examined. Across grades 2-3, performance of 328 children (mean starting age 7.63 [ SD =0.43]) was assessed 3 times. Comparison of a priori latent change score models indicated a dual change model, with consistently positive but slowing growth, described development in each domain better than a constant or proportional change model. The bivariate model including change models for both calculations and word problems indicated prior calculation performance and change were not predictors of subsequent word-problem change, and prior word-problem performance and change were not predictors of subsequent calculation change. Results were comparable for boys versus girls. The bivariate model, along with correlations among intercepts and slopes, suggest calculation and word-problem development are related, but through an external set of overlapping factors. Exploratory supplemental analyses corroborate findings and provide direction for future study.

  6. An Instrumental Variable Probit (IVP Analysis on Depressed Mood in Korea: The Impact of Gender Differences and Other Socio-Economic Factors

    Directory of Open Access Journals (Sweden)

    Lara Gitto

    2015-08-01

    …socio-economic factors (such as education, residence in metropolitan areas, and so on). As the results of the Wald test carried out after the estimations did not allow rejection of the null hypothesis of endogeneity, a probit model was run too. Results: Overall, women tend to develop depression more frequently than men. There is an inverse effect of education on depressed mood (a probability of -24.6% of reporting a depressed mood with high school education, as emerges from the probit model marginal effects), while marital status and the number of family members may act as protective factors (a probability of reporting a depressed mood of -1.0% for each family member). Depression is significantly associated with socio-economic conditions, such as work and income. Living in metropolitan areas is inversely correlated with depression (a probability of -4.1% of reporting a depressed mood, estimated through the probit model): this could be explained by considering that, in rural areas, people rarely have immediate access to high-quality health services. Conclusion: This study outlines the factors that are more likely to impact depression, and applies an IVP model to take into account the potential endogeneity of some of the predictors of depressed mood, such as female participation in the workforce and health status. A probit model has been estimated too. Depression is associated with a wide range of socio-economic factors, although the strength and direction of the association can differ by gender. Prevention approaches to counter depressive symptoms might take into consideration the evidence offered by the present study.

  7. Concomitants of Order Statistics from Bivariate Inverse Rayleigh Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Aleem

    2006-01-01

    Full Text Available The probability density function (pdf) of the rth, 1 ≤ r ≤ n, concomitant of order statistics and the joint pdf of the rth and sth, 1 ≤ r < s ≤ n, concomitants of order statistics from the Bivariate Inverse Rayleigh Distribution are obtained, together with their moments and product moments. Its percentiles are also obtained.
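The object of study is simple to construct numerically: the concomitant of the r-th order statistic is the y-value paired with the r-th smallest x. The bivariate sample below is assumed for illustration (a linear Gaussian model rather than the bivariate inverse Rayleigh distribution).

```python
import numpy as np

rng = np.random.default_rng(9)
n = 9
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)   # dependent second component
order = np.argsort(x)
concomitants = y[order]       # concomitants[r-1] accompanies the r-th order statistic of x
r = 5
y_r = concomitants[r - 1]     # concomitant of the 5th order statistic
```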

  8. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Groundwater potential analysis prepares better comprehension of hydrological settings of different regions. This study shows the potency of two GIS-based data driven bivariate techniques namely statistical index (SI) and Dempster–Shafer theory (DST) to analyze groundwater potential in Broujerd region of Iran.

  9. Dissecting the correlation structure of a bivariate phenotype ...

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Genetics; Volume 84; Issue 2. Dissecting the correlation structure of a bivariate phenotype: common genes or shared environment? ... High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both.

  10. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate function spline, built and algorithmically implemented in previous papers. The properties typical of this family of splines impact the field of computer graphics in particular that of the reverse engineering.

  11. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    Science.gov (United States)

    Zscheischler, Jakob; Orth, Rene; Seneviratne, Sonia I.

    2017-07-01

    Crops are vital for human society. Crop yields vary with climate and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have investigated mostly linear relationships between temperature and precipitation and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on crop yields. Impacts are, however, often non-linearly related to multivariate climate conditions. Here we derive bivariate return periods of climate conditions as indicators for climate variability along different temperature-precipitation gradients. We show that in Europe, linear models based on bivariate return periods of specific climate conditions explain on average significantly more crop yield variability (42 %) than models relying directly on temperature and precipitation as predictors (36 %). Our results demonstrate that most often crop yields increase along a gradient from hot and dry to cold and wet conditions, with lower yields associated with hot and dry periods. The majority of crops are most sensitive to climate conditions in summer and to maximum temperatures. The use of bivariate return periods allows the integration of non-linear impacts into climate-crop yield analysis. This offers new avenues to study the link between climate and crop yield variability and suggests that they are possibly more strongly related than what is inferred from conventional linear models.

  12. Is the probit 9 security level appropriate for disinfestation using gamma-radiation

    International Nuclear Information System (INIS)

    Ohta, A.T.; Kaneshiro, K.Y.; Kurihara, J.S.; Kanegawa, K.M.; Nagamine, L.R.

    1985-01-01

    The probit 9 concept requires that a given treatment result in 99.9968 percent mortality in an estimated population of 100,000 individuals. The USDA-Hawaiian Fruit Fly Investigations Laboratory has determined that 0.26 kGy is the minimum absorbed dose of gamma-radiation required to prevent adult emergence of the three species of fruit flies in Hawaii: the Mediterranean fruit fly, Ceratitis capitata; the Oriental fruit fly, Dacus dorsalis; and the melon fly, Dacus cucurbitae. However, at dosages higher than 0.26 kGy, the authors observed relatively high rates of egg hatch (10-30 percent). In addition, when eggs are treated at 0.26 kGy, those larvae that do hatch may develop into third instar larvae, and their feeding may decrease the marketability of the fruits. Furthermore, there is some uncertainty as to whether or not importing countries would accept fruits with any living larvae in the shipment. For these reasons, the authors tried to determine the minimum absorbed dosages required to obtain mortality in mature eggs and larvae of the medfly. Results of the research showed that although high egg and larval mortality was observed at dosages of 0.50 to 0.60 kGy in nearly all of the fruit types and varieties studied, 100 percent mortality of mature eggs and larvae was not attained at these dosages. Nevertheless, the authors think that an increase in the minimum absorbed dose higher than that determined using the probit 9 concept (i.e., 0.26 kGy) should be considered because they were able to ascertain that, at dosages from 0.40 to 0.60 kGy, not only is egg hatch greatly reduced but the larvae hatching from these eggs developed only to the late first or early second larval instar stages.
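    The 99.9968 percent figure follows directly from the probit scale, where probit = standard-normal quantile + 5, so probit 9 corresponds to z = 4; a quick check:

```python
from scipy.stats import norm

# Probit scale = standard normal quantile + 5, so probit 9 <=> z = 4.
mortality = norm.cdf(9 - 5)
survivors_per_100k = (1 - mortality) * 100_000

print(f"{mortality:.6%}")             # 99.996833%
print(round(survivors_per_100k, 1))   # about 3.2 survivors per 100,000
```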

  13. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    Science.gov (United States)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic condition. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range under the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.

  14. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    Full Text Available The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable to be applied to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied on the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.
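    The rotation-frequency indicator described above can be sketched for a single real-valued IMF using `scipy.signal.hilbert`; the sampling rate and the synthetic test IMF below are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

# The analytic signal of an IMF traces a circle in the complex plane; its
# mean phase velocity is the average rotation frequency.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
imf = np.sin(2 * np.pi * 2.0 * t)        # synthetic IMF rotating at 2 Hz

analytic = hilbert(imf)                  # real IMF -> analytic signal
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

print(round(inst_freq.mean(), 2))        # close to 2.0 Hz
```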

  15. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  16. Spectrum-based estimators of the bivariate Hurst exponent

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2014-01-01

    Roč. 90, č. 6 (2014), art. 062802 ISSN 1539-3755 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: bivariate Hurst exponent * power-law cross-correlations * estimation Subject RIV: AH - Economics Impact factor: 2.288, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kristoufek-0436818.pdf

  17. Robust bivariate error detection in skewed data with application to historical radiosonde winds

    KAUST Repository

    Sun, Ying

    2017-01-18

    The global historical radiosonde archives date back to the 1920s and contain the only directly observed measurements of temperature, wind, and moisture in the upper atmosphere, but they contain many random errors. Most of the focus on cleaning these large datasets has been on temperatures, but winds are important inputs to climate models and in studies of wind climatology. The bivariate distribution of the wind vector does not have elliptical contours but is skewed and heavy-tailed, so we develop two methods for outlier detection based on the bivariate skew-t (BST) distribution, using either distance-based or contour-based approaches to flag observations as potential outliers. We develop a framework to robustly estimate the parameters of the BST and then show how the tuning parameter to get these estimates is chosen. In simulation, we compare our methods with one based on a bivariate normal distribution and a nonparametric approach based on the bagplot. We then apply all four methods to the winds observed for over 35,000 radiosonde launches at a single station and demonstrate differences in the number of observations flagged across eight pressure levels and through time. In this pilot study, the method based on the BST contours performs very well.

  18. Multiresolution transmission of the correlation modes between bivariate time series based on complex network theory

    Science.gov (United States)

    Huang, Xuan; An, Haizhong; Gao, Xiangyun; Hao, Xiaoqing; Liu, Pengpeng

    2015-06-01

    This study introduces an approach to study the multiscale transmission characteristics of the correlation modes between bivariate time series. The correlation between the bivariate time series fluctuates over time. The transmission among the correlation modes exhibits a multiscale phenomenon, which provides richer information. To investigate the multiscale transmission of the correlation modes, this paper describes a hybrid model integrating wavelet analysis and complex network theory to decompose and reconstruct the original bivariate time series into sequences in a joint time-frequency domain and defines the correlation modes at each time-frequency domain. We chose the crude oil spot and futures prices as the sample data. The empirical results indicate that the main duration of volatility (32-64 days) for the strongly positive correlation between the crude oil spot price and the futures price provides more useful information for investors. Moreover, the weighted degree, weighted indegree and weighted outdegree of the correlation modes follow power-law distributions. The correlation fluctuation strengthens the extent of persistence over the long term, whereas persistence weakens over the short and medium term. The primary correlation modes dominating the transmission process and the major intermediary modes in the transmission process are clustered both in the short and long term.

  19. Intravitreal bevacizumab injection alone or combined with triamcinolone versus macular photocoagulation in bilateral diabetic macular edema; application of bivariate generalized linear mixed model with asymmetric random effects in a subgroup of a clinical trial.

    Science.gov (United States)

    Yaseri, Mehdi; Zeraati, Hojjat; Mohammad, Kazem; Soheilian, Masoud; Ramezani, Alireza; Eslani, Medi; Peyman, Gholam A

    2014-01-01

    To compare the efficacy of intravitreal bevacizumab (IVB) injection alone or with intravitreal triamcinolone acetonide (IVB/IVT) versus macular photocoagulation (MPC) in bilateral diabetic macular edema (DME). In this study we revisited data from a subset of subjects previously enrolled in a randomized clinical trial. The original study included 150 eyes randomized to three treatment arms: 1.25 mg IVB alone, combined injection of 1.25 mg IVB and 2 mg IVT, and focal or modified grid MPC. To eliminate the possible effects of systemic confounders, we selected fellow eyes of bilaterally treated subjects who had undergone different treatments; eventually 30 eyes of 15 patients were re-evaluated at baseline, 6, 12, 18, and 24 months. Using mixed model analysis, we compared the treatment protocols regarding visual acuity (VA) and central macular thickness (CMT). Improvement in VA in the IVB group was significantly greater compared to MPC at months 6 and 12 (P = 0.037 and P = 0.035, respectively) but this difference did not persist thereafter up to 24 months. Other levels of VA were comparable at different follow-up intervals (all P > 0.05). The only significant difference in CMT was observed in favor of the IVB group as compared to IVB/IVT group at 24 months (P = 0.048). Overall VA was superior in IVB group as compared to MPC up to 12 months. Although the IVB group showed superiority regarding CMT reduction over 24 months as compared to IVB/IVT group, it was comparable to the MPC group through the same period of follow up.

  20. Evaluation of Prevalence and Related Factors of Pediatric Asthma in Children Under Six Years Old With Logistic Regression and Probit

    Directory of Open Access Journals (Sweden)

    AR Rajaeifard

    2011-08-01

    Full Text Available Introduction & Objective: Asthma is a chronic inflammatory airway disease. Asthma affects one in 13 school age children and is a leading cause of school absenteeism. It seems that the prevalence of asthma is increasing worldwide. Many factors are identified and reported as factors related to asthma. This study was carried out to determine the prevalence of asthma and associated factors in 600 children under six years using logistic regression and probit. Materials & Methods: This cross-sectional study was conducted on 600 children under six years old. The questionnaire was constructed based on the ISSAC questionnaire and its reliability was determined with a pilot study, yielding a Cronbach's alpha of 69 percent. Cluster sampling based on household records as clusters was performed. Questionnaires were completed by trained staff under supervision of an expert and by interviewing parents and children. Results: The prevalence of asthma was estimated to be 3.10 (7.89 to 12.78) percent. Based on fitting models to the data, factors such as gender, maternal nutrition, exclusive breast feeding to 6 months, smoking at home by a family member and a family history of respiratory allergy were significantly associated with asthma prevalence (p-value ≤ 0.05). The results also demonstrated that both models are almost identical in evaluating the data. Conclusion: This study showed that the estimated asthma prevalence is equal to the average prevalence reported in Iran. Protective factors such as exclusive breast feeding can be incorporated as a strategy in children's health care programs and should be given much more consideration.
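    The finding that the two models are almost identical is easy to reproduce: for binary outcomes, logistic and probit links typically give nearly the same fitted probabilities. A self-contained sketch on synthetic data (the prevalences and the single binary predictor are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic data: one binary risk factor (e.g. smoking at home) and an
# asthma-like outcome with assumed prevalences 5% (unexposed) / 15% (exposed).
n = 2000
x = rng.integers(0, 2, n)
y = rng.random(n) < np.where(x == 1, 0.15, 0.05)
X = np.column_stack([np.ones(n), x])

def nll(beta, cdf):
    # Negative Bernoulli log-likelihood under the given link cdf.
    p = np.clip(cdf(X @ beta), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

beta_logit = minimize(nll, [0.0, 0.0], args=(expit,)).x
beta_probit = minimize(nll, [0.0, 0.0], args=(norm.cdf,)).x

# Fitted prevalence among exposed children under each link:
p_logit = expit(beta_logit.sum())
p_probit = norm.cdf(beta_probit.sum())
print(round(p_logit, 3), round(p_probit, 3))   # nearly identical
```

    With a saturated binary predictor both links recover the group proportions, so the fitted probabilities coincide; with continuous covariates they differ only slightly in the tails.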

  1. Bivariate pointing movements on large touch screens: investigating the validity of a refined Fitts' Law.

    Science.gov (United States)

    Bützler, Jennifer; Vetter, Sebastian; Jochems, Nicole; Schlick, Christopher M

    2012-01-01

    On the basis of three empirical studies, Fitts' Law was refined for bivariate pointing tasks on large touch screens. The first study investigated different target width parameters. The second study considered the effect of the motion angle. Based on the results of the two studies, a refined model for movement time in human-computer interaction was formulated. A third study, which is described here in detail, concerns the validation of the refined model. For the validation study 20 subjects had to execute a bivariate pointing task on a large touch screen. In the experimental task 250 rectangular target objects were displayed at a randomly chosen position on the screen covering a broad range of ID values (ID = [1.01; 4.88]). Compared to existing refinements of Fitts' Law, the new model shows the highest predictive validity. A promising field of application of the model is the ergonomic design and evaluation of project management software. By using the refined model, software designers can calculate a priori the appropriate angular position and the size of buttons, menus or icons.
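    Fitts-type models predict movement time from an index of difficulty. The sketch below uses the common univariate Shannon form MT = a + b·log2(D/W + 1) with invented coefficients, not the refined bivariate model's fitted values:

```python
import numpy as np

# Illustrative coefficients (assumed): intercept a in seconds, slope b in s/bit.
a, b = 0.1, 0.15
D, W = 480.0, 64.0        # movement distance and target width in pixels (assumed)

ID = np.log2(D / W + 1)   # Shannon index of difficulty in bits
MT = a + b * ID           # predicted movement time in seconds
print(round(ID, 2), round(MT, 3))
```

    A bivariate refinement additionally feeds target height and motion angle into the ID term; the linear MT structure stays the same.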

  2. Bivariate spline solution of time dependent nonlinear PDE for a population density over irregular domains.

    Science.gov (United States)

    Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George

    2015-12-01

    We study a time dependent partial differential equation (PDE) which arises from classic models in ecology involving logistic growth with Allee effect by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE. A convergence analysis of the algorithm is presented. We present some simulations of population development over some irregular domains. Finally, we discuss applications in epidemiology and other ecological problems. Copyright © 2015 Elsevier Inc. All rights reserved.
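    The reaction term in such models is logistic growth with a (strong) Allee effect: densities below a threshold A decline while densities above it approach the carrying capacity K. A space-free forward-Euler sketch with illustrative parameter values:

```python
# Logistic growth with a strong Allee effect: f(u) = r*u*(1 - u/K)*(u/A - 1).
# Parameter values are illustrative, not taken from the paper.
def allee(u, r=1.0, K=1.0, A=0.2):
    return r * u * (1 - u / K) * (u / A - 1)

dt, steps = 0.01, 5000
above, below = 0.3, 0.1          # initial densities above / below the threshold A
for _ in range(steps):
    above += dt * allee(above)
    below += dt * allee(below)

print(round(above, 2), round(below, 2))   # 1.0 (persists) and 0.0 (dies out)
```

    The full model couples this reaction term with diffusion over an irregular domain, which is where the bivariate spline discretization comes in.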

  3. On the construction of bivariate exponential distributions with an arbitrary correlation coefficient

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    In this paper we use a concept of multivariate phase-type distributions to define a class of bivariate exponential distributions. This class has the following three appealing properties. Firstly, we may construct a pair of exponentially distributed random variables with any feasible correlation coefficient (also negative). Secondly, the class satisfies that any linear combination (projection) of the marginal random variables is a phase-type distribution; the latter property is potentially important for the development of hypothesis testing in linear models. Thirdly, it is very easy to simulate ...

  4. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
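    One standard construction consistent with this record transforms two independent standard normals into a correlated pair; a sketch with a sample-correlation check:

```python
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, s1, s2, rho, rng):
    """Generate n pairs with given means, standard deviations, and correlation."""
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + s1 * z1
    # Mixing z1 into y with weight rho induces exactly correlation rho.
    y = mu2 + s2 * (rho * z1 + np.sqrt(1 - rho**2) * z2)
    return x, y

rng = np.random.default_rng(42)
x, y = bivariate_normal_pairs(100_000, 1.0, -2.0, 2.0, 0.5, 0.8, rng)
print(round(np.corrcoef(x, y)[0, 1], 2))   # ≈ 0.8
```

    This is the 2×2 Cholesky factorization of the covariance matrix written out by hand; it is exact up to the quality of the underlying normal generator, matching the abstract's claim.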

  5. Bivariate Drought Analysis Using Streamflow Reconstruction with Tree Ring Indices in the Sacramento Basin, California, USA

    Directory of Open Access Journals (Sweden)

    Jaewon Kwak

    2016-03-01

    Full Text Available Long-term streamflow data are vital for analysis of hydrological droughts. Using an artificial neural network (ANN) model and nine tree-ring indices, this study reconstructed the annual streamflow of the Sacramento River for the period from 1560 to 1871. Using the reconstructed streamflow data, the copula method was used for bivariate drought analysis, deriving a hydrological drought return period plot for the Sacramento River basin. Results showed strong correlation among drought characteristics, and the drought with a 20-year return period (17.2 million acre-feet (MAF) per year) in the Sacramento River basin could be considered a critical level of drought for water shortages.
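    A joint ("AND") return period from a copula can be sketched as follows; the Gumbel family and the θ value are illustrative assumptions, not the fitted model from the study:

```python
import numpy as np

# Gumbel copula: C(u, v) = exp(-(((-ln u)^theta + (-ln v)^theta))^(1/theta)).
def gumbel_copula(u, v, theta):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

theta = 2.0          # dependence parameter (assumed)
u = v = 0.95         # marginal non-exceedance of, e.g., drought duration and severity

# "AND" return period for annual data: both variables exceed their thresholds.
p_joint_exceed = 1 - u - v + gumbel_copula(u, v, theta)
T_and = 1 / p_joint_exceed
print(round(T_and, 1))   # ≈ 33.3 years
```

    Under independence the same thresholds would give 1/(0.05²) = 400 years; positive dependence makes the joint extreme far more frequent.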

  6. A bivariate space-time downscaler under space and time misalignment.

    Science.gov (United States)

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M

    2010-12-01

    Ozone and particulate matter PM(2.5) are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally feasible over large spatial regions and long periods of time. Due to the association between concentration levels of the two environmental contaminants, it is expected that information regarding one will help to improve prediction of the other. Misalignment is an obvious issue since the monitoring networks for the two contaminants only partly intersect and because the collection rate for PM(2.5) is typically less frequent than that for ozone. Extending previous work in Berrocal et al. (2009), we introduce a bivariate downscaler that provides a flexible class of bivariate space-time assimilation models. We discuss computational issues for model fitting and analyze a dataset for ozone and PM(2.5) for the ozone season during year 2002. We show a modest improvement in predictive performance, not surprising in a setting where we can anticipate only a small gain.

  7. A public perspective on the adoption of microgeneration technologies in New Zealand: A multivariate probit approach

    International Nuclear Information System (INIS)

    Baskaran, Ramesh; Managi, Shunsuke; Bendig, Mirko

    2013-01-01

    The growing demand for electricity in New Zealand has led to the construction of new hydro-dams or power stations that have had environmental, social and cultural effects. These effects may drive increases in electricity prices, as such prices reflect the cost of running existing power stations as well as building new ones. This study uses Canterbury and Central Otago as case studies because both regions face similar issues in building new hydro-dams and ever-increasing electricity prices that will eventually prompt households to buy power at higher prices. One way for households to respond to these price changes is to generate their own electricity through microgeneration technologies (MGT). The objective of this study is to investigate public perception and preferences regarding MGT and to analyze the factors that influence people’s decision to adopt such new technologies in New Zealand. The study uses a multivariate probit approach to examine households’ willingness to adopt any one MGT system or a combination of the MGT systems. Our findings provide valuable information for policy makers and marketers who wish to promote effective microgeneration technologies. - Highlights: ► We examine New Zealand households’ awareness level of microgeneration technologies (MGT) and empirically explore the factors that determine people’s willingness to adopt MGT. ► The households are interested and willing to adopt the MGT systems. ► Noticeable heterogeneity exists between groups of households in adopting the MGT. ► No significant regional difference exists in promoting solar hot water policies. ► Public and private sector incentives are important in promoting the MGT.

  8. Selection effects in the bivariate brightness distribution for spiral galaxies

    International Nuclear Information System (INIS)

    Phillipps, S.; Disney, M.

    1986-01-01

    The joint distribution of total luminosity and characteristic surface brightness (the bivariate brightness distribution) is investigated for a complete sample of spiral galaxies in the Virgo cluster. The influence of selection and physical limits of various kinds on the apparent distribution are detailed. While the distribution of surface brightness for bright galaxies may be genuinely fairly narrow, faint galaxies exist right across the (quite small) range of accessible surface brightnesses so no statement can be made about the true extent of the distribution. The lack of high surface brightness bright galaxies in the Virgo sample relative to an overall RC2 sample (mostly field galaxies) supports the contention that the star-formation rate is reduced in the inner region of the cluster for environmental reasons. (author)

  9. Bivariate genome-wide association analyses identified genetic pleiotropic effects for bone mineral density and alcohol drinking in Caucasians.

    Science.gov (United States)

    Lu, Shan; Zhao, Lan-Juan; Chen, Xiang-Ding; Papasian, Christopher J; Wu, Ke-Hao; Tan, Li-Jun; Wang, Zhuo-Er; Pei, Yu-Fang; Tian, Qing; Deng, Hong-Wen

    2017-11-01

    Several studies indicated bone mineral density (BMD) and alcohol intake might share common genetic factors. The study aimed to explore potential SNPs/genes related to both phenotypes in US Caucasians at the genome-wide level. A bivariate genome-wide association study (GWAS) was performed in 2069 unrelated participants. Regular drinking was graded as 1, 2, 3, 4, 5, or 6, representing drinking alcohol never, less than once, once or twice, three to six times, seven to ten times, or more than ten times per week respectively. Hip, spine, and whole body BMDs were measured. The bivariate GWAS was conducted on the basis of a bivariate linear regression model. Sex-stratified association analyses were performed in the male and female subgroups. In males, the most significant association signal was detected in SNP rs685395 in DYNC2H1 with bivariate spine BMD and alcohol drinking (P = 1.94 × 10^-8). SNP rs685395 and five other SNPs, rs657752, rs614902, rs682851, rs626330, and rs689295, located in the same haplotype block in DYNC2H1 were the top ten most significant SNPs in the bivariate GWAS in males. Additionally, two SNPs in GRIK4 in males and three SNPs in OPRM1 in females were suggestively associated with BMDs (of the hip, spine, and whole body) and alcohol drinking. Nine SNPs in IL1RN were only suggestively associated with female whole body BMD and alcohol drinking. Our study indicated that DYNC2H1 may contribute to the genetic mechanisms of both spine BMD and alcohol drinking in male Caucasians. Moreover, our study suggested potential pleiotropic roles of OPRM1 and IL1RN in females and GRIK4 in males underlying variation of both BMD and alcohol drinking.

  10. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  11. A method of moments to estimate bivariate survival functions: the copula approach

    Directory of Open Access Journals (Sweden)

    Silvia Angela Osmetti

    2013-05-01

    Full Text Available In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions in which it is difficult to find an explicit expression for the mixed moments. Moreover, it is preferred to the maximum likelihood method for its simpler mathematical form; in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
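    The Marshall-Olkin bivariate exponential underlying this copula is easy to simulate via its common-shock representation; the rates below are arbitrary illustrative choices:

```python
import numpy as np

# Common-shock construction: X = min(Z1, Z12), Y = min(Z2, Z12), where Z1, Z2
# are individual shocks and Z12 is a shock hitting both components at once.
def marshall_olkin(n, lam1, lam2, lam12, rng):
    z1 = rng.exponential(1 / lam1, n)
    z2 = rng.exponential(1 / lam2, n)
    z12 = rng.exponential(1 / lam12, n)
    return np.minimum(z1, z12), np.minimum(z2, z12)

rng = np.random.default_rng(1)
x, y = marshall_olkin(200_000, 1.0, 2.0, 0.5, rng)

# Each marginal is exponential with rate lam_i + lam12, so the sample means
# should be close to 1/1.5 and 1/2.5.
print(round(x.mean(), 2), round(y.mean(), 2))
```

    Matching such sample moments to their theoretical expressions is exactly the kind of two-step moment estimation the abstract describes.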

  12. Epileptic seizure prediction based on a bivariate spectral power methodology.

    Science.gov (United States)

    Bandarabadi, Mojtaba; Teixeira, Cesar A; Direito, Bruno; Dourado, Antonio

    2012-01-01

    The spectral power of 5 frequently considered frequency bands (Alpha, Beta, Gamma, Theta and Delta) for 6 EEG channels is computed, and then all the possible pairwise combinations among the set of 30 features are used to create a 435-dimensional feature space. Two new feature selection methods are introduced to choose the best candidate features among those and to reduce the dimensionality of this feature space. The selected features are then fed to Support Vector Machines (SVMs) that classify the cerebral state into preictal and non-preictal classes. The outputs of the SVM are regularized using a method that accounts for the classification dynamics of the preictal class, also known as the "Firing Power" method. The results obtained using our feature selection approaches are compared with the ones obtained using the minimum Redundancy Maximum Relevance (mRMR) feature selection method. The results in a group of 12 patients of the EPILEPSIAE database, containing 46 seizures and 787 hours of multichannel recording for out-of-sample data, indicate the efficiency of the bivariate approach as well as of the two new feature selection methods. The best results presented a sensitivity of 76.09% (35 of 46 seizures predicted) and a false prediction rate of 0.15 h^-1.
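    Per-band spectral power features of this kind can be sketched with a Welch PSD per channel; the band edges below are the conventional ones, and the synthetic single-channel signal is an assumption:

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG band edges in Hz (delta/theta/alpha/beta/gamma).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs):
    # Welch PSD, then total power inside each band as one feature.
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 256
t = np.arange(0, 8, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz (alpha-band) rhythm
powers = band_powers(eeg, fs)
print(max(powers, key=powers.get))    # alpha
```

    With 5 bands on 6 channels this yields the 30 base features; the 435 bivariate features are simply all C(30, 2) pairwise combinations.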

  13. Preparation and bivariate analysis of suspensions of human chromosomes

    Energy Technology Data Exchange (ETDEWEB)

    van den Engh, G.J.; Trask, B.J.; Gray, J.W.; Langlois, R.G.; Yu, L.C.

    1985-01-01

    Chromosomes were isolated from a variety of human cell types using a HEPES-buffered hypotonic solution (pH 8.0) containing KCl, MgSO4, dithioerythritol, and RNase. The chromosomes isolated by this procedure could be stained with a variety of fluorescent stains including propidium iodide, chromomycin A3, and Hoechst 33258. Addition of sodium citrate to the stained chromosomes was found to improve the total fluorescence resolution. High-quality bivariate Hoechst vs. chromomycin fluorescence distributions were obtained for chromosomes isolated from a human fibroblast cell strain, a human colon carcinoma cell line, and human peripheral blood lymphocyte cultures. Good flow karyotypes were also obtained from primary amniotic cell cultures. The Hoechst vs. chromomycin flow karyotypes of a given cell line, made at different times and at dye concentrations varying over fourfold ranges, show little variation in the relative peak positions of the chromosomes. The size of the DNA in chromosomes isolated using this procedure ranges from 20 to 50 kilobases. The described isolation procedure is simple, it yields high-quality flow karyotypes, and it can be used to prepare chromosomes from clinical samples. 22 references, 7 figures, 1 table.

  14. Modeling Unobserved Consideration Sets for Household Panel Data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans)

    2000-01-01

    We propose a new method to model consumers' consideration and choice processes. We develop a parsimonious probit-type model for consideration and a multinomial probit model for choice, given consideration. Unlike earlier models of consideration, ours is not prone to the curse of

  15. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    Directory of Open Access Journals (Sweden)

    J. Zscheischler

    2017-07-01

    Full Text Available Crops are vital for human society. Crop yields vary with climate and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have investigated mostly linear relationships between temperature and precipitation and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on crop yields. Impacts are, however, often non-linearly related to multivariate climate conditions. Here we derive bivariate return periods of climate conditions as indicators for climate variability along different temperature–precipitation gradients. We show that in Europe, linear models based on bivariate return periods of specific climate conditions explain on average significantly more crop yield variability (42 %) than models relying directly on temperature and precipitation as predictors (36 %). Our results demonstrate that most often crop yields increase along a gradient from hot and dry to cold and wet conditions, with lower yields associated with hot and dry periods. The majority of crops are most sensitive to climate conditions in summer and to maximum temperatures. The use of bivariate return periods allows the integration of non-linear impacts into climate–crop yield analysis. This offers new avenues to study the link between climate and crop yield variability and suggests that they are possibly more strongly related than what is inferred from conventional linear models.
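
As a minimal illustration of a bivariate climate indicator, the sketch below estimates the empirical return period of a jointly "hot and dry" year from synthetic data. The series, thresholds and dependence structure are all hypothetical; the paper itself works with copula-based return periods rather than this empirical estimator:

```python
import numpy as np

# Synthetic annual summer means: temperature (degC) and precipitation (mm),
# with mild negative dependence between the two.
rng = np.random.default_rng(0)
n_years = 200
temp = rng.normal(20.0, 2.0, n_years)
precip = rng.normal(200.0, 40.0, n_years) - 5.0 * (temp - 20.0)

def hot_dry_return_period(temp, precip, t_thr, p_thr):
    """Empirical return period (years) of a 'hot and dry' year:
    temperature above t_thr AND precipitation below p_thr."""
    joint_prob = np.mean((temp > t_thr) & (precip < p_thr))
    return np.inf if joint_prob == 0 else 1.0 / joint_prob

T_hot_dry = hot_dry_return_period(temp, precip, t_thr=21.0, p_thr=200.0)
```

The return period is simply the reciprocal of the annual joint-exceedance probability, so rarer compound conditions map to longer return periods.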

  16. Historical and future drought in Bangladesh using copula-based bivariate regional frequency analysis

    Science.gov (United States)

    Mortuza, Md Rubayet; Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2018-02-01

    The study aims at a regional and probabilistic evaluation of bivariate drought characteristics to assess both past and future drought duration and severity in Bangladesh. The procedures involve applying (1) the standardized precipitation index to identify drought duration and severity, (2) regional frequency analysis to determine the appropriate marginal distributions for both duration and severity, (3) a copula model to estimate the joint probability distribution of drought duration and severity, and (4) precipitation projections from multiple climate models to assess future drought trends. Since drought duration and severity in Bangladesh are often strongly correlated and do not follow the same marginal distributions, the joint and conditional return periods of droughts are characterized using the copula-based joint distribution. The country is divided into three homogeneous regions using fuzzy clustering and multivariate discordancy and homogeneity measures. For given severity and duration values, the joint return periods for a drought to exceed both values are on average 45% larger, while those to exceed either value are 40% smaller, than the return periods from the univariate frequency analysis, which treats drought duration and severity independently. These results suggest that, compared to the bivariate drought frequency analysis, the standard univariate frequency analysis underestimates or overestimates the frequency and severity of droughts depending on how their duration and severity are related. Overall, more frequent and severe droughts are observed in the west of the country. Future drought trends based on four climate models and two scenarios show the possibility of less frequent drought in the future (2020-2100) than in the past (1961-2010).
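
The joint "and"/"or" return periods from a copula can be sketched as follows. The Clayton family, the parameter θ = 2 and the mean inter-arrival time are illustrative assumptions, not values fitted in the study:

```python
import numpy as np

def clayton(u, v, theta):
    """Clayton copula C(u, v); theta > 0 gives positive dependence."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_return_periods(u, v, theta, mu=1.0):
    """'OR' and 'AND' return periods for drought duration D and severity S,
    where u = F_D(d), v = F_S(s) are marginal non-exceedance probabilities
    and mu is the mean drought inter-arrival time in years."""
    C = clayton(u, v, theta)
    t_or = mu / (1.0 - C)            # P(D > d or S > s)
    t_and = mu / (1.0 - u - v + C)   # P(D > d and S > s)
    return t_or, t_and

t_or, t_and = joint_return_periods(u=0.9, v=0.9, theta=2.0, mu=2.0)
# The univariate return period mu / (1 - u) = 20 years lies between the
# 'or' (shorter) and 'and' (longer) joint return periods.
```

This ordering is exactly the pattern the abstract reports: the "and" return periods exceed, and the "or" return periods fall below, their univariate counterparts.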

  17. A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews.

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao

    2017-04-01

    Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood (CL) method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios, and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the nonconvergence problem, which is nontrivial when the number of studies is relatively small, the computational simplicity, and some robustness to model misspecifications. Simulation studies show that the CL method maintains high relative efficiency compared to that of the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma.
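
A minimal sketch of the working-independence idea behind a composite likelihood: each margin (logit-sensitivity, logit-specificity) contributes a univariate random-effects log-likelihood, and the two are summed as if independent, so no within-study correlation ever enters the fit. The data, parameter values and normal-normal formulation below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical meta-analysis data: logit-sensitivity and logit-specificity
# estimates from 8 studies, with (assumed equal) within-study variances.
rng = np.random.default_rng(4)
m = 8
theta_sens = rng.normal(1.2, 0.3, m)
theta_spec = rng.normal(2.0, 0.4, m)
var_within = np.full(m, 0.05)

def univ_nll(mu, log_tau2, y, v):
    """Negative log-likelihood of a univariate normal random-effects
    model: y_i ~ N(mu, v_i + tau2)."""
    s2 = v + np.exp(log_tau2)  # tau2 parameterized on the log scale
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + (y - mu) ** 2 / s2)

def composite_nll(params):
    """Composite likelihood: the two margins are combined as if they
    were independent, so the within-study correlation is never needed."""
    return (univ_nll(params[0], params[1], theta_sens, var_within)
            + univ_nll(params[2], params[3], theta_spec, var_within))

fit = minimize(composite_nll, x0=[1.0, -2.0, 2.0, -2.0], method="Nelder-Mead")
mu_sens, mu_spec = fit.x[0], fit.x[2]
pooled_sens = 1.0 / (1.0 + np.exp(-mu_sens))  # back to probability scale
```

Dropping the correlation term is what buys the robustness and convergence behaviour described in the abstract; valid standard errors then require a sandwich-type variance rather than the naive inverse Hessian.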

  18. A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.

    Science.gov (United States)

    Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue

    2014-02-01

    Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions (Social Self-Regulation and Dynamism) provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model linking diverse theoretical models and associated research literatures. © 2013 Wiley Periodicals, Inc.

  19. Measuring early or late dependence for bivariate lifetimes of twins

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus K; Hjelmborg, Jacob B

    2015-01-01

    -Oakes model. This model can be extended in several directions. One extension is to allow the dependence parameter to depend on covariates. Another extension is to model dependence via piecewise constant cross-hazard ratio models. We show how both these models can be implemented for large sample data......, and suggest a computational solution for obtaining standard errors for such models for large registry data. In addition we consider alternative models that have some computational advantages and with different dependence parameters based on odds ratios of the survival function using the Plackett distribution...

  20. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    Shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model. 1. Introduction. Most production systems are repaired or replaced when they have already failed. However, repairing a failed system may require much time and high expense, so maintaining a system to prevent ...

  1. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... These models are therefore known as computational intelligence and machine learning techniques and are used to replace physically based models. In contrast, knowledge-driven methods (KDM) use rich prior knowledge for model building based on knowledge engineering and management technologies (Azkune.

  2. On the matched pairs sign test using bivariate ranked set sampling ...

    African Journals Online (AJOL)

    BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.

  3. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    [5], Mizuno [6], Nakagawa [1, 7], Qian et al [8] and Perry [9]. Furthermore, replacement models in which a system is replaced after a shock N were proposed in Nakagawa [10]. The replacement model with multiple decision variables T, N, and k was proposed in Nakagawa and Kijima [11] and Satow and Nakagawa ...

  4. Assessing the copula selection for bivariate frequency analysis ...

    Indian Academy of Sciences (India)

    upper tail copulas (Frank, Clayton and Gaussian), if there exists asymptotic dependence in the flood characteristics. ... characteristics, and Frank, Clayton and Gaussian copulas are the appropriate copula models in ... The mean daily discharge of the Trian stream gauge from 1978 to 2013 is 527.4 m³/s and the ...

  5. GIS-Based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    driven and knowledge-driven models (Corsini ... addition, the application of the GIS-based SI technique in groundwater potential mapping .... lithology of a given area and affect the drainage density, and can be of great value for evaluating ...

  6. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi

    2012-08-27

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
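
The LagSVD idea can be sketched nonparametrically: build a 2-D histogram of angle pairs separated by a given lag and inspect its singular values. The synthetic angle sequence and bin count below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical dihedral-angle sequence along a protein chain (radians).
rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, 5000)

def lag_distribution(angles, lag, bins=36):
    """Nonparametric estimate of the bivariate lag-distribution:
    a 2-D histogram of the angle pairs (x_i, x_{i+lag})."""
    h, _, _ = np.histogram2d(angles[:-lag], angles[lag:], bins=bins,
                             range=[[-np.pi, np.pi], [-np.pi, np.pi]],
                             density=True)
    return h

def lag_spectrum(angles, lag, k=5):
    """Leading singular values of the lag-distribution matrix; in the
    LagSVD idea these summarize the sequential dependency at this lag."""
    return np.linalg.svd(lag_distribution(angles, lag), compute_uv=False)[:k]

s1 = lag_spectrum(phi, lag=1)
```

Tracking how this spectrum decays as the lag grows gives a model-free picture of how far the local sequence-structure dependency extends.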

  7. Diffusion of particles on the patchwise bivariate surfaces

    Czech Academy of Sciences Publication Activity Database

    Tarasenko, Alexander; Jastrabík, Lubomír

    2015-01-01

    Roč. 458, Feb (2015), s. 27-34 ISSN 0921-4526 R&D Projects: GA TA ČR TA01010517; GA ČR GAP108/12/1941; GA TA ČR TA03010743 Institutional support: RVO:68378271 Keywords : kinetic Monte Carlo simulations * lattice-gas model * patchwise lattice * surface diffusion Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.352, year: 2015

  8. Bivariate frequency analysis of rainfall intensity and duration for urban stormwater infrastructure design

    Science.gov (United States)

    Jun, Changhyun; Qin, Xiaosheng; Gan, Thian Yew; Tung, Yeou-Koung; De Michele, Carlo

    2017-10-01

    This study presents a storm-event-based bivariate frequency analysis approach to determine design rainfalls, in which the number, intensity and duration of actual rainstorm events were considered. To derive more realistic design storms, the occurrence probability of an individual rainstorm event was determined from the joint distribution of storm intensity and duration through a copula model. Hourly rainfall data were used at three climate stations located in Singapore, South Korea and Canada, respectively. It was found that the proposed approach could give a more realistic description of the rainfall characteristics of rainstorm events and of design rainfalls. As a result, the design rainfall quantities from actual rainstorm events at the three studied sites are consistently lower than those obtained from the conventional rainfall depth-duration-frequency (DDF) method, especially for short-duration storms (such as 1-h). This follows from the occurrence probabilities of individual rainstorm events and from a different angle on rainfall frequency analysis, and it could offer an alternative way of describing extreme rainfall properties and potentially help improve the hydrologic design of stormwater management facilities in urban areas.
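
A common way to fit a one-parameter extreme-value copula to storm intensity and duration is via Kendall's tau; for the Gumbel copula, τ = 1 − 1/θ. The synthetic event sample below is an assumption, and the study's actual data and chosen copula family may differ:

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical storm-event sample: duration (h) and mean intensity (mm/h),
# constructed with positive dependence.
rng = np.random.default_rng(5)
duration = rng.gamma(2.0, 3.0, 300)
intensity = 0.5 * duration + rng.gamma(2.0, 1.0, 300)

tau, _ = kendalltau(duration, intensity)
theta = 1.0 / (1.0 - tau)  # Gumbel copula satisfies tau = 1 - 1/theta

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v) for theta >= 1."""
    t = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-t ** (1.0 / theta))

# Joint non-exceedance probability at the two 90th-percentile margins:
p_joint = gumbel_copula(0.9, 0.9, theta)
```

From such a joint probability, the occurrence probability (and hence the return period) of an individual storm event follows directly, which is the quantity the event-based approach builds on.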

  9. Multiscale Fluctuation Features of the Dynamic Correlation between Bivariate Time Series

    Directory of Open Access Journals (Sweden)

    Meihui Jiang

    2016-01-01

    Full Text Available The fluctuation of the dynamic correlation between bivariate time series has some special features in the time-frequency domain. In order to study these fluctuation features, this paper built dynamic correlation network models using two kinds of time series as sample data. After studying the dynamic correlation networks at different time-scales, we found that the correlation between time series is a dynamic process. The correlation is strong and stable in the long term, but weak and unstable in the short and medium term. There are key correlation modes which can effectively indicate the trend of the correlation. The transmission characteristics of the correlation modes show that it is easier to judge the trend of the fluctuation of the correlation between time series from the short term to the long term. The evolution of the media capability of the correlation modes shows that the transmission media in the long term have higher value for predicting the trend of the correlation. This work not only proposes a new perspective for analyzing the correlation between time series but also provides important information for investors and decision makers.
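
The short- versus long-term behaviour of a dynamic correlation can be illustrated with sliding-window Pearson correlations at two time-scales. The synthetic series sharing a common long-run trend are assumptions for illustration; the paper builds correlation networks on top of such dynamic correlations:

```python
import numpy as np

def rolling_corr(x, y, window):
    """Pearson correlation over a sliding window: the dynamic
    correlation of the two series at one time-scale."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = np.empty(len(x) - window + 1)
    for i in range(len(out)):
        out[i] = np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
    return out

# Synthetic pair sharing a common long-run trend plus independent noise.
rng = np.random.default_rng(2)
trend = np.cumsum(rng.normal(size=600))
x = trend + rng.normal(size=600)
y = trend + rng.normal(size=600)

short_term = rolling_corr(x, y, window=20)   # weaker, less stable
long_term = rolling_corr(x, y, window=250)   # stronger, more stable
```

With a shared trend dominating at long horizons and independent noise dominating at short ones, the long-window correlation is systematically stronger and smoother, matching the pattern the abstract describes.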

  10. Bayesian bivariate meta-analysis of diagnostic test studies using integrated nested Laplace approximations.

    Science.gov (United States)

    Paul, M; Riebler, A; Bachmann, L M; Rue, H; Held, L

    2010-05-30

    For bivariate meta-analysis of diagnostic studies, likelihood approaches are very popular. However, they often run into numerical problems with possible non-convergence. In addition, the construction of confidence intervals is controversial. Bayesian methods based on Markov chain Monte Carlo (MCMC) sampling could be used, but are often difficult to implement, and require long running times and diagnostic convergence checks. Recently, a new Bayesian deterministic inference approach for latent Gaussian models using integrated nested Laplace approximations (INLA) has been proposed. With this approach MCMC sampling becomes redundant as the posterior marginal distributions are directly and accurately approximated. By means of a real data set we investigate the influence of the prior information provided and compare the results obtained by INLA, MCMC, and the maximum likelihood procedure SAS PROC NLMIXED. Using a simulation study we further extend the comparison of INLA and SAS PROC NLMIXED by assessing their performance in terms of bias, mean-squared error, coverage probability, and convergence rate. The results indicate that INLA is more stable and gives generally better coverage probabilities for the pooled estimates and less biased estimates of variance parameters. The user-friendliness of INLA is demonstrated by documented R-code. Copyright (c) 2010 John Wiley & Sons, Ltd.

  11. A "BASIC" program for calculating the LD50 by the probit method

    Directory of Open Access Journals (Sweden)

    Ramiro Castro de la Mata

    1998-01-01

    Full Text Available A BASIC-language program for the MS-DOS platform is described, designed to calculate the median lethal dose (LD50) of toxic agents using the probit transformation method. To use the program, the operator provides the doses of the toxic agent used in the acute lethal toxicity assay, the number of animals used at each dose, and the number of animals that died at that dose; the program returns the LD50 as well as its 95% fiducial limits. The definitive and provisional values of the regression line are also presented. The program additionally offers the possibility of re-analyzing the data when a closer approximation of the values is desired, as well as plotting, saving results to disk, and an option to compare two curves in order to estimate relative potencies.
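
The probit computation such a program implements can be sketched in a few lines: fit P(death) = Φ(a + b·log10 dose) by maximum likelihood and read off the LD50 as the dose at which the fitted mortality is 50%. The assay data below are hypothetical, and the fiducial limits computed by the original program are omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical acute lethal-toxicity assay: dose, animals per group, deaths.
dose = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
n = np.array([10, 10, 10, 10, 10])
deaths = np.array([1, 3, 5, 8, 10])
x = np.log10(dose)

def neg_loglik(params):
    """Binomial negative log-likelihood of the probit model
    P(death) = Phi(a + b * log10(dose))."""
    a, b = params
    p = np.clip(norm.cdf(a + b * x), 1e-10, 1 - 1e-10)
    return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = fit.x
ld50 = 10 ** (-a / b)  # dose at which the fitted mortality is 50%
```

The classical tabulated probit procedure is an iteratively reweighted version of the same fit; direct likelihood maximization, as here, reaches the same estimate without the tables.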

  12. Bivariable analysis of ventricular late potentials in high resolution ECG records

    International Nuclear Information System (INIS)

    Orosco, L; Laciar, E

    2007-01-01

    In this study, a bivariable analysis for the detection of ventricular late potentials in high-resolution electrocardiographic records is proposed. The standard time-domain analysis and the application of a time-frequency technique to high-resolution ECG records are briefly described, together with their corresponding results. In the proposed technique, the time-domain parameter QRSD and the most significant time-frequency index, EN QRS, are used as variables, and a bivariable index combining these parameters is defined. The proposed technique allows the risk of ventricular tachycardia in post-myocardial-infarction patients to be evaluated. The results show that the bivariable index discriminates between the population of patients with ventricular tachycardia and the subjects of the control group. It was also found that the bivariable technique performs well as a diagnostic test. It is concluded that, as a diagnostic test, the bivariable technique is superior to the time-domain method and to the time-frequency technique evaluated individually

  13. A note on finding peakedness in bivariate normal distribution using Mathematica

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2007-07-01

    Full Text Available Peakedness measures the concentration around the central value. A classical measure of peakedness is kurtosis, the degree of peakedness of a probability distribution. In view of the inconsistency of kurtosis in measuring the peakedness of a distribution, Horn (1983) proposed a measure of peakedness for symmetrically unimodal distributions. The objective of this paper is two-fold. First, Horn's method is extended to the bivariate normal distribution. Secondly, it is shown that the computer algebra system Mathematica can be an extremely useful tool for all sorts of computations related to the bivariate normal distribution. Mathematica programs are also provided.

  14. Causal networks clarify productivity-richness interrelations, bivariate plots do not

    Science.gov (United States)

    Grace, James B.; Adler, Peter B.; Harpole, W. Stanley; Borer, Elizabeth T.; Seabloom, Eric W.

    2014-01-01

    Perhaps no other pair of variables in ecology has generated as much discussion as species richness and ecosystem productivity, as illustrated by the reactions by Pierce (2013) and others to Adler et al.'s (2011) report that empirical patterns are weak and inconsistent. Adler et al. (2011) argued we need to move beyond a focus on simplistic bivariate relationships and test mechanistic, multivariate causal hypotheses. We feel the continuing debate over productivity–richness relationships (PRRs) provides a focused context for illustrating the fundamental difficulties of using bivariate relationships to gain scientific understanding.

  15. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
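
The kernel-density / maximal-posterior classification rule can be sketched as follows; the 2-D feature clouds, class means and equal priors are illustrative assumptions, not the paper's actual VAG features:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical 2-D features for normal and abnormal signal groups
# (gaussian_kde expects data with shape (n_dims, n_samples)).
rng = np.random.default_rng(3)
normal = rng.normal([0.0, 0.0], 0.5, size=(60, 2)).T
abnormal = rng.normal([1.5, 1.5], 0.5, size=(60, 2)).T

kde_normal = gaussian_kde(normal)
kde_abnormal = gaussian_kde(abnormal)
prior_normal = prior_abnormal = 0.5  # equal class priors assumed

def classify(points):
    """Maximal posterior probability rule: choose the class whose
    prior-weighted kernel density is larger at each test point."""
    post_n = prior_normal * kde_normal(points)
    post_a = prior_abnormal * kde_abnormal(points)
    return np.where(post_a > post_n, "abnormal", "normal")

# Columns are test points: (0.1, 0.1) and (1.6, 1.5).
labels = classify(np.array([[0.1, 1.6],
                            [0.1, 1.5]]))
```

Because the class-conditional densities are estimated nonparametrically, this rule can trace curved decision boundaries that a linear discriminant cannot, which is one way to read the accuracy gap reported above.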

  16. Diagnostic performance of des-γ-carboxy prothrombin (DCP) for hepatocellular carcinoma: a bivariate meta-analysis.

    Science.gov (United States)

    Gao, P; Li, M; Tian, Q B; Liu, Dian-Wu

    2012-01-01

    Serum markers need to be developed to specifically diagnose hepatocellular carcinoma (HCC). Des-γ-carboxy prothrombin (DCP) is a promising tool with limited expense and wide accessibility, but the reported results have been controversial. In order to review the performance of DCP for the diagnosis of HCC, a meta-analysis was performed. After a systematic review of relevant studies, the sensitivity, specificity, and positive and negative likelihood ratios (PLR and NLR, respectively) were pooled using a bivariate meta-analysis. Potential between-study heterogeneity was explored by a meta-regression model. The post-test probability and the likelihood ratio scattergram were calculated to evaluate clinical usefulness. Based on a literature review of 20 publications, the overall sensitivity, specificity, PLR and NLR of DCP for the detection of HCC were 67% (95%CI, 58%-74%), 92% (95%CI, 88%-94%), 7.9 (95%CI, 5.6-11.2) and 0.36 (95%CI, 0.29-0.46), respectively. The area under the bivariate summary receiver operating characteristic curve was 0.89 (95%CI, 0.85-0.92). Significant heterogeneity was present. In conclusion, the major role of DCP is the moderate confirmation of HCC. More prospective studies of DCP are needed in the future.
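
The post-test probabilities behind such likelihood-ratio summaries follow from Bayes' rule on the odds scale. Using the pooled PLR of 7.9 and NLR of 0.36 from the review, and a purely hypothetical 30% pre-test probability:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' rule on the odds scale: post-odds = pre-odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Pooled ratios from the review: PLR = 7.9, NLR = 0.36.
# The 30% pre-test probability is an illustrative assumption.
p_after_positive = post_test_probability(0.30, 7.9)
p_after_negative = post_test_probability(0.30, 0.36)
```

With these inputs a positive DCP result raises the probability of HCC from 30% to roughly 77%, while a negative result lowers it only to about 13%, which is why the abstract characterizes DCP as a moderate confirmatory (rather than exclusionary) test.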

  17. Accuracy of body mass index in predicting pre-eclampsia: bivariate meta-analysis

    NARCIS (Netherlands)

    Cnossen, J. S.; Leeflang, M. M. G.; de Haan, E. E. M.; Mol, B. W. J.; van der Post, J. A. M.; Khan, K. S.; ter Riet, G.

    2007-01-01

    OBJECTIVE: The objective of this study was to determine the accuracy of body mass index (BMI) (pre-pregnancy or at booking) in predicting pre-eclampsia and to explore its potential for clinical application. DESIGN: Systematic review and bivariate meta-analysis. SETTING: Medline, Embase, Cochrane

  18. First-order dominance: stronger characterization and a bivariate checking algorithm

    DEFF Research Database (Denmark)

    Range, Troels Martin; Østerdal, Lars Peter Raahave

    2018-01-01

    distributions. Utilizing that this problem can be formulated as a transportation problem with a special structure, we provide a stronger characterization of multivariate first-order dominance and develop a linear time complexity checking algorithm for the bivariate case. We illustrate the use of the checking...

  19. On the Construction of Bivariate Exponential Distributions with an Arbitrary Correlation Coefficient

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    In this article we use the concept of multivariate phase-type distributions to define a class of bivariate exponential distributions. This class has the following three appealing properties. Firstly, we may construct a pair of exponentially distributed random variables with any feasible correlation...

  20. Semi-automated detection of aberrant chromosomes in bivariate flow karyotypes

    NARCIS (Netherlands)

    Boschman, G. A.; Manders, E. M.; Rens, W.; Slater, R.; Aten, J. A.

    1992-01-01

    A method is described that is designed to compare, in a standardized procedure, bivariate flow karyotypes of Hoechst 33258 (HO)/Chromomycin A3 (CA) stained human chromosomes from cells with aberrations with a reference flow karyotype of normal chromosomes. In addition to uniform normalization of

  1. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  2. Effect of catchment properties and flood generation regime on copula selection for bivariate flood frequency analysis

    Science.gov (United States)

    Filipova, Valeriya; Lawrence, Deborah; Klempe, Harald

    2018-02-01

    Applying copula-based bivariate flood frequency analysis is advantageous because the results provide information on both the flood peak and volume. More data are, however, required for such an analysis, and it is often the case that only data series with a limited record length are available. To overcome this issue of limited record length, data regarding climatic and geomorphological properties can be used to complement statistical methods. In this paper, we present a study of 27 catchments located throughout Norway, in which we assess whether catchment properties, flood generation processes and flood regime have an effect on the correlation between flood peak and volume and, in turn, on the selection of copulas. To achieve this, the annual maximum flood events were first classified into events generated primarily by rainfall, snowmelt or a combination of these. The catchments were then classified into flood regime, depending on the predominant flood generation process producing the annual maximum flood events. A contingency table and Fisher's exact test were used to determine the factors that affect the selection of copulas in the study area. The results show that the two-parameter copulas BB1 and BB7 are more commonly selected in catchments with high steepness, high mean annual runoff and rainfall flood regime. These findings suggest that in these types of catchments, the dependence structure between flood peak and volume is more complex and cannot be modeled effectively using a one-parameter copula. The results illustrate that by relating copula types to flood regime and catchment properties, additional information can be supplied for selecting copulas in catchments with limited data.
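
The contingency-table analysis can be sketched with Fisher's exact test; the table below (copula family selected vs. flood regime) is entirely hypothetical and only illustrates the mechanics used in the study:

```python
from scipy.stats import fisher_exact

# Hypothetical contingency table relating the selected copula family
# to the flood regime of the catchment (counts of catchments):
#                      rainfall regime   snowmelt regime
# two-parameter copula        9                 2
# one-parameter copula        3                13
table = [[9, 2], [3, 13]]

odds_ratio, p_value = fisher_exact(table)
# A small p-value indicates that copula selection is associated
# with flood regime in this made-up sample.
```

Fisher's exact test is the natural choice here because the cell counts are small (27 catchments in the study), where the chi-squared approximation would be unreliable.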

  3. Probit Analysis of Carbamate-Pesticide-Toxicity at Soil-Water Interface to N2-Fixing Cyanobacterium Cylindrospermum sp

    Directory of Open Access Journals (Sweden)

    Rabindra N. Padhy

    2015-03-01

    Full Text Available Toxicity data of two carbamate insecticides, carbaryl and carbofuran, and three fungicides, ziram, zineb and mancozeb, with the rice-field N2-fixing cyanobacterium Cylindrospermum sp., obtained by in vitro growth and at the soil-water interface, were analyzed by the probit method. The growth-enhancing concentration, no-observed-effect concentration, minimum inhibitory concentration, the highest permissive concentration and the lethal concentration 100 (LC100) were determined experimentally. The LC50 values of carbaryl, carbofuran, ziram, zineb and mancozeb in N2-fixing liquid medium were 56.2, 588.8, 0.07, 4.2 and 3.4 μg/mL, respectively, whereas the corresponding LC100 values were 100.0, 1500.0, 0.17, 25.0 and 9.0 μg/mL, respectively. The LC50 values of these pesticides in the same succession in N2-fixing agar medium were 44.7, 239.9, 0.07, 1.8 and 2.3 μg/mL, respectively, whereas the corresponding LC100 values were 100.0, 600.0, 0.17, 10.0 and 7.0 μg/mL, respectively. Similar results with nitrate-supplemented liquid and agar media indicated that nitrate supplementation had a toxicity-reducing effect. The LC50 and LC100 values of toxicity in the N2-fixing liquid medium at the soil-water interface were 91.2 and 200.0 μg/mL for carbaryl, 2317 and 6000 μg/mL for carbofuran, 0.15 and 0.50 μg/mL for ziram, 16.4 and 50.0 μg/mL for zineb, and 7.2 and 25.0 μg/mL for mancozeb, respectively. Each LC100 value at the soil-water interface with a pesticide was significantly higher than its corresponding LC100 value in liquid/agar media. It can be concluded that, under N2-fixing conditions, the cyanobacterium tolerated higher levels of each pesticide at the soil-water interface.

  4. A multinomial-logit ordered-probit model for jointly analyzing crash avoidance maneuvers and crash severity

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    ' propensity to engage in various corrective maneuvers in the case of the critical event of vehicle travelling. Five lateral and speed control maneuvers are considered: “braking”, “steering”, “braking & steering”, and “other maneuvers”, in addition to a “no action” option. The analyzed data are retrieved from...

  5. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of the within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and the robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m≥50). When the
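
    The Riley working model needs only the inputs of separate univariate random-effects meta-analyses. As a hedged illustration of that per-outcome building block (not of the robust variance estimator itself), a DerSimonian-Laird pooling in plain NumPy on simulated study estimates:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Univariate random-effects pooling: y = study estimates,
    v = within-study variances. Returns pooled mean, its SE, and tau^2."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

rng = np.random.default_rng(0)
v = rng.uniform(0.05, 0.2, size=20)              # within-study variances
y = rng.normal(0.5, np.sqrt(v + 0.1))            # true effect 0.5, tau^2 = 0.1
mu, se, tau2 = dersimonian_laird(y, v)
```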

  6. Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2015-01-01

    Roč. 431, č. 1 (2015), s. 124-127 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords : Correlations * Power-law cross-correlations * Bivariate Hurst exponent * Spectrum coherence Subject RIV: AH - Economics Impact factor: 1.785, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452314.pdf

  7. The approximation of bivariate Chlodowsky-Szász-Kantorovich-Charlier-type operators

    OpenAIRE

    Agrawal, Purshottam Narain; Baxhaku, Behar; Chauhan, Ruchi

    2017-01-01

    Abstract In this paper, we introduce a bivariate Kantorovich variant of combination of Szász and Chlodowsky operators based on Charlier polynomials. Then, we study local approximation properties for these operators. Also, we estimate the approximation order in terms of Peetre’s K-functional and partial moduli of continuity. Furthermore, we introduce the associated GBS-case (Generalized Boolean Sum) of these operators and study the degree of approximation by means of the Lipschitz class of Bög...

  8. On minimum divergence adaptation of discrete bivariate distributions to given marginals

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2005-01-01

    Roč. 51, č. 1 (2005), s. 313-320 ISSN 0018-9448 R&D Projects: GA ČR GA201/02/1391; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : approximation of contingency tables * bivariate discrete distributions * minimization of divergences Subject RIV: BD - Theory of Information Impact factor: 2.183, year: 2005

  9. Genetic determinant of trabecular bone score (TBS) and bone mineral density: A bivariate analysis.

    Science.gov (United States)

    Ho-Pham, Lan T; Hans, Didier; Doan, Minh C; Mai, Linh D; Nguyen, Tuan V

    2016-11-01

    This study sought to estimate the extent of genetic influence on the variation in trabecular bone score (TBS). We found that genetic factors accounted for ~45% of the variance in TBS, and that the co-variation between TBS and bone density is partially determined by genetic factors. Trabecular bone score has emerged as an important predictor of fragility fracture, but factors underlying the individual differences in TBS have not been explored. In this study, we sought to determine the genetic contribution to the variation of TBS in the general population. The study included 556 women and 189 men from 265 families. The individuals' average age was 53 years (SD 11). We measured lumbar spine bone mineral density (BMD; Hologic Horizon) and then derived the TBS from the same Hologic scan from which BMD was derived. A biometric model was applied to the data to partition the variance of TBS into two components: one due to additive genetic factors, and one due to environmental factors. The index of heritability was estimated as the ratio of genetic variance to total variance of a trait. Bivariate genetic analysis was conducted to estimate the genetic correlation between TBS and BMD measurements. TBS was strongly correlated with lumbar spine BMD (r=0.73; P<0.001). On average, TBS was higher in men than in women after adjusting for age and height, both of which are significantly associated with TBS and lumbar spine BMD. The age- and height-adjusted index of heritability of TBS was 0.46 (95% CI, 0.39-0.54), which was not much different from that of LSBMD (0.44; 95% CI, 0.31-0.55). Moreover, the genetic correlation between TBS and LSBMD was 0.35 (95% CI, 0.21-0.46), and between TBS and femoral neck BMD was 0.21 (95% CI, 0.10-0.33). Approximately 45% of the variance in TBS is under genetic influence, and this effect magnitude is similar to that of lumbar spine BMD. This finding provides a scientific justification for the search for specific genetic variants that may be associated with TBS and fracture risk.
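
    A much simpler stand-in for the biometric model, Falconer's approximation h2 ≈ 2(rMZ - rDZ), shows how twin/family correlations translate into a heritability estimate. The data below are simulated with a true heritability of 0.46, matching the reported TBS estimate; the simulation design is an assumption, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(11)
n, h2 = 4000, 0.46          # pairs per zygosity; true heritability

def twin_correlation(shared_frac):
    """Simulate trait pairs whose additive genetic values share a fraction
    `shared_frac` of variance, then return the twin-pair correlation."""
    gs = rng.normal(size=n)  # shared additive genetic component
    g1 = np.sqrt(shared_frac) * gs + np.sqrt(1 - shared_frac) * rng.normal(size=n)
    g2 = np.sqrt(shared_frac) * gs + np.sqrt(1 - shared_frac) * rng.normal(size=n)
    t1 = np.sqrt(h2) * g1 + np.sqrt(1 - h2) * rng.normal(size=n)
    t2 = np.sqrt(h2) * g2 + np.sqrt(1 - h2) * rng.normal(size=n)
    return np.corrcoef(t1, t2)[0, 1]

r_mz = twin_correlation(1.0)   # MZ twins share all additive genetic effects
r_dz = twin_correlation(0.5)   # DZ twins share half on average
h2_hat = 2 * (r_mz - r_dz)     # Falconer's estimate, close to 0.46 here
```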

  10. A Stochastic Traffic Assignment Model Considering Differences in Passengers' Utility Functions

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper presents a framework for public transport assignment that builds on the probit-based model of Sheffi & Powell (1981 & 1982). Hereby, the problems with overlapping routes that occur in many public transport models can be avoided. In the paper, the probit-based model in its pure form....... This is both due to the probit model’s ability to describe overlapping routes and due to the many different weights and distributions that make it possible to calibrate the model. In practice, the many parameters might also be the methods main weakness, since this complicates the calibration....
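
    Probit-based loading in the spirit of Sheffi & Powell has no closed form, so route-choice probabilities are typically simulated. A hedged sketch under assumed route costs and an assumed error covariance, where the nonzero off-diagonal entry models two overlapping routes sharing a link:

```python
import numpy as np

rng = np.random.default_rng(9)
cost = np.array([10.0, 10.5, 12.0])     # route costs (e.g. minutes); illustrative
# Routes 1 and 2 overlap (share a link), so their utility errors are correlated.
cov = np.array([[4.0, 2.0, 0.0],
                [2.0, 4.0, 0.0],
                [0.0, 0.0, 4.0]])

# Monte Carlo probit: sample correlated utilities and count how often
# each route has the highest utility (lowest perceived cost).
draws = rng.multivariate_normal(-cost, cov, size=200_000)
shares = np.bincount(draws.argmax(axis=1), minlength=3) / len(draws)
```

Unlike a logit loading, the correlated errors automatically shift demand away from the overlapping pair toward the independent third route.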

  11. A comparison between multivariate and bivariate analysis used in marketing research

    Directory of Open Access Journals (Sweden)

    Constantin, C.

    2012-01-01

    Full Text Available This paper presents an instrumental research study conducted to compare the information given by two multivariate data analysis methods with that of the usual bivariate analysis. The outcomes of the research reveal that the multivariate methods sometimes use more information from a certain variable, but sometimes use only the part of the information considered most important for certain associations. For this reason, a researcher should use both categories of data analysis in order to obtain fully useful information.

  12. A COMPARISON OF SOME ROBUST BIVARIATE CONTROL CHARTS FOR INDIVIDUAL OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Moustafa Omar Ahmed Abu-Shawiesh

    2014-06-01

    Full Text Available This paper proposes and considers some bivariate control charts to monitor individual observations in statistical process control. Usual control charts that use mean and variance-covariance estimators are sensitive to outliers. We consider the following robust alternatives to the classical Hotelling's T2: T2MedMAD, T2MCD, and T2MVE. A simulation study has been conducted to compare the performance of these control charts. Two real-life data sets are analyzed to illustrate the application of these robust alternatives.
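
    As a hedged sketch of the idea behind a MedMAD-type chart (not the paper's exact estimator), a robust T2 statistic can use coordinate-wise medians for location and MADs for scale; the ordinary sample correlation is assumed here for the cross term:

```python
import numpy as np

def robust_t2_medmad(X):
    """Robust T2 per observation for bivariate data X (n x 2):
    median location, MAD-based scales, sample correlation for coupling."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) * 1.4826  # normal-consistency factor
    r = np.corrcoef(X, rowvar=False)[0, 1]
    S = np.array([[mad[0] ** 2, r * mad[0] * mad[1]],
                  [r * mad[0] * mad[1], mad[1] ** 2]])
    d = X - med
    # T2_i = d_i' S^{-1} d_i for every row i
    return np.einsum("ij,jk,ik->i", d, np.linalg.inv(S), d)

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
X[0] = [6.0, -6.0]            # planted outlier, against the correlation structure
t2 = robust_t2_medmad(X)      # the outlier gets by far the largest T2
```

Because medians and MADs resist the outlier, the estimator is not dragged toward it, which is exactly why robust charts flag such points more reliably than the classical chart.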

  13. The approximation of bivariate Chlodowsky-Szász-Kantorovich-Charlier-type operators

    Directory of Open Access Journals (Sweden)

    Purshottam Narain Agrawal

    2017-08-01

    Full Text Available Abstract In this paper, we introduce a bivariate Kantorovich variant of combination of Szász and Chlodowsky operators based on Charlier polynomials. Then, we study local approximation properties for these operators. Also, we estimate the approximation order in terms of Peetre’s K-functional and partial moduli of continuity. Furthermore, we introduce the associated GBS-case (Generalized Boolean Sum of these operators and study the degree of approximation by means of the Lipschitz class of Bögel continuous functions. Finally, we present some graphical examples to illustrate the rate of convergence of the operators under consideration.

  14. The approximation of bivariate Chlodowsky-Szász-Kantorovich-Charlier-type operators.

    Science.gov (United States)

    Agrawal, Purshottam Narain; Baxhaku, Behar; Chauhan, Ruchi

    2017-01-01

    In this paper, we introduce a bivariate Kantorovich variant of combination of Szász and Chlodowsky operators based on Charlier polynomials. Then, we study local approximation properties for these operators. Also, we estimate the approximation order in terms of Peetre's K-functional and partial moduli of continuity. Furthermore, we introduce the associated GBS-case (Generalized Boolean Sum) of these operators and study the degree of approximation by means of the Lipschitz class of Bögel continuous functions. Finally, we present some graphical examples to illustrate the rate of convergence of the operators under consideration.

  15. Bivariate tensor product (p, q)-analogue of Kantorovich-type Bernstein-Stancu-Schurer operators.

    Science.gov (United States)

    Cai, Qing-Bo; Xu, Xiao-Wei; Zhou, Guorong

    2017-01-01

    In this paper, we construct a bivariate tensor product generalization of Kantorovich-type Bernstein-Stancu-Schurer operators based on the concept of (p, q)-integers. We obtain moments and central moments of these operators, give the rate of convergence by using the complete modulus of continuity for the bivariate case and estimate a convergence theorem for the Lipschitz continuous functions. We also give some graphs and numerical examples to illustrate the convergence properties of these operators to certain functions.

  16. Inheritance of dermatoglyphic traits in twins: univariate and bivariate variance decomposition analysis.

    Science.gov (United States)

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2012-01-01

    Dermatoglyphic traits in a sample of twins were analyzed to estimate the resemblance between MZ and DZ twins and to evaluate the mode of inheritance using maximum likelihood-based variance decomposition analysis. The additive genetic variance component was significant in both sexes for four traits: PII, AB_RC, RC_HB, and ATD_L. AB_RC and RC_HB had significant sex differences in means, whereas PII and ATD_L did not. The results of the bivariate variance decomposition analysis revealed that PII and RC_HB have a significant correlation in both the genetic and residual components. A significant correlation in the additive genetic variance between AB_RC and ATD_L was observed. The same analysis for the female subsample in the three traits RBL, RBR and AB_DIS showed that the additive genetic component of RBR was significant and the sibling component of AB_DIS was not, while the others could not be constrained to zero. The additive, sibling, and residual components were significantly correlated between each pair of traits, as revealed by the bivariate variance decomposition analysis.

  17. Xp21 contiguous gene syndromes: Deletion quantitation with bivariate flow karyotyping allows mapping of patient breakpoints

    Energy Technology Data Exchange (ETDEWEB)

    McCabe, E.R.B.; Towbin, J.A. (Baylor College of Medicine, Houston, TX (United States)); Engh, G. van den; Trask, B.J. (Lawrence Livermore National Lab., CA (United States))

    1992-12-01

    Bivariate flow karyotyping was used to estimate the deletion sizes for a series of patients with Xp21 contiguous gene syndromes. The deletion estimates were used to develop an approximate scale for the genomic map in Xp21. The bivariate flow karyotype results were compared with clinical and molecular genetic information on the extent of the patients' deletions, and these various types of data were consistent. The resulting map spans >15 Mb, from the telomeric interval between DXS41 (99-6) and DXS68 (1-4) to a position centromeric to the ornithine transcarbamylase locus. The deletion sizing was considered to be accurate to ±1 Mb. The map provides information on the relative localization of genes and markers within this region. For example, the map suggests that the adrenal hypoplasia congenita and glycerol kinase genes are physically close to each other, are within 1-2 Mb of the telomeric end of the Duchenne muscular dystrophy (DMD) gene, and are nearer to the DMD locus than to the more distal marker DXS28 (C7). Information of this type is useful in developing genomic strategies for positional cloning in Xp21. These investigations demonstrate that the DNA from patients with Xp21 contiguous gene syndromes can be valuable reagents, not only for ordering loci and markers but also for providing an approximate scale to the map of the Xp21 region surrounding DMD. 44 refs., 3 figs.

  18. A behavioral choice model of the use of car-sharing and ride-sourcing services

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Felipe F.; Lavieri, Patrícia S.; Garikapati, Venu M.; Astroza, Sebastian; Pendyala, Ram M.; Bhat, Chandra R.

    2017-07-26

    There are a number of disruptive mobility services that are increasingly finding their way into the marketplace. Two key examples of such services are car-sharing services and ride-sourcing services. In an effort to better understand the influence of various exogenous socio-economic and demographic variables on the frequency of use of ride-sourcing and car-sharing services, this paper presents a bivariate ordered probit model estimated on a survey data set derived from the 2014-2015 Puget Sound Regional Travel Study. Model estimation results show that users of these services tend to be young, well-educated, higher-income, working individuals residing in higher-density areas. There are significant interaction effects reflecting the influence of children and the built environment on disruptive mobility service usage. The model developed in this paper provides key insights into factors affecting market penetration of these services, and can be integrated in larger travel forecasting model systems to better predict the adoption and use of mobility-on-demand services.
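
    The likelihood of a bivariate ordered probit is assembled from rectangle probabilities of a standard bivariate normal, one rectangle per observed pair of ordered frequency categories. A minimal sketch of that building block; the thresholds, linear indices and correlation below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def biv_rect_prob(lo, hi, rho):
    """P(lo[0] < Z1 <= hi[0], lo[1] < Z2 <= hi[1]) for a standard bivariate
    normal with correlation rho, via inclusion-exclusion on the joint CDF."""
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=[0, 0], cov=cov)
    return (F(hi[0], hi[1]) - F(lo[0], hi[1])
            - F(hi[0], lo[1]) + F(lo[0], lo[1]))

# Probability that both latent usage propensities fall in the middle ordered
# category, given hypothetical thresholds and linear indices x'b.
cuts1, cuts2 = (-1.0, 1.0), (-0.5, 1.5)
xb1, xb2, rho = 0.2, -0.1, 0.4
p = biv_rect_prob((cuts1[0] - xb1, cuts2[0] - xb2),
                  (cuts1[1] - xb1, cuts2[1] - xb2), rho)
```

Summing the log of such probabilities over observations, with outer categories using infinite bounds, gives the log-likelihood the model maximizes over coefficients, thresholds, and rho.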

  19. Obtaining DDF Curves of Extreme Rainfall Data Using Bivariate Copula and Frequency Analysis

    DEFF Research Database (Denmark)

    Sadri, Sara; Madsen, Henrik; Mikkelsen, Peter Steen

    2009-01-01

    , situated near Copenhagen in Denmark. For rainfall extracted using method 2, the marginal distribution of depth was found to fit the Generalized Pareto distribution while duration was found to fit the Gamma distribution, using the method of L-moments. The volume was fit with a generalized Pareto...... with duration for a given return period and name them DDF (depth-duration-frequency) curves. The copula approach does not assume the rainfall variables are independent or jointly normally distributed. Rainfall series are extracted in three ways: (1) by maximum mean intensity; (2) by depth and duration...... distribution and the duration was fit with a Pearson type III distribution for rainfall extracted using method 3. The Clayton copula was found to be appropriate for bivariate analysis of rainfall depth and duration for both methods 2 and 3. DDF curves derived using the Clayton copula for depth and duration...
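
    The copula machinery can be sketched as follows: draw dependent uniforms from a Clayton copula by conditional inversion, then impose a generalized Pareto depth margin and a Gamma duration margin through their quantile functions, as in the abstract. All parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genpareto, gamma

def clayton_sample(n, theta, rng):
    """Draw (u, v) from a Clayton copula by conditional inversion:
    v = C_{2|1}^{-1}(t | u) for t ~ U(0, 1)."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    v = (u ** (-theta) * (t ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

theta = 2.0                       # Kendall's tau = theta / (theta + 2) = 0.5
rng = np.random.default_rng(42)
u, v = clayton_sample(20_000, theta, rng)

# Map uniform margins onto hypothetical depth/duration distributions.
depth = genpareto.ppf(u, c=0.1, scale=5.0)   # rainfall depth margin
duration = gamma.ppf(v, a=2.0, scale=3.0)    # rainfall duration margin
```

Quantiles of depth conditional on duration from such joint samples are what DDF curves summarize for each return period.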

  20. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2014-01-01

    Full Text Available This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables are presented to show the order of accuracy of the method, convergence graphs to verify its convergence, and error graphs to show the excellent agreement between the results from this study and the known results from the literature.
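
    The spectral collocation ingredient can be illustrated with the standard Chebyshev differentiation matrix (Trefethen's classic construction), which is what turns the quasilinearized PDE into algebraic equations at the Gauss-Lobatto nodes:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1];
    (D @ f(x)) approximates f'(x) with spectral accuracy for smooth f."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal by row-sum trick
    return D, x

D, x = cheb(16)
# Spectral accuracy check: differentiate sin at the nodes.
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

In the bivariate scheme, one such matrix acts on the space variable and a second (on the time nodes) on the other, with quasilinearization handling the nonlinearity between iterations.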

  1. Bivariate tensor product (p, q)-analogue of Kantorovich-type Bernstein-Stancu-Schurer operators

    Directory of Open Access Journals (Sweden)

    Qing-Bo Cai

    2017-11-01

    Full Text Available Abstract In this paper, we construct a bivariate tensor product generalization of Kantorovich-type Bernstein-Stancu-Schurer operators based on the concept of (p, q)-integers. We obtain moments and central moments of these operators, give the rate of convergence by using the complete modulus of continuity for the bivariate case and estimate a convergence theorem for the Lipschitz continuous functions. We also give some graphs and numerical examples to illustrate the convergence properties of these operators to certain functions.
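
    As a hedged numerical illustration of this kind of approximation result, the classical bivariate Bernstein operator (a simpler relative of the (p, q)-Kantorovich variant) applied to a convex polynomial, for which the pointwise error (x(1-x) + y(1-y))/n decays at the known rate:

```python
import numpy as np
from math import comb

def bernstein2(f, n, m, x, y):
    """Classical bivariate Bernstein operator B_{n,m}f(x, y) on [0,1]^2,
    evaluated as a tensor product of univariate Bernstein weights."""
    i = np.arange(n + 1)
    j = np.arange(m + 1)
    bx = np.array([comb(n, k) for k in i]) * x ** i * (1 - x) ** (n - i)
    by = np.array([comb(m, k) for k in j]) * y ** j * (1 - y) ** (m - j)
    F = f(i[:, None] / n, j[None, :] / m)   # samples on the (n+1) x (m+1) grid
    return bx @ F @ by

f = lambda u, v: u ** 2 + v ** 2            # convex test function
approx = [bernstein2(f, n, n, 0.3, 0.7) for n in (5, 20, 80)]
errors = [abs(a - f(0.3, 0.7)) for a in approx]
# For this f the error is exactly (x(1-x) + y(1-y)) / n, i.e. 0.42 / n here.
```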

  2. Who Is Overeducated and Why? Probit and Dynamic Mixed Multinomial Logit Analyses of Vertical Mismatch in East and West Germany

    Science.gov (United States)

    Boll, Christina; Leppin, Julian Sebastian; Schömann, Klaus

    2016-01-01

    Overeducation potentially signals a productivity loss. With Socio-Economic Panel data from 1984 to 2011 we identify drivers of educational mismatch for East and West medium and highly educated Germans. Addressing measurement error, state dependence and unobserved heterogeneity, we run dynamic mixed multinomial logit models for three different…

  3. Long-lead station-scale prediction of hydrological droughts in South Korea based on bivariate pattern-based downscaling

    Science.gov (United States)

    Sohn, Soo-Jin; Tam, Chi-Yung

    2016-05-01

    Capturing climatic variations in boreal winter to spring (December-May) is essential for properly predicting droughts in South Korea. This study investigates the variability and predictability of the South Korean climate during this extended season, based on observations from 60 station locations and multi-model ensemble (MME) hindcast experiments (1983/1984-2005/2006) archived at the APEC Climate Center (APCC). Multivariate empirical orthogonal function (EOF) analysis results based on observations show that the first two leading modes of winter-to-spring precipitation and temperature variability, which together account for ~80 % of the total variance, are characterized by regional-scale anomalies covering the whole South Korean territory. These modes were also closely related to some of the recurrent large-scale circulation changes in the northern hemisphere during the same season. Consistent with the above, examination of the standardized precipitation evapotranspiration index (SPEI) indicates that drought conditions in South Korea tend to be accompanied by regional-to-continental-scale circulation anomalies over East Asia to the western north Pacific. Motivated by the aforementioned findings on the spatial-temporal coherence among station-scale precipitation and temperature anomalies, a new bivariate and pattern-based downscaling method was developed. The novelty of this method is that precipitation and temperature data were first filtered using multivariate EOFs to enhance their spatial-temporal coherence, before being linked to large-scale circulation variables using canonical correlation analysis (CCA). To test its applicability and to investigate its related potential predictability, a perfect empirical model was first constructed with observed datasets as predictors. Next, a model output statistics (MOS)-type hybrid dynamical-statistical model was developed, using products from nine one-tier climate models as inputs. It was found that, with model sea
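
    The downscaling step links the EOF-filtered station fields to large-scale circulation predictors through canonical correlation analysis (CCA). A minimal, self-contained CCA sketch (QR plus SVD, no external ML library) on synthetic data carrying one shared signal; the data-generating setup is an assumption for illustration:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column spaces of X and Y:
    orthonormalize each centered block by QR, then take the singular
    values of the cross-product of the orthonormal bases."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)   # descending order

rng = np.random.default_rng(3)
z = rng.normal(size=(500, 1))                       # one shared climate-like signal
X = np.hstack([z + 0.5 * rng.normal(size=(500, 1)), # "station" block
               rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.5 * rng.normal(size=(500, 1)), # "circulation" block
               rng.normal(size=(500, 2))])
rho = canonical_correlations(X, Y)  # leading value picks up the shared signal
```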

  4. GIS-based bivariate statistical techniques for groundwater potential analysis (an example of Iran)

    Science.gov (United States)

    Haghizadeh, Ali; Moghaddam, Davoud Davoudi; Pourghasemi, Hamid Reza

    2017-12-01

    Groundwater potential analysis provides a better understanding of the hydrological settings of different regions. This study shows the capability of two GIS-based data-driven bivariate techniques, namely the statistical index (SI) and Dempster-Shafer theory (DST), to analyze groundwater potential in the Broujerd region of Iran. The research was done using 11 groundwater conditioning factors and 496 spring positions. Based on the groundwater potential maps (GPMs) of the SI and DST methods, 24.22% and 23.74% of the study area is covered by the poor groundwater potential zone, and 43.93% and 36.3% of the Broujerd region is covered by the good and very good potential zones, respectively. The validation of outcomes showed that the area under the curve (AUC) of the SI and DST techniques is 81.23% and 79.41%, respectively, indicating that the SI method performs slightly better than the DST technique. Therefore, the SI and DST methods are advantageous for analyzing groundwater capacity and scrutinizing the complicated relation between groundwater occurrence and groundwater conditioning factors, which permits investigation of both systemic and stochastic uncertainty. Finally, these techniques are very beneficial for groundwater potential analysis and can be practical for water-resource management experts.
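
    The statistical index (SI) weight of a conditioning-factor class is the log ratio of the spring density within that class to the average spring density over the study area. A hedged sketch on synthetic raster data (the class labels and spring rates are invented for illustration):

```python
import numpy as np

def statistical_index(class_ids, spring_mask, n_classes):
    """SI_c = ln( (springs in class c / all springs) /
                  (cells in class c / all cells) );
    classes with no springs are set to 0 here as a simplifying convention."""
    si = np.zeros(n_classes)
    total_cells = class_ids.size
    total_springs = spring_mask.sum()
    for c in range(n_classes):
        in_c = class_ids == c
        dens = (spring_mask[in_c].sum() / total_springs) / (in_c.sum() / total_cells)
        si[c] = np.log(dens) if dens > 0 else 0.0
    return si

rng = np.random.default_rng(5)
class_ids = rng.integers(0, 3, size=10_000)      # e.g. three slope classes
p = np.where(class_ids == 0, 0.10, 0.02)         # springs favor class 0
spring_mask = rng.uniform(size=10_000) < p
si = statistical_index(class_ids, spring_mask, 3)  # class 0 gets the highest weight
```

Summing such class weights across all conditioning factors, cell by cell, yields the groundwater potential map that the AUC then validates.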

  5. Perceived social support and academic achievement: cross-lagged panel and bivariate growth curve analyses.

    Science.gov (United States)

    Mackinnon, Sean P

    2012-04-01

    As students transition to post-secondary education, they experience considerable stress and declines in academic performance. Perceived social support is thought to improve academic achievement by reducing stress. Longitudinal designs with three or more waves are needed in this area because they permit stronger causal inferences and help disentangle the direction of relationships. This study uses a cross-lagged panel and a bivariate growth curve analysis with a three-wave longitudinal design. Participants include 10,445 students (56% female; 12.6% born outside of Canada) transitioning to post-secondary education from ages 15-19. Self-report measures of academic achievement and a generalized measure of perceived social support were used. An increase in average relative standing in academic achievement predicted an increase in average relative standing on perceived social support 2 years later, but the reverse was not true. High levels of perceived social support at age 15 did not protect against declines in academic achievement over time. In sum, perceived social support appears to have no bearing on adolescents' future academic performance, despite commonly held assumptions of its importance.

  6. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    Science.gov (United States)

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subjects has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5%, with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variation in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  7. Bivariate flow cytometric analysis and sorting of different types of maize starch grains.

    Science.gov (United States)

    Zhang, Xudong; Feng, Jiaojiao; Wang, Heng; Zhu, Jianchu; Zhong, Yuyue; Liu, Linsan; Xu, Shutu; Zhang, Renhe; Zhang, Xinghua; Xue, Jiquan; Guo, Dongwei

    2018-02-01

    Particle-size distribution, granular structure, and composition significantly affect the physicochemical properties, rheological properties, and nutritional function of starch. Flow cytometry and flow sorting are widely considered convenient and efficient ways of classifying and separating natural biological particles or other substances into subpopulations, based on the differential response of each component to stimulation by a light beam; the results allow for correlation analysis of parameters. In this study, different types of starches isolated from waxy maize, sweet maize, high-amylose maize, pop maize, and normal maize were initially classified into various subgroups by flow cytometry and then collected through flow sorting to observe their morphology and particle-size distribution. The results showed that a 0.25% Gelzan solution served as an optimal reagent for keeping individual starch particles homogeneously dispersed in suspension for a relatively long time. The bivariate flow cytometric population distributions indicated that the starches of normal maize, sweet maize, and pop maize were divided into two subgroups, whereas high-amylose maize starch had only one subgroup. Waxy maize starch, conversely, showed three subpopulations. The subgroups sorted by flow cytometry were determined and verified in terms of morphology and granule size by scanning electron microscopy and a laser particle-size distribution analyzer. The results showed that flow cytometry can be regarded as a novel method for classifying and sorting starch granules. © 2017 International Society for Advancement of Cytometry.

  8. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Directory of Open Access Journals (Sweden)

    Jakob Nikolas Kather

    Full Text Available Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
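
    The color-deconvolution step can be sketched with the commonly quoted Ruifrok-Johnston stain vectors for an H-DAB image; treat the vectors and the two sample pixels below as assumptions for illustration, not values from the paper:

```python
import numpy as np

# Ruifrok-Johnston style stain OD vectors for hematoxylin and DAB; the third
# (residual) channel is their normalized cross product.
h_od = np.array([0.650, 0.704, 0.286])
dab_od = np.array([0.268, 0.570, 0.776])
resid = np.cross(h_od, dab_od)
M = np.stack([h_od, dab_od, resid / np.linalg.norm(resid)])  # rows = stains

# Two hypothetical pixels: a light purple (hematoxylin-like) and a brown
# (DAB-like) one, as RGB in [0, 1].
rgb = np.array([[[200, 160, 220], [120, 90, 60]]], dtype=float) / 255.0

# Beer-Lambert: optical density is log-absorbance per channel; stain
# concentrations follow from unmixing OD against the stain matrix.
od = -np.log10(np.clip(rgb, 1e-6, 1.0))
stains = od.reshape(-1, 3) @ np.linalg.pinv(M)   # columns: H, DAB, residual
```

Once an image is unmixed this way, the extracted per-stain concentrations can be re-rendered through any optimized bivariate color map.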

  9. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Science.gov (United States)

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  10. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez

    2011-01-01

The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety, and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, the set of input variables of an artificial neural network that monitors the greatest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, we selected as the input set of the neural network the variables nuclear power, primary circuit flow rate, control/safety rod position, and pressure difference across the reactor core, because almost all of the monitored variables are related to these variables or result from the interaction of two or more of them. The nuclear power is related to the rise and fall of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence both the amount of radiation and the temperatures; the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)
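
The variable-screening idea in this record (covering many monitored signals with a few correlated inputs) can be sketched with plain pairwise Pearson correlations. The signal names, the 0.5 threshold, and the synthetic data below are all hypothetical; the record's actual analysis additionally uses canonical correlation, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant signals: two candidate inputs, one variable driven by
# them, and one unrelated signal.
n = 500
power = rng.normal(size=n)
flow = rng.normal(size=n)
temp_out = 0.8 * power + 0.5 * flow + 0.1 * rng.normal(size=n)
noise_var = rng.normal(size=n)

data = np.column_stack([power, flow, temp_out, noise_var])
names = ["power", "flow", "temp_out", "noise_var"]

# Bivariate (pairwise Pearson) correlation matrix.
R = np.corrcoef(data, rowvar=False)

# A monitored variable is "covered" by the input set {power, flow} if it is
# strongly correlated with at least one input (threshold is illustrative).
inputs = [0, 1]
covered = [names[j] for j in range(len(names))
           if j not in inputs and max(abs(R[i, j]) for i in inputs) > 0.5]
```

Here `covered` picks out `temp_out`, mirroring how a small input set can stand in for many correlated plant variables.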

  11. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez, E-mail: valter.costa@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The monitoring of variables and diagnosis of sensor fault in nuclear power plants or processes industries is very important because a previous diagnosis allows the correction of the fault and, like this, to prevent the production stopped, improving operator's security and it's not provoking economics losses. The objective of this work is to build a set, using bivariate correlation and canonical correlation, which will be the set of input variables of an artificial neural network to monitor the greater number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, for the input set of neural network we selected the variables: nuclear power, primary circuit flow rate, control/safety rod position and difference in pressure in the core of the reactor, because almost whole of monitoring variables have relation with the variables early described or its effect can be result of the interaction of two or more. The nuclear power is related to the increasing and decreasing of temperatures as well as the amount radiation due fission of the uranium; the rods are controls of power and influence in the amount of radiation and increasing and decreasing of temperatures; the primary circuit flow rate has the function of energy transport by removing the nucleus heat. An artificial neural network was trained and the results were satisfactory since the IEA-R1 Data Acquisition System reactor monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)

  12. VolHOG: a volumetric object recognition approach based on bivariate histograms of oriented gradients for vertebra detection in cervical spine MRI.

    Science.gov (United States)

    Daenzer, Stefan; Freitag, Stefan; von Sachsen, Sandra; Steinke, Hanno; Groll, Mathias; Meixensberger, Jürgen; Leimert, Mario

    2014-08-01

The automatic recognition of vertebrae in volumetric images is an important step toward automatic spinal diagnosis and therapy support systems. Many applications, such as the detection of pathologies and segmentation, would benefit from automatic vertebra detection; one example is the initialization of local vertebral segmentation methods, eliminating the need for manual initialization by a human operator and thereby streamlining the clinical workflow. However, automatic vertebra recognition in magnetic resonance (MR) images is a challenging task due to image noise, pathological deformations of the spine, and contrast variations. This work presents a fully automatic algorithm for 3D cervical vertebra detection in MR images. We propose a machine learning method for cervical vertebra detection based on new features combined with a linear support vector machine for classification. An algorithm for bivariate gradient orientation histogram generation from three-dimensional raster image data is introduced, which allows three-dimensional objects to be described by the proposed bivariate histograms. A detailed performance evaluation on 21 T2-weighted MR images of the cervical vertebral region is given. A single model for cervical vertebrae C3-C7 is generated and evaluated; the results show that this generic model performs equally well for each of the vertebrae C3-C7. The algorithm's performance is also evaluated on images containing various levels of artificial noise, and the results indicate that it achieves good results despite the presence of severe image noise. The proposed detection method delivers accurate locations of cervical vertebrae in MR images which can be used in diagnosis and therapy. To achieve comparability with the results of future work, the authors follow an open data approach by making the image dataset publicly available.
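
A bivariate histogram of oriented gradients for volumetric data, as named in this record, can be sketched by binning the two angles (azimuth and elevation) of 3D gradient directions, weighted by gradient magnitude. This is a simplified stand-in under assumed parameters; it does not reproduce the paper's cell/block layout or normalization scheme.

```python
import numpy as np

def bivariate_hog(volume, bins=(8, 4)):
    """2D histogram over (azimuth, elevation) of 3D gradient directions,
    weighted by gradient magnitude. A simplified sketch of the idea only."""
    g0, g1, g2 = np.gradient(volume.astype(float))      # gradients along z, y, x
    mag = np.sqrt(g0**2 + g1**2 + g2**2)
    azimuth = np.arctan2(g1, g2)                        # in [-pi, pi]
    elevation = np.arctan2(g0, np.sqrt(g1**2 + g2**2))  # in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth.ravel(), elevation.ravel(), bins=bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
        weights=mag.ravel())
    total = hist.sum()
    return hist / total if total > 0 else hist          # L1 normalization

# A linear ramp along x: all gradient energy falls into a single bin.
vol = np.tile(np.arange(16.0), (16, 16, 1))
h = bivariate_hog(vol)
```

For the ramp volume, the whole (normalized) mass lands in the bin containing azimuth 0 and elevation 0.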

  13. A non-parametric conditional bivariate reference region with an application to height/weight measurements on normal girls

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2009-01-01

A conceptually simple two-dimensional conditional reference curve is described. The curve gives a decision basis for determining whether a bivariate response from an individual is "normal" or "abnormal" when taking into account that a third (conditioning) variable may influence the bivariate response. The reference curve is not only characterized analytically but also by geometric properties that are easily communicated to medical doctors - the users of such curves. The reference curve estimator is completely non-parametric, so no distributional assumptions are needed about the two-dimensional response. An example that will serve to motivate and illustrate the reference is the study of the height/weight distribution of 7-8-year-old Danish school girls born in 1930, 1950, or 1970.

  14. Predicting the Size of Sunspot Cycle 24 on the Basis of Single- and Bi-Variate Geomagnetic Precursor Methods

    Science.gov (United States)

    Wilson, Robert M.; Hathaway, David H.

    2009-01-01

Examined are single- and bi-variate geomagnetic precursors for predicting the maximum amplitude (RM) of a sunspot cycle several years in advance. The best single-variate fit is one based on the average of the ap index 36 mo prior to cycle minimum occurrence (E(Rm)), having a coefficient of correlation (r) equal to 0.97 and a standard error of estimate (se) equal to 9.3. Presuming cycle 24 not to be a statistical outlier and its minimum in March 2008, the fit suggests cycle 24's RM to be about 69 +/- 20 (the 90% prediction interval). The weighted mean prediction of 11 statistically important single-variate fits is 116 +/- 34. The best bi-variate fit is one based on the maximum and minimum values of the 12-mma of the ap index; i.e., APM# and APm*, where # means the value post-E(RM) for the preceding cycle and * means the value in the vicinity of cycle minimum, having r = 0.98 and se = 8.2. It predicts cycle 24's RM to be about 92 +/- 27. The weighted mean prediction of 22 statistically important bi-variate fits is 112 +/- 32. Thus, cycle 24's RM is expected to lie somewhere within the range of about 82 to 144. Also examined are the late-cycle 23 behaviors of geomagnetic indices and solar wind velocity in comparison to the mean behaviors of cycles 20-23 and the geomagnetic indices of cycle 14 (RM = 64.2), the weakest sunspot cycle of the modern era.
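
A bi-variate precursor fit of the kind quoted (RM regressed on two ap-index statistics, with a standard error of estimate se) can be sketched with ordinary least squares. The data below are synthetic and the coefficients and precursor values are hypothetical, not the record's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a precursor table: two geomagnetic statistics
# (apm_max, apm_min) and the resulting cycle maximum RM.
n = 12
apm_max = rng.uniform(10, 30, n)
apm_min = rng.uniform(5, 15, n)
rm = 3.0 * apm_max + 2.0 * apm_min + 10 + rng.normal(0, 5, n)

# Bi-variate OLS fit: RM = b0 + b1*apm_max + b2*apm_min.
X = np.column_stack([np.ones(n), apm_max, apm_min])
beta, *_ = np.linalg.lstsq(X, rm, rcond=None)

# Standard error of estimate (se), as quoted for the fits in the record.
resid = rm - X @ beta
se = np.sqrt(resid @ resid / (n - 3))

# Point prediction for a new cycle with given precursor values.
rm_pred = beta @ np.array([1.0, 20.0, 10.0])
```

A full treatment would also report r and a 90% prediction interval, as the record does.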

  15. Powerful bivariate genome-wide association analyses suggest the SOX6 gene influencing both obesity and osteoporosis phenotypes in males.

    Directory of Open Access Journals (Sweden)

    Yao-Zhong Liu

    2009-08-01

Full Text Available Current genome-wide association studies (GWAS) are normally implemented in a univariate framework and analyze different phenotypes in isolation. This univariate approach ignores the potential genetic correlation between important disease traits, and it is therefore difficult for such an approach to detect pleiotropic genes, which may exist for obesity and osteoporosis, two common diseases of major public health importance that are closely correlated genetically. To identify such pleiotropic genes and the key mechanistic links between the two diseases, we here performed the first bivariate GWAS of obesity and osteoporosis. We searched for genes underlying co-variation of the obesity phenotype, body mass index (BMI), with the osteoporosis risk phenotype, hip bone mineral density (BMD), scanning approximately 380,000 SNPs in 1,000 unrelated homogeneous Caucasians, including 499 males and 501 females. We identified in the male subjects two SNPs in intron 1 of the SOX6 (SRY-box 6) gene, rs297325 and rs4756846, which were bivariately associated with both BMI and hip BMD, achieving p values of 6.82x10(-7) and 1.47x10(-6), respectively. The two SNPs ranked at the top in significance for bivariate association with BMI and hip BMD in the male subjects among all the approximately 380,000 SNPs examined genome-wide. The two SNPs were replicated in a Framingham Heart Study (FHS) cohort containing 3,355 Caucasians (1,370 males and 1,985 females) from 975 families. In the FHS male subjects, the two SNPs achieved p values of 0.03 and 0.02, respectively, for bivariate association with BMI and femoral neck BMD. Interestingly, SOX6 was previously found to be essential to both cartilage formation/chondrogenesis and obesity-related insulin resistance, suggesting the gene's dual role in both bone and fat. Our findings, together with the prior biological evidence, suggest the SOX6 gene's importance in co-regulation of obesity and osteoporosis.

  16. The Earnings Impact of Training Duration in a Developing Country. An Ordered Probit Selection Model of Colombia's "Servicio Nacional de Aprendizaje" (SENA).

    Science.gov (United States)

    Jimenez, Emmanuel; Kugler, Bernardo

    1987-01-01

    Estimates the earnings impact of an extensive inservice training program in the developing world, Colombia's Servicio Nacional de Aprendizaje (SENA), through a comparison of nongraduates' and graduates' earnings profiles. (JOW)

  17. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

Full Text Available A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit: a flexible form for correlation among alternatives together with a tractable closed-form expression; heterogeneity in alternative variances can also be addressed. Paths are compared in pairs, so that the strengths of the binary Probit can be fully exploited, and a Probit-based aggregation is used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution for incorporating the normal distribution in route choice models while retaining an analytical expression.
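
The binary-Probit building block this abstract leans on reduces to a normal CDF of the utility difference. A minimal stdlib sketch, assuming jointly normal error terms with the variances and covariance passed in:

```python
import math

def probit_choice_prob(v1, v2, var1=1.0, var2=1.0, cov=0.0):
    """P(route 1 chosen) in a binary probit model: utilities U_i = v_i + e_i
    with jointly normal errors, so U1 - U2 is normal with variance
    var1 + var2 - 2*cov and the choice probability is a normal CDF."""
    sd = math.sqrt(var1 + var2 - 2.0 * cov)
    z = (v1 - v2) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_equal = probit_choice_prob(5.0, 5.0)   # identical routes -> probability 0.5
p_better = probit_choice_prob(6.0, 5.0)  # higher systematic utility -> > 0.5
```

The pairwise comparisons mentioned in the abstract apply exactly this quantity to each pair of paths.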

  18. A bivariate space–time downscaler under space and time misalignment

    OpenAIRE

    Berrocal, Veronica J.; Gelfand, Alan E.; Holland, David M.

    2010-01-01

    Ozone and particulate matter, PM2.5, are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully-model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally ...

  19. The Bivariate (Complex) Fibonacci and Lucas Polynomials: An Historical Investigation with the Maple's Help

    Science.gov (United States)

    Alves, Francisco Regis Vieira; Catarino, Paula Maria Machado Cruz

    2016-01-01

Current research on the Fibonacci and Lucas sequences evidences the scientific vigor of both mathematical models, which continue to inspire and provide numerous specializations and generalizations, especially since the sixties. One current line of research and investigation on the Generalized Lucas Sequence involves its…

  20. Testing the specifications of parametric models using anchoring vignettes

    NARCIS (Netherlands)

    van Soest, A.H.O.; Vonkova, H.

    Comparing assessments on a subjective scale across countries or socio-economic groups is often hampered by differences in response scales across groups. Anchoring vignettes help to correct for such differences, either in parametric models (the compound hierarchical ordered probit (CHOPIT) model and

  1. Development and Implementation of a Telecommuting Evaluation Framework, and Modeling the Executive Telecommuting Adoption Process

    Science.gov (United States)

    Vora, V. P.; Mahmassani, H. S.

    2002-02-01

    This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data is examined through exploratory analysis and is compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate research.

  2. A new research paradigm for bivariate allometry: combining ANOVA and non-linear regression.

    Science.gov (United States)

    Packard, Gary C

    2018-04-06

    A novel statistical routine is presented here for exploring and comparing patterns of allometric variation in two or more groups of subjects. The routine combines elements of the analysis of variance (ANOVA) with non-linear regression to achieve the equivalent of an analysis of covariance (ANCOVA) on curvilinear data. The starting point is a three-parameter power equation to which a categorical variable has been added to identify membership by each subject in a specific group or treatment. The protocol differs from earlier ones in that different assumptions can be made about the form for random error in the full statistical model (i.e. normal and homoscedastic, normal and heteroscedastic, lognormal and heteroscedastic). The general equation and several modifications thereof were used to study allometric variation in field metabolic rates of marsupial and placental mammals. The allometric equations for both marsupials and placentals have an explicit, non-zero intercept, but the allometric exponent is higher in the equation for placentals than in that for marsupials. The approach followed here is extraordinarily versatile, and it has wider application in allometry than standard ANCOVA performed on logarithmic transformations. © 2018. Published by The Company of Biologists Ltd.
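
The three-parameter power equation y = a + b·x^c at the heart of this protocol is linear in (a, b) once the exponent c is fixed, so a simple profile fit (grid over c, linear least squares for a and b) recovers all three parameters. The sketch below uses synthetic data with made-up parameter values and is not Packard's exact routine (no ANOVA grouping term or alternative error structures).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic allometric data: y = a + b*x**c plus normal noise
# (coefficients chosen only for the demo).
a_true, b_true, c_true = 2.0, 0.7, 0.75
x = rng.uniform(1.0, 100.0, 200)
y = a_true + b_true * x**c_true + rng.normal(0, 0.5, 200)

def fit_power(x, y, c_grid):
    """Profile fit of y = a + b*x**c: for each c the model is linear in (a, b)."""
    best = None
    for c in c_grid:
        X = np.column_stack([np.ones_like(x), x**c])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], c)
    return best[1:]                       # (a_hat, b_hat, c_hat)

a_hat, b_hat, c_hat = fit_power(x, y, np.linspace(0.3, 1.2, 91))
```

Adding a categorical group term to the linear stage is what turns this into the ANCOVA-like comparison across groups that the record describes.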

  3. ABACUS: an entropy-based cumulative bivariate statistic robust to rare variants and different direction of genotype effect.

    Science.gov (United States)

    Di Camillo, Barbara; Sambo, Francesco; Toffolo, Gianna; Cobelli, Claudio

    2014-02-01

    In the past years, both sequencing and microarray have been widely used to search for relations between genetic variations and predisposition to complex pathologies such as diabetes or neurological disorders. These studies, however, have been able to explain only a small fraction of disease heritability, possibly because complex pathologies cannot be referred to few dysfunctional genes, but are rather heterogeneous and multicausal, as a result of a combination of rare and common variants possibly impairing multiple regulatory pathways. Rare variants, though, are difficult to detect, especially when the effects of causal variants are in different directions, i.e. with protective and detrimental effects. Here, we propose ABACUS, an Algorithm based on a BivAriate CUmulative Statistic to identify single nucleotide polymorphisms (SNPs) significantly associated with a disease within predefined sets of SNPs such as pathways or genomic regions. ABACUS is robust to the concurrent presence of SNPs with protective and detrimental effects and of common and rare variants; moreover, it is powerful even when few SNPs in the SNP-set are associated with the phenotype. We assessed ABACUS performance on simulated and real data and compared it with three state-of-the-art methods. When ABACUS was applied to type 1 and 2 diabetes data, besides observing a wide overlap with already known associations, we found a number of biologically sound pathways, which might shed light on diabetes mechanism and etiology. ABACUS is available at http://www.dei.unipd.it/∼dicamill/pagine/Software.html.

  4. Bivariate genome-wide association meta-analysis of pediatric musculoskeletal traits reveals pleiotropic effects at the SREBF1/TOM1L2 locus

    DEFF Research Database (Denmark)

    Medina-Gomez, Carolina; Kemp, John P; Dimou, Niki L

    2017-01-01

    Bone mineral density is known to be a heritable, polygenic trait whereas genetic variants contributing to lean mass variation remain largely unknown. We estimated the shared SNP heritability and performed a bivariate GWAS meta-analysis of total-body lean mass (TB-LM) and total-body less head bone...

  5. Bivariate genome-wide association meta-analysis of pediatric musculoskeletal traits reveals pleiotropic effects at the SREBF1/TOM1L2 locus

    NARCIS (Netherlands)

    M.C. Medina-Gomez (Carolina); J.P. Kemp (John); Dimou, N.L. (Niki L.); Kreiner, E. (Eskil); A. Chesi (Alessandra); B.S. Zemel (Babette S.); K. Bønnelykke (Klaus); Boer, C.G. (Cindy G.); T.S. Ahluwalia (Tarunveer Singh); H. Bisgaard; E. Evangelou (Evangelos); D.H.M. Heppe (Denise); Bonewald, L.F. (Lynda F.); Gorski, J.P. (Jeffrey P.); M. Ghanbari (Mohsen); S. Demissie (Serkalem); Duque, G. (Gustavo); M.T. Maurano (Matthew T.); D.P. Kiel (Douglas P.); Y.-H. Hsu (Yi-Hsiang); B.C.J. van der Eerden (Bram); Ackert-Bicknell, C. (Cheryl); S. Reppe (Sjur); K.M. Gautvik (Kaare); Raastad, T. (Truls); D. Karasik (David); J. van de Peppel (Jeroen); V.W.V. Jaddoe (Vincent); A.G. Uitterlinden (André); J.H. Tobias (Jon); S.F.A. Grant (Struan); Bagos, P.G. (Pantelis G.); D.M. Evans (David); F. Rivadeneira Ramirez (Fernando)

    2017-01-01

Bone mineral density is known to be a heritable, polygenic trait whereas genetic variants contributing to lean mass variation remain largely unknown. We estimated the shared SNP heritability and performed a bivariate GWAS meta-analysis of total-body lean mass (TB-LM) and total-body

  6. A comparison of the effect of 5-bromodeoxyuridine substitution on 33258 Hoechst- and DAPI-fluorescence of isolated chromosomes by bivariate flow karyotyping

    NARCIS (Netherlands)

    Buys, C. H.; Mesa, J.; van der Veen, A. Y.; Aten, J. A.

    1986-01-01

    Application of the fluorescent DNA-intercalator propidium iodide for stabilization of the mitotic chromosome structure during isolation of chromosomes from V79 Chinese hamster cells and subsequent staining with the fluorochromes 33258 Hoechst or DAPI allowed bivariate flow karyotyping of isolated

  7. An Almost Integration-free Approach to Ordered Response Models

    NARCIS (Netherlands)

    van Praag, B.M.S.; Ferrer-i-Carbonell, A.

    2006-01-01

In this paper we propose an alternative approach to the estimation of ordered response models. We show that the Probit-method may be replaced by a simple OLS-approach, called P(robit)OLS, without any loss of efficiency. This method can be generalized to the analysis of panel data. For large-scale

  8. Application of Health Belief Model for Promoting Behaviour Change ...

    African Journals Online (AJOL)

The study analyzes the factors influencing the conduct of HIV tests and risky behaviour change using the health belief model. The data were obtained from Nigeria's 2004 NLSS data and analyzed with descriptive statistics and Probit regression. Results show that 87.79% of the single youths were aware of HIV/AIDS, 3.34% ...

  9. Consideration sets, intentions and the inclusion of "don't know" in a two-stage model for voter choice

    NARCIS (Netherlands)

    Paap, R; van Nierop, E; van Heerde, HJ; Wedel, M; Franses, PH; Alsem, KJ

    2005-01-01

    We present a statistical model for voter choice that incorporates a consideration set stage and final vote intention stage. The first stage involves a multivariate probit (MVP) model to describe the probabilities that a candidate or a party gets considered. The second stage of the model is a

  10. Quasi-bivariate variational mode decomposition as a tool of scale analysis in wall-bounded turbulence

    Science.gov (United States)

    Wang, Wenkang; Pan, Chong; Wang, Jinjun

    2018-01-01

The identification and separation of multi-scale coherent structures is a critical task for the study of scale interaction in wall-bounded turbulence. Here, we propose a quasi-bivariate variational mode decomposition (QB-VMD) method to extract structures with various scales from an instantaneous two-dimensional (2D) velocity field which has only one primary dimension. This method is developed from the one-dimensional VMD algorithm proposed by Dragomiretskiy and Zosso (IEEE Trans Signal Process 62:531-544, 2014) to cope with a quasi-2D scenario. It poses the feature of length-scale bandwidth constraint along the decomposed dimension, together with the central frequency re-balancing along the non-decomposed dimension. The feasibility of this method is tested on both a synthetic flow field and a turbulent boundary layer at moderate Reynolds number (Re_τ = 3458) measured by 2D particle image velocimetry (PIV). Some other popular scale separation tools, including pseudo-bi-dimensional empirical mode decomposition (PB-EMD), bi-dimensional EMD (B-EMD) and proper orthogonal decomposition (POD), are also tested for comparison. Among all these methods, QB-VMD shows advantages in both scale characterization and energy recovery. More importantly, the mode mixing problem, which degrades the performance of EMD-based methods, is avoided or minimized in QB-VMD. Finally, QB-VMD analysis of the wall-parallel plane in the log layer (at y/δ = 0.12) of the studied turbulent boundary layer shows the coexistence of large- or very large-scale motions (LSMs or VLSMs) and inner-scaled structures, which can be fully decomposed in both physical and spectral domains.

  11. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  12. Tradeoffs in Crop Residue Utilization in Mixed Crop-Livestock Systems and Implications for Conservation Agriculture and Sustainable Land Management

    OpenAIRE

    Jaleta, Moti; Kassie, Menale; Shiferaw, Bekele A.

    2012-01-01

    Crop residue use for soil mulch and animal feed are the two major competing purposes and the basic source of fundamental challenge in conservation agriculture (CA) where residue retention on farm plots is one of the three CA principles. Using survey data from Kenya and applying bivariate ordered Probit and bivariate Tobit models, this paper analyzes the tradeoffs in maize residue use as soil mulch and livestock feed in mixed farming systems. Results show that both the proportion and quantity ...

  13. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between... certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
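
The latent-variable reading of a probit model described in this record can be illustrated by simulation: with y* = b·x + e and y = 1{y* > 0}, the latent correlation corr(x, y*) equals b/√(b²+1), while the correlation computed from the observed binary y is attenuated. A minimal sketch with assumed values b = 1 and standard normal x and e:

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent linear model behind a probit: y* = b*x + e, observed y = 1 if y* > 0.
n, b = 100_000, 1.0
x = rng.normal(size=n)
e = rng.normal(size=n)
y_star = b * x + e
y = (y_star > 0).astype(float)

# Latent correlation implied by the model: b / sqrt(b**2 + 1) ~= 0.707.
rho_latent = np.corrcoef(x, y_star)[0, 1]

# Correlation with the observed binary outcome is attenuated.
rho_observed = np.corrcoef(x, y)[0, 1]
```

The gap between `rho_latent` and `rho_observed` is exactly the kind of scale issue the record's derived-correlation argument addresses.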

  14. Certification of family forests: What influences owners’ awareness and participation?

    Science.gov (United States)

    Selmin F. Creamer; Keith A. Blatner; Brett J. Butler

    2012-01-01

    In the United States, 35% of the forestland is owned by family forest owners with approximately 0.2% of this land reported to be enrolled in a forest certification system. The current study was conducted to provide insights into factors influencing family forest owners’ decisions to certify their lands. The bivariate probit model with sample selection results suggests...

  15. Intimate Partner Violence in Colombia: Who Is at Risk?

    Science.gov (United States)

    Friedemann-Sanchez, Greta; Lovaton, Rodrigo

    2012-01-01

    The role that domestic violence plays in perpetuating poverty is often overlooked as a development issue. Using data from the 2005 Demographic Health Survey, this paper examines the prevalence of intimate partner violence in Colombia. Employing an intrahousehold bargaining framework and a bivariate probit model, it assesses the prevalence of and…

  16. Does the capital structure of firms influence their innovation strategies? Evidence from the European agri-food sector

    NARCIS (Netherlands)

    Materia, V.C.; Abduraupov, Rustam; Dries, L.K.E.; Pascucci, S.

    2015-01-01

    The paper investigates the relationship between companies’ innovation strategies and their financing strategies. Innovation strategies are distinguished as in-house and outsourcing. A bivariate probit model is implemented using cross-section data on 1,393 agri-food firms in seven EU countries.
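
Several records on this page fit bivariate probit models. The likelihood building block is the probability of each (y1, y2) outcome cell, expressed through the bivariate normal CDF. A sketch using SciPy, with the linear indices xb1, xb2 and the error correlation rho passed in as plain numbers:

```python
from scipy.stats import multivariate_normal

def biprobit_cell_probs(xb1, xb2, rho):
    """Outcome-cell probabilities for a bivariate probit: latent errors
    (e1, e2) are standard bivariate normal with correlation rho, and
    y_i = 1 if xb_i + e_i > 0. Returns P(1,1), P(1,0), P(0,1), P(0,0)."""
    cov = [[1.0, rho], [rho, 1.0]]
    cov_neg = [[1.0, -rho], [-rho, 1.0]]
    # P(e1 > -xb1, e2 > -xb2) = P(e1 < xb1, e2 < xb2) by symmetry;
    # flipping the sign of one latent flips the sign of the correlation.
    p11 = multivariate_normal.cdf([xb1, xb2], mean=[0, 0], cov=cov)
    p10 = multivariate_normal.cdf([xb1, -xb2], mean=[0, 0], cov=cov_neg)
    p01 = multivariate_normal.cdf([-xb1, xb2], mean=[0, 0], cov=cov_neg)
    p00 = multivariate_normal.cdf([-xb1, -xb2], mean=[0, 0], cov=cov)
    return p11, p10, p01, p00

probs = biprobit_cell_probs(0.3, -0.2, 0.5)
```

Maximum-likelihood estimation sums the log of the appropriate cell probability over observations; the rho parameter is what captures the correlation between the two binary decisions.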

  17. Association of Supply Type with Fecal Contamination of Source Water and Household Stored Drinking Water in Developing Countries: A Bivariate Meta-analysis

    OpenAIRE

    Shields, Katherine F.; Bain, Robert E.S.; Cronk, Ryan; Wright, Jim A.; Bartram, Jamie

    2015-01-01

    Background Access to safe drinking water is essential for health. Monitoring access to drinking water focuses on water supply type at the source, but there is limited evidence on whether quality differences at the source persist in water stored in the household. Objectives We assessed the extent of fecal contamination at the source and in household stored water (HSW) and explored the relationship between contamination at each sampling point and water supply type. Methods We performed a bivari...

  18. Bivariate genome-wide association meta-analysis of pediatric musculoskeletal traits reveals pleiotropic effects at the SREBF1/TOM1L2 locus.

    Science.gov (United States)

    Medina-Gomez, Carolina; Kemp, John P; Dimou, Niki L; Kreiner, Eskil; Chesi, Alessandra; Zemel, Babette S; Bønnelykke, Klaus; Boer, Cindy G; Ahluwalia, Tarunveer S; Bisgaard, Hans; Evangelou, Evangelos; Heppe, Denise H M; Bonewald, Lynda F; Gorski, Jeffrey P; Ghanbari, Mohsen; Demissie, Serkalem; Duque, Gustavo; Maurano, Matthew T; Kiel, Douglas P; Hsu, Yi-Hsiang; C J van der Eerden, Bram; Ackert-Bicknell, Cheryl; Reppe, Sjur; Gautvik, Kaare M; Raastad, Truls; Karasik, David; van de Peppel, Jeroen; Jaddoe, Vincent W V; Uitterlinden, André G; Tobias, Jonathan H; Grant, Struan F A; Bagos, Pantelis G; Evans, David M; Rivadeneira, Fernando

    2017-07-25

Bone mineral density is known to be a heritable, polygenic trait whereas genetic variants contributing to lean mass variation remain largely unknown. We estimated the shared SNP heritability and performed a bivariate GWAS meta-analysis of total-body lean mass (TB-LM) and total-body less head bone mineral density (TBLH-BMD) regions in 10,414 children. The estimated SNP heritability is 43% (95% CI: 34-52%) for TBLH-BMD, and 39% (95% CI: 30-48%) for TB-LM, with a shared genetic component of 43% (95% CI: 29-56%). We identify variants with pleiotropic effects in eight loci, including seven established bone mineral density loci: WNT4, GALNT3, MEPE, CPED1/WNT16, TNFSF11, RIN3, and PPP6R3/LRP5. Variants in the TOM1L2/SREBF1 locus exert opposing effects on TB-LM and TBLH-BMD, and have a stronger association with the former trait. We show that SREBF1 is expressed in murine and human osteoblasts, as well as in human muscle tissue. This is the first bivariate GWAS meta-analysis to demonstrate genetic factors with pleiotropic effects on bone mineral density and lean mass. Bone mineral density and lean skeletal mass are heritable traits. Here, Medina-Gomez and colleagues perform bivariate GWAS analyses of total body lean mass and bone mass density in children, and show genetic loci with pleiotropic effects on both traits.

  19. Bivariate genome-wide association meta-analysis of pediatric musculoskeletal traits reveals pleiotropic effects at the SREBF1/TOM1L2 locus

    DEFF Research Database (Denmark)

    Medina-Gomez, Carolina; Kemp, John P; Dimou, Niki L

    2017-01-01

    Bone mineral density is known to be a heritable, polygenic trait whereas genetic variants contributing to lean mass variation remain largely unknown. We estimated the shared SNP heritability and performed a bivariate GWAS meta-analysis of total-body lean mass (TB-LM) and total-body less head bone...... mineral density (TBLH-BMD) regions in 10,414 children. The estimated SNP heritability is 43% (95% CI: 34-52%) for TBLH-BMD, and 39% (95% CI: 30-48%) for TB-LM, with a shared genetic component of 43% (95% CI: 29-56%). We identify variants with pleiotropic effects in eight loci, including seven established...... as in human muscle tissue. This is the first bivariate GWAS meta-analysis to demonstrate genetic factors with pleiotropic effects on bone mineral density and lean mass.Bone mineral density and lean skeletal mass are heritable traits. Here, Medina-Gomez and colleagues perform bivariate GWAS analyses of total...

  20. Reliability Implications in Wood Systems of a Bivariate Gaussian-Weibull Distribution and the Associated Univariate Pseudo-truncated Weibull

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2014-01-01

    Two important wood properties are the modulus of elasticity (MOE) and the modulus of rupture (MOR). In the past, the statistical distribution of the MOE has often been modeled as Gaussian, and that of the MOR as lognormal or as a two- or three-parameter Weibull distribution. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior...

  1. Maximum likelihood estimation of the parameters of a bivariate Gaussian-Weibull distribution from machine stress-rated data

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; James W. Evans

    2016-01-01

    Two important wood properties are stiffness (modulus of elasticity, MOE) and bending strength (modulus of rupture, MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two- or three-parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of wood...

  2. Some Extensions of the Nerlove-Press Model.

    Science.gov (United States)

    1980-10-01

    REFERENCES: 1. Goodman, L. A., "A General Model for the Analysis of Surveys," American Journal of Sociology, Vol. 77, No. 6, May 1972, pp. 1035... Vol. 5, No. 4, pp. 525-545. 3. Muthén, B. (1979), "A Structural Probit Model with Latent Variables," Journal of the American Statistical Association...

  3. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference 6 becomes directly a parameter of the statistical model and the estimate d' and it's standard error becomes the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...

  4. Testing for Bivariate Spherical Symmetry

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.

    2010-01-01

    An omnibus test for spherical symmetry in R2 is proposed, employing localized empirical likelihood. The thus obtained test statistic is distribution-free under the null hypothesis. The asymptotic null distribution is established and critical values for typical sample sizes, as well as the

  5. Testing for bivariate spherical symmetry

    OpenAIRE

    Einmahl, J.H.J.; Gantner, M.

    2012-01-01

    An omnibus test for spherical symmetry in R2 is proposed, employing localized empirical likelihood. The thus obtained test statistic is distribution-free under the null hypothesis. The asymptotic null distribution is established and critical values for typical sample sizes, as well as the asymptotic ones, are presented. In a simulation study, the good performance of the test is demonstrated. Furthermore, a real data example is presented.

  6. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotonic piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
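
BIMOND itself is a FORTRAN-77 bivariate routine, but the univariate building block it extends — monotone piecewise cubic Hermite (PCHIP) interpolation — is available in SciPy, which makes the monotonicity-preservation property easy to demonstrate (the data values below are arbitrary):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a sharp bend: an ordinary cubic spline would overshoot
# here, while the monotone Hermite interpolant cannot.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 3.0, 4.0])

f = PchipInterpolator(x, y)
xi = np.linspace(0.0, 4.0, 201)
yi = f(xi)
print(bool(np.all(np.diff(yi) >= 0)))  # True: the interpolant stays monotone
```

The bivariate case handled by BIMOND applies the same Hermite construction along both mesh directions with derivative limiting to keep the surface monotone.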

  7. Testing for bivariate spherical symmetry

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.

    2012-01-01

    An omnibus test for spherical symmetry in R2 is proposed, employing localized empirical likelihood. The thus obtained test statistic is distribution free under the null hypothesis. The asymptotic null distribution is established and critical values for typical sample sizes, as well as the asymptotic

  8. Association of Supply Type with Fecal Contamination of Source Water and Household Stored Drinking Water in Developing Countries: A Bivariate Meta-analysis.

    Science.gov (United States)

    Shields, Katherine F; Bain, Robert E S; Cronk, Ryan; Wright, Jim A; Bartram, Jamie

    2015-12-01

    Access to safe drinking water is essential for health. Monitoring access to drinking water focuses on water supply type at the source, but there is limited evidence on whether quality differences at the source persist in water stored in the household. We assessed the extent of fecal contamination at the source and in household stored water (HSW) and explored the relationship between contamination at each sampling point and water supply type. We performed a bivariate random-effects meta-analysis of 45 studies, identified through a systematic review, that reported either the proportion of samples free of fecal indicator bacteria and/or individual sample bacteria counts for source and HSW, disaggregated by supply type. Water quality deteriorated substantially between source and stored water. The mean percentage of contaminated samples (noncompliance) at the source was 46% (95% CI: 33, 60%), whereas mean noncompliance in HSW was 75% (95% CI: 64, 84%). Water supply type was significantly associated with noncompliance at the source. Water (OR = 0.2; 95% CI: 0.1, 0.5) and HSW (OR = 0.3; 95% CI: 0.2, 0.8) from piped supplies had significantly lower odds of contamination compared with non-piped water, potentially due to residual chlorine. Piped water is less likely to be contaminated compared with other water supply types at both the source and in HSW. A focus on upgrading water services to piped supplies may help improve safety, including for those drinking stored water.

  9. Poisson versus threshold models for genetic analysis of clinical mastitis in US Holsteins.

    Science.gov (United States)

    Vazquez, A I; Weigel, K A; Gianola, D; Bates, D M; Perez-Cabal, M A; Rosa, G J M; Chang, Y M

    2009-10-01

    Typically, clinical mastitis is coded as the presence or absence of disease in a given lactation, and records are analyzed with either linear models or binary threshold models. Because the presence of mastitis may include cows with multiple episodes, there is a loss of information when counts are treated as binary responses. Poisson models are appropriate for random variables measured as the number of events, and although these models are used extensively in studying the epidemiology of mastitis, they have rarely been used for studying the genetic aspects of mastitis. Ordinal threshold models are pertinent for ordered categorical responses; although one can hypothesize that the number of clinical mastitis episodes per animal reflects a continuous underlying increase in mastitis susceptibility, these models have rarely been used in genetic analysis of mastitis. The objective of this study was to compare probit, Poisson, and ordinal threshold models for the genetic evaluation of US Holstein sires for clinical mastitis. Mastitis was measured as a binary trait or as the number of mastitis cases. Data from 44,908 first-parity cows recorded in on-farm herd management software were gathered, edited, and processed for the present study. The cows were daughters of 1,861 sires, distributed over 94 herds. Predictive ability was assessed via a 5-fold cross-validation using 2 loss functions: mean squared error of prediction (MSEP) as the end point and a cost difference function. The heritability estimates were 0.061 for mastitis measured as a binary trait in the probit model and 0.085 and 0.132 for the number of mastitis cases in the ordinal threshold and Poisson models, respectively; because of scale differences, only the probit and ordinal threshold models are directly comparable. Among healthy animals, MSEP was smallest for the probit model, and the cost function was smallest for the ordinal threshold model. Among diseased animals, MSEP and the cost function were smallest...
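
The information loss the abstract describes — counts collapsed to presence/absence — can be made concrete with simulated data (the rate value is a hypothetical stand-in). Under a Poisson model with rate λ, the binary coding only identifies P(at least one case) = 1 − exp(−λ):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.3                                # hypothetical mean cases per lactation
cases = rng.poisson(lam, size=100_000)   # count coding: number of mastitis episodes
binary = (cases > 0).astype(int)         # binary coding: repeat episodes are lost

lam_hat = cases.mean()                   # Poisson MLE of the rate
p_binary = binary.mean()                 # observed fraction of cows with >= 1 case
p_poisson = 1 - np.exp(-lam_hat)         # P(at least one case) implied by the Poisson fit
print(round(p_binary, 3), round(p_poisson, 3))
```

The two probabilities agree, but the count model additionally distinguishes cows with one episode from cows with several, which is where its extra heritability signal can come from.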

  10. Formulating Rural Development Programmes to Aid Low-Income Farm Families

    OpenAIRE

    Findeis, Jill L.; Reddy, Venkateshwar K.

    1989-01-01

    Rural development programmes may facilitate the off-farm employment of low-income farm families and provide additional public support beyond traditional US farm income and price support programmes. To examine the implications of alternative rural development strategies for low-income farmers, joint off-farm labour participation models are developed for farm operators and spouses. Univariate and bivariate probit models are estimated, based on 1985 Current Population Survey farm household data. ...
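
Several records in this listing estimate bivariate probit models, in which two binary outcomes (here, hypothetically, operator and spouse off-farm work) share jointly normal errors with correlation rho. A minimal sketch of the likelihood on simulated data — all coefficient values are illustrative assumptions, and the slopes are held at their true values so that only rho is profiled:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(42)
n = 800
x = rng.normal(size=n)
rho, b1, b2 = 0.5, 1.0, -0.5                 # hypothetical true parameters
eps = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
y1 = (b1 * x + eps[:, 0] > 0).astype(int)    # first binary outcome
y2 = (b2 * x + eps[:, 1] > 0).astype(int)    # second binary outcome

def loglik(rho_c):
    """Bivariate probit log-likelihood at correlation rho_c (slopes fixed):
    sum_i log Phi2(q1*b1*x_i, q2*b2*x_i; q1*q2*rho_c), with q = 2y - 1."""
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1
    z = np.column_stack([q1 * b1 * x, q2 * b2 * x])
    s = q1 * q2
    ll = 0.0
    for sign in (1, -1):
        m = s == sign
        if m.any():
            cov = [[1.0, sign * rho_c], [sign * rho_c, 1.0]]
            ll += np.log(multivariate_normal.cdf(z[m], mean=[0.0, 0.0], cov=cov)).sum()
    return ll

# profile the likelihood over a grid of correlation values
grid = np.linspace(0.1, 0.9, 9)
rho_hat = grid[int(np.argmax([loglik(r) for r in grid]))]
print(rho_hat)  # recovers a value near the true rho
```

In practice all coefficients and rho are estimated jointly with a numerical optimizer (e.g. Stata's `biprobit` or R's `GJRM`); profiling only rho keeps the sketch short.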

  11. Diagnostic value of sTREM-1 in bronchoalveolar lavage fluid in ICU patients with bacterial lung infections: a bivariate meta-analysis.

    Science.gov (United States)

    Shi, Jia-Xin; Li, Jia-Shu; Hu, Rong; Li, Chun-Hua; Wen, Yan; Zheng, Hong; Zhang, Feng; Li, Qin

    2013-01-01

    The serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is a useful biomarker in differentiating bacterial infections from others. However, the diagnostic value of sTREM-1 in bronchoalveolar lavage fluid (BALF) in lung infections has not been well established. We performed a meta-analysis to assess the accuracy of sTREM-1 in BALF for diagnosis of bacterial lung infections in intensive care unit (ICU) patients. We searched PUBMED, EMBASE and Web of Knowledge (from January 1966 to October 2012) databases for relevant studies that reported diagnostic accuracy data of BALF sTREM-1 in the diagnosis of bacterial lung infections in ICU patients. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated by a bivariate regression analysis. Measures of accuracy and Q point value (Q*) were calculated using summary receiver operating characteristic (SROC) curve. The potential between-studies heterogeneity was explored by subgroup analysis. Nine studies were included in the present meta-analysis. Overall, the prevalence was 50.6%; the sensitivity was 0.87 (95% confidence interval (CI), 0.72-0.95); the specificity was 0.79 (95% CI, 0.56-0.92); the positive likelihood ratio (PLR) was 4.18 (95% CI, 1.78-9.86); the negative likelihood ratio (NLR) was 0.16 (95% CI, 0.07-0.36), and the diagnostic odds ratio (DOR) was 25.60 (95% CI, 7.28-89.93). The area under the SROC curve was 0.91 (95% CI, 0.88-0.93), with a Q* of 0.83. Subgroup analysis showed that the assay method and cutoff value influenced the diagnostic accuracy of sTREM-1. BALF sTREM-1 is a useful biomarker of bacterial lung infections in ICU patients. Further studies are needed to confirm the optimized cutoff value.
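
As a quick check on the arithmetic, the likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity; the published values (4.18, 0.16, 25.60) differ slightly because the bivariate model pools the quantities jointly rather than taking ratios of the point estimates:

```python
# Pooled estimates reported in the abstract
sens, spec = 0.87, 0.79

plr = sens / (1 - spec)   # positive likelihood ratio
nlr = (1 - sens) / spec   # negative likelihood ratio
dor = plr / nlr           # diagnostic odds ratio = (sens*spec)/((1-sens)*(1-spec))

print(round(plr, 2), round(nlr, 2), round(dor, 2))  # 4.14 0.16 25.18
```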

  12. Landslide susceptibility analysis in central Vietnam based on an incomplete landslide inventory: Comparison of a new method to calculate weighting factors by means of bivariate statistics

    Science.gov (United States)

    Meinhardt, Markus; Fink, Manfred; Tünschel, Hannes

    2015-04-01

    Vietnam is regarded as a country strongly impacted by climate change. Population and economic growth result in additional pressures on the ecosystems in the region. In particular, changes in landuse and precipitation extremes lead to a higher landslide susceptibility in the study area (approx. 12,400 km2), located in central Vietnam and impacted by a tropical monsoon climate. Hence, this natural hazard is a serious problem in the study area. A probability assessment of landslides is therefore undertaken through the use of bivariate statistics. However, the landslide inventory based only on field campaigns does not cover the whole area. To avoid a systematic bias due to the limited mapping area, the investigated regions are depicted as the viewshed in the calculations. On this basis, the distribution of the landslides is evaluated in relation to the maps of 13 parameters, showing the strongest correlation to distance to roads and precipitation increase. An additional weighting of the input parameters leads to better results, since some parameters contribute more to landslides than others. The method developed in this work is based on the validation of different parameter sets used within the statistical index method. It is called "omit error" because omitting each parameter in turn yields the weightings, which describe how strongly every single parameter improves or reduces the objective function. Furthermore, this approach is used to find a better input parameter set by excluding some parameters. After this optimization, nine input parameters are left, and they are weighted by the omit error method, providing the best susceptibility map with a success rate of 92.9% and a prediction rate of 92.3%. This is an improvement of 4.4% and 4.2%, respectively, compared to the basic statistical index method with the 13 input parameters.
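
The "omit error" idea can be sketched on synthetic data. Everything below — factor names, class structure, and a simplified success-rate objective (fraction of landslides captured by the top 30% most susceptible cells) — is an illustrative assumption, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Hypothetical conditioning factors, each already discretized into classes
slope = rng.integers(0, 4, n)
dist_road = rng.integers(0, 3, n)
landuse = rng.integers(0, 3, n)          # deliberately uninformative noise factor
# Landslides are more likely on steep slopes and near roads
p = 0.02 + 0.04 * slope + 0.03 * (2 - dist_road)
y = rng.random(n) < p
factors = {"slope": slope, "dist_road": dist_road, "landuse": landuse}

def statistical_index(factor, y):
    """Information value per class: ln(class landslide density / overall density)."""
    overall = y.mean()
    si = np.zeros(len(factor))
    for c in np.unique(factor):
        m = factor == c
        si[m] = np.log(max(y[m].mean(), 1e-9) / overall)
    return si

def success_rate(subset):
    """Fraction of landslides captured by the 30% most susceptible cells."""
    score = sum(statistical_index(factors[k], y) for k in subset)
    return y[score >= np.quantile(score, 0.7)].sum() / y.sum()

full = success_rate(factors)
# omit-error weight: how much the objective degrades when a factor is dropped
weights = {k: full - success_rate([f for f in factors if f != k]) for k in factors}
print(weights)
```

Factors whose omission barely changes the objective receive near-zero (or negative) weight, which is also how uninformative parameters can be identified and excluded before the final weighting.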

  13. Modelling world gold prices and USD foreign exchange relationship using multivariate GARCH model

    Science.gov (United States)

    Ping, Pung Yean; Ahmad, Maizah Hura Binti

    2014-12-01

    World gold price is a popular investment commodity. The series have often been modeled using univariate models. The objective of this paper is to show that there is a co-movement between gold price and USD foreign exchange rate. Using the effect of the USD foreign exchange rate on the gold price, a model that can be used to forecast future gold prices is developed. For this purpose, the current paper proposes a multivariate GARCH (bivariate GARCH) model. Using daily prices of both series from 01.01.2000 to 05.05.2014, a causal relation between the two series under study is found and a bivariate GARCH model is produced.

  14. Endogenous Women's Autonomy and the Use of Reproductive Health Services: Empirical Evidence from Tajikistan

    OpenAIRE

    Yusuke Kamiya

    2010-01-01

    Though gender equity is widely considered to be a key to improving maternal health in developing countries, little empirical evidence has been presented to support this claim. This paper investigates whether or not and how female autonomy within the household affects women's use of reproductive health care in Tajikistan, where the situation of maternal health and gender equity is worse compared with neighbouring countries. Estimation is performed using bivariate probit models in which woman's...

  15. Contractual arrangements and food quality certifications in the Mexican avocado industry

    OpenAIRE

    Arana-Coronado, J.J.; Bijman, J.; Omta, S.W.F.; Oude Lansink, A.G.J.M.

    2013-01-01

    The adoption of private quality certifications in agrifood supply chains often requires specific investments by producers, which can be safeguarded by choosing specific contractual arrangements. Based on survey data from avocado producers in Mexico, this paper aims to analyze the impact of transaction costs and relationship characteristics on the joint choice of contractual arrangements and quality certifications. Using a bivariate probit model, it shows that a producer’s decision to adopt p...

  16. The Impact of Agricultural Cooperatives on the Adoption of Technologies and Farm Performance of Apple Farmers in China

    OpenAIRE

    Ma, Wanglin

    2016-01-01

    This study examines the impact of agricultural cooperative membership on the adoption of technologies and farm performance, using data collected from 481 apple farmers in China. Specifically, the study first examines the impact of cooperative membership on investment in organic fertilizer, farmyard manure and chemical fertilizer using a recursive bivariate probit model. Second, the causal link between cooperative membership and adoption of integrated pest management (IPM) technology is analyz...

  17. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    Science.gov (United States)

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, ..., x_I for X and y_1, y_2, ..., y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and of Y given values of X, are available. We…

  18. Comparison and validation of shallow landslides susceptibility maps generated by bi-variate and multi-variate linear probabilistic GIS-based techniques. A case study from Ribeira Quente Valley (S. Miguel Island, Azores)

    Science.gov (United States)

    Marques, R.; Amaral, P.; Zêzere, J. L.; Queiroz, G.; Goulart, C.

    2009-04-01

    Slope instability research and susceptibility mapping is a fundamental component of hazard assessment and is of extreme importance for risk mitigation, land-use management and emergency planning. Landslide susceptibility zonation has been actively pursued during the last two decades and several methodologies are still being improved. Among all the methods presented in the literature, indirect quantitative probabilistic methods have been extensively used. In this work different linear probabilistic methods, both bi-variate and multi-variate (Informative Value, Fuzzy Logic, Weights of Evidence and Logistic Regression), were used for the computation of the spatial probability of landslide occurrence, using the pixel as mapping unit. The methods used are based on linear relationships between landslides and 9 considered conditioning factors (altimetry, slope angle, exposition, curvature, distance to streams, wetness index, contribution area, lithology and land-use). It was assumed that future landslides will be conditioned by the same factors as past landslides in the study area. The presented work was developed for Ribeira Quente Valley (S. Miguel Island, Azores), a study area of 9,5 km2, mainly composed of volcanic deposits (ash and pumice lapilli) produced by explosive eruptions in Furnas Volcano. These materials, associated with the steepness of the slopes (38,9% of the area has slope angles higher than 35°, reaching a maximum of 87,5°), make the area very prone to landslide activity. A total of 1.495 shallow landslides were mapped (at 1:5.000 scale) and included in a GIS database. The total affected area is 401.744 m2 (4,5% of the study area). Most slope movements are translational slides frequently evolving into debris-flows. The landslides are elongated, with maximum length generally equivalent to the slope extent, and their width normally does not exceed 25 m. The failure depth rarely exceeds 1,5 m and the volume is usually smaller than 700 m3. For modelling...

  19. Determining Effects of Genes, Environment, and Gene X Environment Interaction That Are Common to Breast and Ovarian Cancers Via Bivariate Logistic Regression

    National Research Council Canada - National Science Library

    Ramakrishnan, Viswanathan

    2003-01-01

    .... A generalized estimating equations (GEE) logistic regression model was used for the modeling. A shared trait is defined for two discrete traits based upon explicit patterns of trait concordance and discordance within twin pairs...

  20. Newton Leibniz integration for ket-bra operators in quantum mechanics (V)—Deriving normally ordered bivariate-normal-distribution form of density operators and developing their phase space formalism

    Science.gov (United States)

    Fan, Hong-yi

    2008-06-01

    We show that Newton-Leibniz integration over Dirac's ket-bra projection operators with continuum variables, which can be performed by the technique of integration within ordered product (IWOP) of operators [Hong-yi Fan, Hai-liang Lu, Yue Fan, Ann. Phys. 321 (2006) 480], can directly recast density operators and generalized Wigner operators into normally ordered bivariate-normal-distribution form, which has resemblance in statistics. In this way the phase space formalism of quantum mechanics can be developed. The Husimi operator, entangled Husimi operator and entangled Wigner operator for entangled particles with different masses are naturally introduced by virtue of the IWOP technique, and their physical meanings are explained.

  1. Modeling animal movements using stochastic differential equations

    Science.gov (United States)

    Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie

    2004-01-01

    We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...

  2. Probit-method for information security risk assessment

    OpenAIRE

    Mokhor, Volodymyr; Tsurkan, Vasyl

    2013-01-01

    A rationale is given for the task of developing a methodology for the quantitative assessment of information security risks for specific objects of information activity, based on probit analysis.

  3. Comparison on models for genetic evaluation of non-return rate and success in first insemination of the Danish Holstein cow

    DEFF Research Database (Denmark)

    Sun, C; Su, G

    2010-01-01

    The aim of this study was to compare a linear Gaussian model with a logit model and a probit model for genetic evaluation of non-return rate within 56 d after first insemination (NRR56) and success in first insemination (SFI). The whole dataset used in the analysis contained 471,742 records from...... the first lactation of the Danish Holstein cows, covering insemination years from 1995 to 2004. Model stability was evaluated by the correlation between sire EBV (estimated breeding values) from two sub-datasets. The predictive ability of the models was assessed by two criteria: 1) the correlation between...

  4. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using logistic regression, as well as probit regression. Probit regression was found to yield superior fits and showed that models with not only 2nd order parameter interactions, but also 3rd order parameter interactions performed the best.

  5. Toxicological and physiological effects of ethephon on the model organism, Galleria mellonella L. 1758 (Lepidoptera: Pyralidae)

    OpenAIRE

    ALTUNTAŞ, Hülya; DUMAN, Emine; ŞANAL DEMİRCİ, Sümeyra Nur; ERGİN, Ekrem

    2016-01-01

    Ethephon (ETF) has been used in agriculture as an ethylene releaser type of plant growth regulator. The aim of this work was to determine the ecotoxicological effects of ETF on the survival and the antioxidant metabolism of the insects using a model organism Galleria mellonella L. 1758. A toxicity test was performed to determine the lethal doses of ETF on larvae. According to probit assay, the LD50 and LD99 values for force fed larvae were 344 and 419 µg/5 µl, respectively, 30 d after treatme...

  6. A Vehicle for Bivariate Data Analysis

    Science.gov (United States)

    Roscoe, Matt B.

    2016-01-01

    Instead of reserving the study of probability and statistics for special fourth-year high school courses, the Common Core State Standards for Mathematics (CCSSM) takes a "statistics for all" approach. The standards recommend that students in grades 6-8 learn to summarize and describe data distributions, understand probability, draw…

  7. Analysis of the asymmetrical shortest two-server queueing model

    NARCIS (Netherlands)

    J.W. Cohen

    1995-01-01

    This study presents the analytic solution for the asymmetrical two-server queueing model with arriving customers joining the shorter queue for the case with Poisson arrivals and negative exponentially distributed service times. The bivariate generating function of the stationary joint...

  8. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  9. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  10. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…

  11. International Sign Predictability of Stock Returns: The Role of the United States

    DEFF Research Database (Denmark)

    Nyberg, Henri; Pönkä, Harri

    We study the directional predictability of monthly excess stock market returns in the U.S. and ten other markets using univariate and bivariate binary response models. Our main interest is on the potential benefits of predicting the signs of the returns jointly, focusing on the predictive power...... from the U.S. to foreign markets. We introduce a new bivariate probit model that allows for such a contemporaneous predictive linkage from one market to the other. Our in-sample and out-of-sample forecasting results indicate superior predictive performance of the new model over the competing models...... by statistical measures and market timing performance, suggesting gradual diffusion of predictive information from the U.S. to the other markets....

  12. Taxes and Bribes in Uganda

    Science.gov (United States)

    Jagger, Pamela; Shively, Gerald

    2016-01-01

    Using data from 433 firms operating along Uganda’s charcoal and timber supply chains we investigate patterns of bribe payment and tax collection between supply chain actors and government officials responsible for collecting taxes and fees. We examine the factors associated with the presence and magnitude of bribe and tax payments using a series of bivariate probit and Tobit regression models. We find empirical support for a number of hypotheses related to payments, highlighting the role of queuing, capital-at-risk, favouritism, networks, and role in the supply chain. We also find that taxes crowd-in bribery in the charcoal market. PMID:27274568
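
The bribe-magnitude side of the analysis above uses Tobit regression, which treats zero payments as censored observations of a latent amount. A minimal maximum-likelihood sketch on simulated data (all parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)                  # e.g. a standardized firm characteristic
alpha, beta, sigma = 0.5, 1.5, 1.0      # hypothetical true parameters
y_star = alpha + beta * x + sigma * rng.normal(size=n)
y = np.maximum(y_star, 0.0)             # observed payment, censored at zero

def negll(theta):
    """Negative Tobit log-likelihood: censored obs contribute P(y* <= 0),
    uncensored obs contribute the normal density of y."""
    a, b, log_s = theta
    s = np.exp(log_s)                   # keep sigma positive
    mu = a + b * x
    cens = y <= 0
    ll = norm.logcdf(-mu[cens] / s).sum()
    ll += norm.logpdf((y[~cens] - mu[~cens]) / s).sum() - (~cens).sum() * np.log(s)
    return -ll

res = minimize(negll, x0=np.zeros(3), method="BFGS")
a_hat, b_hat, s_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(round(b_hat, 2))  # near the true slope 1.5
```

An OLS fit on the censored `y` would bias the slope toward zero; the Tobit likelihood corrects for the pile-up of observations at the censoring point.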

  13. The physician-patient relationship revisited: the patient's view.

    Science.gov (United States)

    Schneider, Udo; Ulrich, Volker

    2008-12-01

    The importance of the physician-patient relationship for the health care market is beyond controversy. Recent work emphasizes a two-sided asymmetric information relationship between physician and patient. In contrast to most work looking only at the physician's perspective, our paper concentrates on the patient's view. Estimation results support the hypotheses that physician consultation and health-relevant behavior are not stochastically independent. In the recursive bivariate probit model, the patient's health-relevant behavior has a significant influence on the probability of a physician visit. This means that health care demand, and not only the contact decision, is determined by both patient and physician.

  14. Identifying the factors affecting bike-sharing usage and degree of satisfaction in Ningbo, China

    OpenAIRE

    Guo, Yanyong; Zhou, Jibiao; Wu, Yao; Li, Zhibin

    2017-01-01

    The boom in bike-sharing is receiving growing attention as societies become more aware of the importance of active non-motorized traffic modes. However, the low usage of this transport mode in China raises concerns. The primary objective of this study is to explore factors affecting bike-sharing usage and the degree of satisfaction with bike-sharing among users in China. Data were collected by a questionnaire survey in Ningbo. A bivariate ordered probit (BOP) model was devel...

  15. Taxes and Bribes in Uganda.

    Science.gov (United States)

    Jagger, Pamela; Shively, Gerald

    Using data from 433 firms operating along Uganda's charcoal and timber supply chains we investigate patterns of bribe payment and tax collection between supply chain actors and government officials responsible for collecting taxes and fees. We examine the factors associated with the presence and magnitude of bribe and tax payments using a series of bivariate probit and Tobit regression models. We find empirical support for a number of hypotheses related to payments, highlighting the role of queuing, capital-at-risk, favouritism, networks, and role in the supply chain. We also find that taxes crowd-in bribery in the charcoal market.

  16. A model for predicting Inactivity in the European Banking Sector

    Directory of Open Access Journals (Sweden)

    Themistokles Lazarides

    2015-08-01

    Purpose – The paper addresses the issue of inactivity and tries to detect its causes using econometric models. The banking sector of Europe has been under transformation or restructuring for almost half a century. Design/methodology/approach – Probit models and descriptive statistics have been used to create a system that predicts inactivity. The data was collected from Bankscope. Findings – The results of the econometric models show that, of the six groups of indicators, four have been found to be statistically important (performance, size, ownership, corporate governance). These findings are consistent with the theory. Research limitations/implications – The limitation is that Bankscope does not provide any longitudinal data regarding ownership or management structure, and there are many missing values before 2007 for some of the financial ratios and data. Originality/value – The paper's value and innovation is that it takes a systemic approach to finding indicators of inactivity.

  17. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. Estimation is carried out by a Bayesian method using Markov chain Monte Carlo. A comparison of separate univariate response models with the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  18. Do Stochastic Traffic Assignment Models Consider Differences in Road Users Utility Functions?

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1996-01-01

    The early stochastic traffic assignment models (e.g. Dial, 1971) build on the assumption that different routes are independent (the logit-model concept). Thus, the models gave severe problems in networks with overlapping routes. Daganzo & Sheffi (1977) suggested to use probit based models...... to overcome this problem. Sheffi & Powell (1981) presented a practically operational solution algorithm in which the travel resistance for each road segment is adjusted according to a Monte Carlo simulation following the Normal-distribution. By this the road users’ ‘perceived travel resistances’ are simulated....... A similar concept is used as a part of the Stochastic User Equilibrium model (SUE) suggested by Daganzo and Sheffi (1977) and operationalized by Sheffi & Powell (1982). In the paper it is discussed whether this way of modelling the ‘perceived travel resistance’ is sufficient to describe the road users...

  19. A Jump-Diffusion Model with Stochastic Volatility and Durations

    DEFF Research Database (Denmark)

    Wei, Wei; Pelletier, Denis

    Market microstructure theories suggest that the durations between transactions carry information about volatility. This paper puts forward a model featuring stochastic volatility, stochastic conditional duration, and jumps to analyze high frequency returns and durations. Durations affect price...... jumps in two ways: as exogenous sampling intervals, and through the interaction with volatility. We adopt a bivariate Ornstein-Ulenbeck process to model intraday volatility and conditional duration. We develop a MCMC algorithm for the inference on irregularly spaced multivariate processes with jumps...

  20. Discrete choice modeling of environmental security. Research report

    Energy Technology Data Exchange (ETDEWEB)

    Carson, K.S.

    1998-10-01

    The presence of overpopulation or unsustainable population growth may place pressure on the food and water supplies of countries in sensitive areas of the world. Severe air or water pollution may place additional pressure on these resources. These pressures may generate both internal and international conflict in these areas as nations struggle to provide for their citizens. Such conflicts may result in United States intervention, either unilaterally or through the United Nations. Therefore, it is in the interests of the United States to identify potential areas of conflict in order to properly train and allocate forces. The purpose of this research is to forecast the probability of conflict in a nation as a function of its environmental conditions. Probit, logit and ordered probit models are employed to forecast the probability of a given level of conflict. Data from 95 countries are used to estimate the models. Probability forecasts are generated for these 95 nations. Out-of-sample forecasts are generated for an additional 22 nations. These probabilities are then used to rank nations from highest probability of conflict to lowest. The results indicate that the dependence of a nation's economy on agriculture, the rate of deforestation, and the population density are important variables in forecasting the probability and level of conflict. These results indicate that environmental variables do play a role in generating or exacerbating conflict. It is unclear that the United States military has any direct role in mitigating the environmental conditions that may generate conflict. A more important role for the military is to aid in data gathering to generate better forecasts so that the troops are adequately prepared when conflict arises.

  1. Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.

    Science.gov (United States)

    Wu, Hongqian; Zhang, Ying; Long, Jeffrey D

    2017-03-15

    Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, and these binomial data are often overdispersed. The standard generalized linear mixed effects model may result in severe underestimation of standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm of Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
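    The overdispersion this record targets is easy to exhibit numerically: a beta-binomial with intra-cluster correlation rho inflates the binomial variance by the factor 1 + (n - 1)rho, which is exactly what a plain binomial GLMM ignores. A small check using SciPy's `betabinom` (the parameter values below are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import betabinom, binom

n, p, rho = 20, 0.3, 0.2   # trials, mean proportion, intra-cluster correlation
# Beta(a, b) mixing distribution chosen so that E[Y/n] = p and the
# pairwise correlation of the Bernoulli draws equals rho = 1/(a + b + 1).
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho

var_binomial = binom(n, p).var()          # n p (1 - p)
var_betabinom = betabinom(n, a, b).var()  # inflated by 1 + (n - 1) rho
inflation = var_betabinom / var_binomial
```

    With these settings the inflation factor is 1 + 19 * 0.2 = 4.8, i.e. naive binomial standard errors would be understated by a factor of more than two.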

  2. Factors Explaining Households’ Cash Payment for Solid Waste Disposal and Recycling Behaviors in South Africa

    Directory of Open Access Journals (Sweden)

    Abayomi Samuel Oyekale

    2015-11-01

    Full Text Available Environmental safety is one of the top policy priorities in some developing countries. This study analyzed the factors influencing waste disposal and recycling by households in South Africa. The data were collected by Statistics South Africa in 2012 during the General Household Survey (GHS). Analysis of the data was carried out with the Bivariate Probit model. The results showed that 56.03% and 31.98% of all the households disposed of waste through local authority/private companies and own refuse dump sites, respectively. Limpopo and Mpumalanga had the highest usage of own refuse dump sites and dumping of waste anywhere. Littering (34.03%) and land degradation (31.53%) were the impacts most perceived by the households, while 38.42% were paying for waste disposal and 8.16% would be willing to pay. Only 6.54% and 1.70% of all the households were recycling and selling waste, respectively, with glass (4.10%) and papers (4.02%) being the most recycled. The results of the Bivariate Probit model identified income, access to social grants, Indian origin, and attainment of formal education as significant variables influencing payment for waste disposal and recycling. It was inter alia recommended that revision of environmental law, poverty alleviation, and gender-sensitive environmental education and awareness creation would enhance environmental conservation behaviors in South Africa.

  3. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grassland to rain forests. Unfortunately, only few data on SWC are available as its measurement is very time consuming and costly and needs specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion in plant species distribution models (SDMs) as a predictor. The aims of this study were, first, to test a new methodology allowing the problem of the scarcity of SWC measurements to be overcome and, second, to model and spatially project SWC in order to improve plant SDMs with the inclusion of the SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF=0 to pF=4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted-R2 > 0.5 for each SWC at different pF. The second step consisted of running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. 
SWC ensemble models remained very good, with
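    The ensemble step described above can be sketched as follows: fit one simple regression of the response on each candidate predictor, keep only fits with adjusted R2 above 0.5, and average their predictions with adjusted-R2 weights. This is a schematic reading of the procedure on synthetic data, not the authors' implementation.

```python
import numpy as np

def adjusted_r2(y, yhat, n_params):
    """Adjusted R^2 for a fit with n_params slope parameters."""
    n = len(y)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

def weighted_ensemble(y, predictors, threshold=0.5):
    """Fit y ~ x for each candidate predictor, keep fits with adjusted
    R^2 above `threshold`, and average the fitted values weighted by
    adjusted R^2. Assumes at least one predictor passes the filter."""
    preds, weights = [], []
    for x in predictors:
        A = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        yhat = A @ coef
        r2a = adjusted_r2(y, yhat, 1)
        if r2a > threshold:
            preds.append(yhat)
            weights.append(r2a)
    w = np.array(weights)
    return np.average(preds, axis=0, weights=w), w
```

    With only 24 observations, the adjusted-R2 filter is what keeps pure-noise predictors out of the ensemble.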

  4. Two-vehicle injury severity models based on integration of pavement management and traffic engineering factors.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Yan, Xuedong; Zaretzki, Russell L; Richards, Stephen

    2013-01-01

    The severity of traffic-related injuries has been studied by many researchers in recent decades. However, the evaluation of many factors is still in dispute and, to date, few studies have taken pavement management factors into account as points of interest. The objective of this article is to evaluate the combined influences of pavement management factors and traditional traffic engineering factors on the injury severity of 2-vehicle crashes. This study examines 2-vehicle rear-end, sideswipe, and angle collisions that occurred on Tennessee state routes from 2004 to 2008. Both the traditional ordered probit (OP) model and the Bayesian ordered probit (BOP) model with a weak informative prior were fitted for each collision type. The performances of these models were evaluated based on the parameter estimates and deviances. The results indicated that pavement management factors played identical roles in all 3 collision types. Pavement serviceability produces significant positive effects on the severity of injuries. The pavement distress index (PDI), rutting depth (RD), and rutting depth difference between right and left wheels (RD_df) were not significant in any of these 3 collision types. The effects of traffic engineering factors varied across collision types, except that a few were consistently significant in all 3 collision types, such as annual average daily traffic (AADT), rural-urban location, speed limit, peak hour, and light condition. The findings of this study indicated that improved pavement quality does not necessarily lessen the severity of injuries when a 2-vehicle crash occurs. The effects of traffic engineering factors are not universal but vary by the type of crash. The study also found that the BOP model with a weak informative prior can be used as an alternative but was not superior to the traditional OP model in terms of overall performance.

  5. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.
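    For reference, the Hosmer-Lemeshow statistic discussed here is computed by binning observations on the fitted probabilities and comparing observed with expected counts in each bin. A compact sketch of that standard construction (not the paper's code):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow GOF statistic: sort by fitted probability p, split
    into g equal-size groups, and sum (O - E)^2 / E over both outcome
    categories in every group. Reference distribution: chi^2 with g - 2 df."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        n = len(idx)
        o1, e1 = y[idx].sum(), p[idx].sum()
        o0, e0 = n - o1, n - e1
        stat += (o1 - e1) ** 2 / e1 + (o0 - e0) ** 2 / e0
    return stat, chi2.sf(stat, g - 2)
```

    Under a noncanonical link the construction is the same; the paper's point is about the reference distribution and grouping, not the formula.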

  6. Modeling pairwise dependencies in precipitation intensities

    Directory of Open Access Journals (Sweden)

    M. Vrac

    2007-12-01

    Full Text Available In statistics, extreme events are classically defined as maxima over a block length (e.g. annual maxima of daily precipitation) or as exceedances above a given large threshold. These definitions allow the hydrologist and the flood planner to apply the univariate Extreme Value Theory (EVT) to their time series of interest. But these strategies have two main drawbacks. Firstly, working with maxima or exceedances implies that a lot of observations (those below the chosen threshold or the maximum) are completely disregarded. Secondly, this univariate modeling does not take into account the spatial dependence. Nearby weather stations are considered independent, although their recordings can show otherwise.

    To start addressing these two issues, we propose a new statistical bivariate model that takes advantage of recent advances in multivariate EVT. Our model can be viewed as an extension of the non-homogeneous univariate mixture. The two strong points of this latter model are its capacity to model the entire range of precipitation (and not only the largest values) and the absence of an arbitrarily fixed large threshold to define exceedances. Here, we adapt this mixture and broaden it to the joint modeling of bivariate precipitation recordings. The performance and flexibility of this new model are illustrated on simulated and real precipitation data.

  7. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    Research results are presented on three new models with potential applications to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure by competing risks theory in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple failure mode domain of competing risks. The model used to illustrate this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined.

  8. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources will be shown together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$-sources) together with some remarks on beam transport.

  9. Comparison of models for genetic evaluation of number of inseminations to conception in Danish Holstein cows.

    Science.gov (United States)

    Guo, Gang; Hou, Yali; Zhang, Yuan; Su, Guosheng

    2017-04-01

    Number of inseminations to conception (NINS), an important fertility trait, requires appropriate approaches for genetic evaluation due to its non-normal distribution and censored records. In this study, we analyzed NINS in 474 837 Danish Holstein cows at their first lactation using seven models which deal with the categorical phenotypes and censored records in different manners, and further assessed these models with regard to stability, lack of bias and accuracy of prediction. The heritabilities estimated from four models based on original NINS, specified as a linear Gaussian model, categorical threshold model, threshold linear model and survival model, were similar (0.031-0.037). For the other three models based on the binary response derived from NINS, referred to as the threshold model (TM), logistic and probit models (LOGM and PROM), the heritabilities were estimated as 0.027, 0.063 and 0.027, respectively. The model comparison concluded that different models could lead to slightly different sire rankings in terms of breeding values; a more complicated model led to less stability of prediction; and the models based on the binary response derived from NINS (TM, LOGM and PROM) performed slightly better in terms of unbiased and accurate prediction of breeding values. © 2016 Japanese Society of Animal Science.

  10. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h2 … for linear models; h2 > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  11. Use of an Amoeba proteus model for in vitro cytotoxicity testing in phytochemical research. Application to Euphorbia hirta extracts.

    Science.gov (United States)

    Duez, P; Livaditis, A; Guissou, P I; Sawadogo, M; Hanocq, M

    1991-09-01

    Amoeba proteus is proposed as a low-cost multi-purpose biochemical tool for screening and standardizing cytotoxic plant extracts with possible application in the laboratories of developing countries. Advantages and limitations of this test are examined and different mathematical treatments (probit analysis versus curve fitting to Von Bertalanffy and Hill functions) are investigated. Known anti-cancer (doxorubicin, daunorubicin, dacarbazine, 5-fluorouracil) and antiparasitic (emetine, dehydroemetine, metronidazole, cucurbitine, chloroquine) drugs were tested using this method and only metronidazole appeared inactive. Application of this model to Euphorbia hirta established that a 100 degrees C aqueous extraction of fresh aerial parts allows efficient extraction of active constituents and that drying the plant material before extraction considerably reduces activity.
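    The probit analysis referred to in this record fits a cumulative-normal dose-response curve and reads off summary quantities such as the LC50. A minimal sketch with made-up dose-mortality numbers (illustrative only, not data from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit_response(log_dose, a, b):
    """Probit model: response fraction = Phi(a + b * log10(dose))."""
    return norm.cdf(a + b * log_dose)

# Hypothetical dose-mortality data (fraction of organisms affected).
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
mortality = np.array([0.05, 0.12, 0.30, 0.62, 0.85, 0.97])

(a, b), _ = curve_fit(probit_response, np.log10(doses), mortality, p0=[0.0, 1.0])
lc50 = 10 ** (-a / b)   # dose at which the fitted probit predicts 50% response
```

    The LC50 falls out of setting the probit index to zero: a + b * log10(d) = 0, hence d = 10^(-a/b).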

  12. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    Science.gov (United States)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, popularly applied and several relatively new bivariate Archimedean copulas were employed to derive regional and spatially based copula models to appraise drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to the run theory and fitted with suitable marginal distributions. The maximum likelihood estimation (MLE) and curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and the entire mainland China. The goodness-of-fit tests as indicated by the CFM showed that copula NN19 in sub-regions III, IV, V, VI and mainland China, NN20 in sub-region I and NN13 in sub-region VII are the best for modeling drought variables. Bivariate drought probability across mainland China is relatively high, and the highest drought probabilities are found mainly in Northwestern and Southwestern China. The results also showed that different sub-regions may suffer varying drought risks. The drought risks observed in sub-regions III, VI and VII are significantly greater than in other sub-regions. A higher probability of droughts of longer durations in the sub-regions also corresponds to shorter return periods with greater drought severity. These results may imply tremendous challenges for water resources management in different sub-regions, particularly Northwestern and Southwestern China.
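    Besides the MLE and CFM compared in this record, a third standard route for Archimedean copula parameters is inversion of Kendall's tau; for the Clayton family tau = theta / (theta + 2). The sketch below estimates theta that way on data simulated from a Clayton copula (a generic illustration, not the paper's procedure):

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_theta_from_tau(x, y):
    """Moment estimate of the Clayton parameter via Kendall's tau:
    tau = theta / (theta + 2)  =>  theta = 2 tau / (1 - tau)."""
    tau, _ = kendalltau(x, y)
    return 2 * tau / (1 - tau)

def simulate_clayton(theta, n, rng):
    """Conditional sampling: draw u ~ U(0,1), then invert C(v | u)."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
    return u, v
```

    The same tau-inversion idea works for other one-parameter families (e.g. Gumbel, tau = 1 - 1/theta), with the appropriate relation swapped in.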

  13. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    ... complicated relation between groundwater occurrence and groundwater conditioning factors, which permits investigation of both systemic and stochastic uncertainty. Finally, it can be realized that these techniques are very beneficial for groundwater potential analyzing and can be practical for water-resource management ...

  14. GIS-Based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)


    (an example of Iran). Ali Haghizadeh, Davoud Davoudi Moghaddam, Hamid Reza Pourghasemi; Department of Range and Watershed Management Engineering, Lorestan University, Lorestan, Iran; Department of Natural Resources and ...

  15. A Bivariate Extension to Traditional Empirical Orthogonal Function Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Hilger, Klaus Baggesen; Andersen, Ole Baltazar

    2002-01-01

    This paper describes the application of canonical correlations analysis to the joint analysis of global monthly mean values of 1996-1997 sea surface temperature (SST) and height (SSH) data. The SST data are considered as one set and the SSH data as another set of multivariate observations, both w...... as for example an increase in the SST will lead to an increase in the SSH. The analysis clearly shows the build-up of one of the largest El Niño events on record. Also the analysis indicates a phase lag of approximately one month between the SST and SSH fields....
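    Canonical correlation analysis of two co-registered field data sets, as applied to the SST/SSH grids here, reduces to an SVD after whitening each set. A compact numpy-only sketch on synthetic data standing in for the two fields:

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """Leading canonical correlation between data sets X (n x p) and
    Y (n x q): centre each set, orthonormalize via QR, then the singular
    values of Qx' Qy are the canonical correlations."""
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(s[0], 1.0)
```

    The corresponding left/right singular vectors give the canonical weight patterns; for gridded SST/SSH fields the rows are time steps and the columns spatial points (usually after an EOF truncation).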

  16. Dissecting the correlation structure of a bivariate phenotype ...

    Indian Academy of Sciences (India)

    Unknown

    High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination ... different trait parameters and quantitative trait distributions. An application of the method .... mean vectors have components α1, β1 or – α1 and α2, β2 or – α2, for the two traits ...

  17. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... of both systemic and stochastic uncertainty. Finally, it can be realized that these techniques are very beneficial for groundwater potential analyzing and can be practical for water-resource management experts. Keywords. Groundwater; statistical index; Dempster–Shafer theory; water resource management; ...

  18. Estimating twin concordance for bivariate competing risks twin data

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus K.; Hjelmborg, Jacob B.

    2014-01-01

    For twin time-to-event data, we consider different concordance probabilities, such as the casewise concordance, that are routinely computed as a measure of the lifetime dependence/correlation for specific diseases. The concordance probability here is the probability that both twins have experienced...... over time, and covariates may be further influential on the marginal risk and dependence structure. We establish the estimators' large-sample properties and suggest various tests, for example, for inferring familial influence. The method is demonstrated and motivated by specific twin data on cancer...

  19. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Author Affiliations. MIN-TSAI LAI (Department of Business Administration, Southern Taiwan University of Science and Technology, Tainan, Taiwan, R.O.C.); SHIH-CHIH CHEN (Department of Accounting Information, Southern Taiwan University of Science and Technology, Tainan, Taiwan, R.O.C.) ...

  20. Natural-color maps via coloring of bivariate grid data

    Science.gov (United States)

    Darbyshire, Jane E.; Jenny, Bernhard

    2017-09-01

    Natural ground color is useful for maps where a representation of the Earth's surface matters. Natural color schemes are less likely to be misinterpreted, as opposed to hypsometric color schemes, and are generally preferred by map readers. The creation of natural-color maps was once limited to manual cartographic techniques, but they can now be created digitally with the aid of raster graphics editing software. However, the creation of natural-color maps still requires many steps, a significant time investment, and fairly detailed digital land cover information, which makes this technique impossible to apply to global web maps at medium and large scales. A particular challenge for natural-color map creation is adjusting colors with location to create smoothly blending transitions. Adjustments with location are required to show land cover transitions between climate zones with a natural appearance. This study takes the first step in automating the process in order to facilitate the creation of medium- and large-scale natural-color maps covering large areas. A coloring method based on two grid inputs is presented. Here, we introduce an algorithmic method and prototype software for creating maps with this technique. The prototype software allows the map author to interactively assign colors to design the appearance of the map. This software can generate web map tiles at a global level for medium and large scales. Example natural-color web maps created with this coloring technique are provided.
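    At its core, coloring from two grids is a 2-D lookup: quantize each grid value and index a small table of colors. A toy sketch of that indexing step (the lookup table, value ranges, and variable names are invented for illustration; the actual software interpolates the map author's interactive color assignments):

```python
import numpy as np

def bivariate_colorize(grid_a, grid_b, lut):
    """Map two co-registered grids with values in [0, 1] through a 2-D
    color lookup table `lut` of shape (na, nb, 3) -> an RGB image."""
    na, nb, _ = lut.shape
    ia = np.clip((grid_a * na).astype(int), 0, na - 1)
    ib = np.clip((grid_b * nb).astype(int), 0, nb - 1)
    return lut[ia, ib]
```

    Smooth transitions between climate zones come from making the table itself vary smoothly (or interpolating between its entries) rather than from the indexing.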

  1. Cross-validation method for bivariate measure with certain mixture

    Science.gov (United States)

    Sabre, Rachid

    2016-04-01

    We consider a pair of random variables (X, Y) whose probability measure is the sum of an absolutely continuous measure, a discrete measure and a finite number of absolutely continuous measures on several lines. An asymptotically unbiased and consistent estimate of the density of the continuous part is given in [13]. In this work, we focus on the choice of the parameters so that this estimate is optimal and its rate of convergence improved. To achieve this we use cross-validation techniques.

  2. BIVARIATE SYMMETRICAL STATISTICS OF LONG-RANGE DEPENDENT OBSERVATIONS

    NARCIS (Netherlands)

    DEHLING, H; TAQQU, MS

    Let (X(j)), j = 1, 2, ..., be a stationary, mean-zero Gaussian sequence with covariances r(k) = E[X(k+1)X(1)] satisfying r(0) = 1 and r(k) = k^(-D) L(k), where D is small and L is slowly varying at infinity. Consider the sequence Y(j) = G(X(j)), j = 1, 2, ..., where G is any measurable function. We obtain the

  3. Approximation Order from Bivariate C1-Cubics: A Counter Example.

    Science.gov (United States)

    1982-06-01

    Approved for public release; distribution unlimited. Sponsored by the U.S. Army Research Office and the National Science Foundation, P.O. Box 12211, Washington, DC. ... e.g., in [Fr]. This needlessly complicates the notation. It is sufficient to note that any permutation of the meshline families can be accomplished by ... for all p ∈ P (1) ... and show how this result leads, in standard quasi-interpolant fashion, to the conclusion that dist(f, S) = O(h³) (2)

  4. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
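    The ARL comparison in this record can be reproduced in outline: simulate streams of observations, run each through the EWMA recursion until the control limit is crossed, and average the run lengths. A univariate sketch with exponential data (the smoothing constant, limit width, and shift size are arbitrary choices, not the paper's settings):

```python
import numpy as np

def ewma_run_length(obs, lam, L, mu0, sigma0):
    """Steps until the EWMA statistic leaves mu0 +/- L * sigma_z, where
    sigma_z is the asymptotic EWMA standard deviation."""
    sigma_z = sigma0 * np.sqrt(lam / (2 - lam))
    z = mu0
    for t, x in enumerate(obs, start=1):
        z = lam * x + (1 - lam) * z
        if abs(z - mu0) > L * sigma_z:
            return t
    return len(obs)

def average_run_length(rng, shift=0.0, n_rep=200, lam=0.1, L=2.7):
    """Monte Carlo ARL for exponential(1) observations shifted by `shift`."""
    lengths = [ewma_run_length(rng.exponential(1.0, size=2000) + shift,
                               lam, L, mu0=1.0, sigma0=1.0)
               for _ in range(n_rep)]
    return float(np.mean(lengths))
```

    The paper's copula variants would replace the independent draws with dependent bivariate samples; the run-length bookkeeping is unchanged.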

  5. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  6. Assessing the copula selection for bivariate frequency analysis ...

    Indian Academy of Sciences (India)

    58

    ... frequency analysis methods cannot describe the random variable properties that are correlated (Sarhadi et al., 2016). This approach can lead to high uncertainty or failure of guidelines in water resources planning, operation and design of hydraulic structures, or creating the flood risk mapping (Chebana and ...

  7. An assessment on the use of bivariate, multivariate and soft ...

    Indian Academy of Sciences (India)

    A collapse hazard map is a useful tool in urban planning. There have been few studies carried out on collapse hazard ... application of a scoring system to several controlling factors. The probabilistic method for assessment of ... of conditions and processes controlling collapse events. In order to predict collapse, it is necessary to assume ...

  8. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single-tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) to per-tree aboveground biomass. The incapability of ALS technology to directly measure DBH leads to the need to predict DBH from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from the ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of exploring an explicit mathematical equation that explains the underlying relationship between DBH and other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables, and solves for DBH based on the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with the benchmarking least-squares linear regression and the k-MSN imputation, the copula-based method obtains better accuracy in DBH prediction for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and that it contributes to the reduction of prediction uncertainty.
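The equal-cumulative-probability assumption described in this abstract amounts to quantile matching between marginal distributions. A minimal sketch on synthetic data, with hypothetical lognormal margins (not the Lassen National Forest data or the authors' exact fitting procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic allometry: height is a noisy monotone function of DBH
dbh = rng.lognormal(mean=3.0, sigma=0.3, size=1000)          # cm
height = 1.3 + 0.8 * dbh ** 0.9 + rng.normal(0, 1, 1000)     # m

# Fit a marginal distribution to each structural parameter
dbh_params = stats.lognorm.fit(dbh, floc=0)
h_params = stats.lognorm.fit(height, floc=0)

def predict_dbh(h):
    """Equal-quantile prediction: assume a tree sits at the same cumulative
    probability in the height distribution and in the DBH distribution."""
    p = stats.lognorm.cdf(h, *h_params)
    return stats.lognorm.ppf(p, *dbh_params)

pred = predict_dbh(height)
rmse = np.sqrt(np.mean((pred - dbh) ** 2))
```

Because only the marginal CDFs and their monotone mapping are used, the sketch works for any margins; the copula in the paper additionally models the joint dependence among several predictors.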

  9. Dissecting the correlation structure of a bivariate phenotype ...

    Indian Academy of Sciences (India)

    Unknown

    We use Monte Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study on the Genetics of Alcoholism. [Ghosh S 2005 ...

  10. PUBLIC APPROVAL OF PLANT AND ANIMAL BIOTECHNOLOGY IN KOREA: AN ORDERED PROBIT ANALYSIS

    OpenAIRE

    Hallman, William K.; Onyango, Benjamin M.; Govindasamy, Ramu; Jang, Ho-Min; Puduri, Venkata S.

    2004-01-01

    This study analyzes predictors of Korean public acceptance of the use of biotechnology to create genetically modified food products. Results indicate that consumers with above-average knowledge of specific outcomes of genetic modification were more likely than those with inaccurate or no knowledge to approve the use of plant or animal genetic modification for the creation of new food products. Young South Korean consumers (ages 20 to 29) were more likely than older consumers (ages 50...

  11. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustered arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applied to modelling the clustered arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, are derived for this process. Some special cases with explicit results, numerical examples, and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  12. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they can be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between the red and green foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step applies nonparametric (kernel) smoothing to the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers segmentation and intensity estimation simultaneously, not separately as in commonly used existing software, and it works with the red and green intensities in a bivariate framework rather than estimating them separately via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the

  13. Acceptability of GM foods among Pakistani consumers.

    Science.gov (United States)

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-02

    In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, and particularly in Pakistan, few studies have focused on consumers' acceptance of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptance of GM foods. The data were analyzed by employing the bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicate that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers. In addition, older consumers were more willing to accept GM foods than young consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of GM foods also reduces perceived risks among Pakistani consumers.
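A bivariate probit of the kind used in this and several neighboring records can be sketched by maximum likelihood with the bivariate normal CDF. The data-generating parameters below are invented for illustration; the sign recode q = 2y - 1 is the standard device that lets one CDF expression cover all four outcome combinations.

```python
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, x, y1, y2):
    """Negative log-likelihood of a bivariate probit with one regressor per
    equation: y_k = 1[a_k + b_k * x + e_k > 0], corr(e1, e2) = rho."""
    a1, b1, a2, b2, arho = params
    rho = np.tanh(arho)                      # keeps rho inside (-1, 1)
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1          # recode 0/1 to -1/+1
    z1, z2 = q1 * (a1 + b1 * x), q2 * (a2 + b2 * x)
    ll = 0.0
    for s in (1.0, -1.0):                    # group points by sign of q1*q2
        m = q1 * q2 == s
        if m.any():
            p = stats.multivariate_normal.cdf(
                np.column_stack([z1[m], z2[m]]),
                mean=[0.0, 0.0], cov=[[1.0, s * rho], [s * rho, 1.0]])
            ll += np.log(np.clip(p, 1e-300, 1.0)).sum()
    return -ll

# Simulate data with known parameters, then estimate them (illustrative only)
rng = np.random.default_rng(3)
n = 150
x = rng.normal(size=n)
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
y1 = (0.2 + 1.0 * x + e[:, 0] > 0).astype(int)
y2 = (-0.3 + 0.8 * x + e[:, 1] > 0).astype(int)
start = np.zeros(5)
res = optimize.minimize(neg_loglik, start, args=(x, y1, y2),
                        method="Nelder-Mead", options={"maxiter": 1500})
rho_hat = np.tanh(res.x[4])
```

The tanh reparameterisation avoids boundary problems for rho during unconstrained optimisation; a production fit would use analytic gradients or a dedicated package rather than Nelder-Mead.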

  14. A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market

    DEFF Research Database (Denmark)

    Hurn, A.S.; Silvennoinen, Annastiina; Teräsvirta, Timo

    We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy is illustrated by two applications. In the first one, the dynamic relationship between the US gasoline price and consumption is studied and possible asymmetries in it considered. The second application consists of modelling two well known Icelandic riverflow series, previously considered by many hydrologists...

  15. Evaluation of fecal mRNA reproducibility via a marginal transformed mixture modeling approach

    Directory of Open Access Journals (Sweden)

    Davidson Laurie A

    2010-01-01

    Full Text Available Abstract Background Developing and evaluating new technology that enables researchers to recover gene-expression levels of colonic cells from fecal samples could be key to a non-invasive screening tool for early detection of colon cancer. The current study, to the best of our knowledge, is the first to investigate and report the reproducibility of fecal microarray data. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility and the preliminary analysis of fecal and mucosal data, we assessed the reliability of mixture density estimation and the reproducibility of fecal microarray data. Using Monte Carlo-based methods, we explored whether ICC values should be modeled as a beta-mixture or transformed first and fitted with a normal-mixture. We used outcomes from bootstrapped goodness-of-fit tests to determine which approach is less sensitive toward potential violation of distributional assumptions. Results The graphical examination of both the distributions of ICC and probit-transformed ICC (PT-ICC) clearly shows that there are two components in the distributions. For ICC measurements, which are between 0 and 1, the practice in the literature has been to assume that the data points are from a beta-mixture distribution. Nevertheless, in our study we show that the use of a normal-mixture modeling approach on PT-ICC can provide superior performance. Conclusions When modeling ICC values of gene expression levels, using a mixture of normals on the probit-transformed (PT) scale is less sensitive toward model mis-specification than using a mixture of betas. We show that a biased conclusion could be made if we follow the traditional approach and model the two sets of ICC values using the mixture of betas directly. The problematic estimation arises from the sensitivity of beta-mixtures toward model mis-specification, particularly when there are observations in the neighborhood of the boundary points, 0 or 1. Since beta-mixture modeling
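The probit-transform-then-normal-mixture approach advocated in the conclusions can be sketched as follows. The beta components used to simulate ICC values are arbitrary choices, and the EM updates are the textbook two-component normal mixture, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def em_normal_mixture(x, n_iter=200):
    """Two-component normal mixture fitted by EM (for probit-transformed ICCs)."""
    w = 0.5                                   # mixing weight of component 0
    mu = np.quantile(x, [0.25, 0.75])         # spread-out initial means
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each point
        d0 = w * stats.norm.pdf(x, mu[0], sd[0])
        d1 = (1 - w) * stats.norm.pdf(x, mu[1], sd[1])
        r = d0 / (d0 + d1)
        # M-step: weighted moment updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                      np.average((x - mu[1]) ** 2, weights=1 - r)])
    return w, mu, sd

# Simulate ICC values from two beta components, probit-transform, then fit
rng = np.random.default_rng(4)
icc = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 300)])
pt_icc = stats.norm.ppf(np.clip(icc, 1e-6, 1 - 1e-6))   # probit transform
w, mu, sd = em_normal_mixture(pt_icc)
```

The probit transform moves observations away from the 0/1 boundary, which is exactly where the abstract says beta-mixture fitting becomes fragile.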

  16. Demand and welfare effects in recreational travel models

    DEFF Research Database (Denmark)

    Hellström, Jörgen; Nordström, Leif Jonas

    for the households' welfare loss. Approximating the welfare loss by the change in consumer surplus, accounting for the positive effect from longer stays, imposes a lower bound on the households' welfare loss. From a distributional point of view, the results reveal that the CO2 tax reform is regressive, in the sense ... In the empirical study, a bivariate zero-inflated Poisson lognormal regression model is introduced in order to accommodate the large number of zeroes in the sample. The welfare analysis reveals that the equivalent variation (EV) measure, for the count data demand system, can be seen as an upper bound ...

  17. Model fit after pairwise maximum likelihood

    Directory of Open Access Journals (Sweden)

    M. T. eBarendse

    2016-04-01

    Full Text Available Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.
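A minimal sketch of the pairwise maximum likelihood idea for binary items under a one-factor probit model: each pair of items contributes the multinomial log-likelihood of its 2x2 table, with cell probabilities from the bivariate normal CDF and tetrachoric correlation lambda_i * lambda_j. The simulated loadings and thresholds are invented for illustration.

```python
import numpy as np
from itertools import combinations
from scipy import stats, optimize

def pml_negloglik(params, Y):
    """Pairwise ML objective for a one-factor probit model: item i is 1 when
    lambda_i * f + sqrt(1 - lambda_i^2) * e_i > tau_i, so the tetrachoric
    correlation of items i and j is lambda_i * lambda_j."""
    k = Y.shape[1]
    tau, lam = params[:k], np.tanh(params[k:])   # keep |loadings| < 1
    nll = 0.0
    for i, j in combinations(range(k), 2):
        rho = lam[i] * lam[j]
        p11 = stats.multivariate_normal.cdf(
            [-tau[i], -tau[j]], cov=[[1.0, rho], [rho, 1.0]])
        p1_ = stats.norm.cdf(-tau[i])            # P(item i = 1)
        p_1 = stats.norm.cdf(-tau[j])            # P(item j = 1)
        probs = np.clip([1 - p1_ - p_1 + p11,    # cells 00, 01, 10, 11
                         p_1 - p11, p1_ - p11, p11], 1e-12, 1.0)
        yi, yj = Y[:, i], Y[:, j]
        counts = [((yi == 0) & (yj == 0)).sum(), ((yi == 0) & (yj == 1)).sum(),
                  ((yi == 1) & (yj == 0)).sum(), ((yi == 1) & (yj == 1)).sum()]
        nll -= np.dot(counts, np.log(probs))
    return nll

# Simulate three binary items from a one-factor model, then fit by PML
rng = np.random.default_rng(5)
n = 800
lam_true, tau_true = np.array([0.7, 0.6, 0.8]), np.array([0.0, 0.3, -0.2])
f = rng.normal(size=n)
Z = lam_true * f[:, None] + np.sqrt(1 - lam_true**2) * rng.normal(size=(n, 3))
Y = (Z > tau_true).astype(int)
res = optimize.minimize(pml_negloglik, np.zeros(6), args=(Y,),
                        method="Nelder-Mead", options={"maxiter": 3000})
tau_hat, lam_hat = res.x[:3], np.tanh(res.x[3:])
```

Replacing the full 2^k-pattern likelihood with these pairwise tables is what makes the estimation tractable; the fit criteria proposed in the record then assess how well the pairwise tables are reproduced.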

  18. Children's emotional and behavioral problems and their mothers' labor supply.

    Science.gov (United States)

    Richard, Patrick; Gaskin, Darrell J; Alexandre, Pierre K; Burke, Laura S; Younis, Mustafa

    2014-01-01

    It has been documented that about 20% of children and adolescents suffer from a diagnosable mental or addictive disorder in the United States. The high prevalence of children's emotional and behavioral problems (EBP) might have a negative effect on their mothers' labor market outcomes because children with EBP require additional time for treatment. However, these children may require additional financial resources, which might promote mothers' labor supply. Previous studies have only considered chronic conditions in analyzing the impact of children's health on parental work activities. Moreover, most of these studies have not accounted for endogeneity in children's health. This article estimates the effects of children's EBP on their mothers' labor supply by family structure while accounting for endogeneity in children's health. We used the 1997 and 2002 Child Development Supplements (CDS) to the Panel Study of Income Dynamics (PSID). We used probit and bivariate probit models to estimate mothers' probability of employment, and tobit and instrumental variable tobit models to estimate the effects of children's EBP on their mothers' work hours. Findings show negative effects of children's EBP on their married mothers' employment and on their single mothers' work hours. © The Author(s) 2014.

  19. Children’s Emotional and Behavioral Problems and Their Mothers’ Labor Supply

    Directory of Open Access Journals (Sweden)

    Patrick Richard PhD

    2014-11-01

    Full Text Available It has been documented that about 20% of children and adolescents suffer from a diagnosable mental or addictive disorder in the United States. The high prevalence of children’s emotional and behavioral problems (EBP) might have a negative effect on their mothers’ labor market outcomes because children with EBP require additional time for treatment. However, these children may require additional financial resources, which might promote mothers’ labor supply. Previous studies have only considered chronic conditions in analyzing the impact of children’s health on parental work activities. Moreover, most of these studies have not accounted for endogeneity in children’s health. This article estimates the effects of children’s EBP on their mothers’ labor supply by family structure while accounting for endogeneity in children’s health. We used the 1997 and 2002 Child Development Supplements (CDS) to the Panel Study of Income Dynamics (PSID). We used probit and bivariate probit models to estimate mothers’ probability of employment, and tobit and instrumental variable tobit models to estimate the effects of children’s EBP on their mothers’ work hours. Findings show negative effects of children’s EBP on their married mothers’ employment and on their single mothers’ work hours.

  20. Hedging effectiveness and volatility models for crude oil market: a dynamic approach; Modelos de volatilidade e a efetividade do hedge no mercado de petroleo: um abordagem dinamica

    Energy Technology Data Exchange (ETDEWEB)

    Salles, Andre Assis de [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2012-07-01

    Hedge strategies allow traders holding short and long positions in the market to protect themselves against price fluctuations. This paper examines the performance of bivariate volatility models for the spot and futures returns of the West Texas Intermediate (WTI) crude oil barrel price. Besides the volatility of the spot and futures return series, the hedge ratio strategy is examined through its hedge effectiveness. Thus this study presents hedge strategies built from methodologies applied to modeling the variance of returns of crude oil prices in the spot and futures markets, and the covariance between these two market returns, which are the inputs of the hedge strategy shown in this work. Among the studied models, the bivariate GARCH in Diagonal VECH and BEKK representations was chosen, using three different models for the mean: a bivariate autoregressive model, a vector autoregressive model, and a vector error correction model. The methodologies used here relax the assumptions of homoscedasticity and normality for the return distributions. The data are logarithmic returns of daily prices quoted in dollars per barrel from November 2008 to May 2010 for spot and futures contracts, in particular the June contract. (author)
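The quantities the bivariate volatility models feed into can be shown in a static sketch: the minimum-variance hedge ratio h* = Cov(spot, futures) / Var(futures), and hedge effectiveness as the share of spot-return variance removed. The dynamic versions in the paper replace these unconditional moments with conditional ones from Diagonal VECH/BEKK GARCH; the returns below are simulated, not WTI data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
common = rng.normal(0, 0.02, n)                    # shared oil-price shocks
spot = common + rng.normal(0, 0.005, n)            # spot log-returns
fut = common + rng.normal(0, 0.004, n)             # futures log-returns

h_star = np.cov(spot, fut)[0, 1] / np.var(fut)     # minimum-variance ratio
hedged = spot - h_star * fut                       # hedged portfolio return
effectiveness = 1 - np.var(hedged) / np.var(spot)  # variance reduction share
```

In the dynamic setting, h* is recomputed each day from the model's conditional covariance matrix, so the ratio tracks changing market volatility.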

  1. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals that show whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given the ways in which passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, we believe that this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted in order to collect the opinions of the passengers about the existing transit service, and to obtain a measure of the aspects that can influence the intentions of the users to continue using the transit service in the future. (Author)

  2. Does prenatal care benefit maternal health? A study of post-partum maternal care use.

    Science.gov (United States)

    Liu, Tsai-Ching; Chen, Bradley; Chan, Yun-Shan; Chen, Chin-Shyan

    2015-10-01

    Most studies on prenatal care focus on its effects on infant health, paying less attention to its effects on maternal health. Using the Longitudinal Health Insurance claims data in Taiwan in a recursive bivariate probit model, this study examines the impact of adequate prenatal care on the probability of post-partum maternal hospitalization during the first 6 months after birth. The results show that adequate prenatal care significantly reduces the probability of post-partum maternal hospitalization among women who have had a vaginal delivery by 43.8%. This finding suggests that the benefits of prenatal care may have been underestimated among women with vaginal delivery. Timely and adequate prenatal care not only creates a positive impact on infant health, but also yields significant benefits for post-partum maternal health. However, we do not find similar benefits of prenatal care for women undergoing a cesarean section. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Breathtaking or stagnation?

    DEFF Research Database (Denmark)

    Sauer, Johannes; Graversen, Jesper Tranbjerg; Park, Tim

    2006-01-01

    techniques. Finally, we conclude on the significance of subsidies for promoting long-term growth in organic production by estimating a bootstrapped bivariate probit model with respect to factors influencing the probability of organic market exit. The results revealed significant differences in the organic farms' technical efficiency, no significant total factor productivity growth, and even a slightly negative rate of technical change in the period investigated. We found evidence for a positive relationship between subsidy payments and an increase in farm efficiency, technology improvements and a decreasing probability of organic market exit, which was also confirmed for off-farm income. Finally, the general index model specification was found to deliver a more accurate mapping of total factor productivity growth.

  4. Recent productivity developments and technical change in Danish organic farming - stagnation?

    DEFF Research Database (Denmark)

    Sauer, Johannes; Graversen, Jesper Tranbjerg; Park, Tim

    This paper attempts to quantitatively measure the change in the productivity of Danish organic farming in recent years by using panel data on 56 organic farms mainly engaged in milk production for the period 2002 to 2004. Based on a translog production frontier framework, the technical and scale ... growth in organic production by estimating a bootstrapped bivariate probit model with respect to factors influencing the probability of organic market exit. The results revealed significant differences in the organic farms’ technical efficiencies, no significant total factor productivity growth ... and even a slightly negative rate of technical change in the period investigated. These empirical results do not seem strong enough to support the view of a profound stagnation in organic milk farming over recent years. We found evidence for a positive relationship between subsidy payments and an increase ...

  5. HIV Testing Among Young People Aged 16-24 in South Africa: Impact of Mass Media Communication Programs.

    Science.gov (United States)

    Do, Mai; Figueroa, Maria Elena; Lawrence Kincaid, D

    2016-09-01

    Knowing one's serostatus is critical in the HIV prevention, care and treatment continuum. This study examines the impact of communication programs on HIV testing in South Africa. Data came from 2204 young men and women aged 16-24 who reported being sexually active in a population-based survey. Structural equation modeling was used to test the directions and causal pathways between communication program exposure, HIV testing discussion, and having a test in the last 12 months. Bivariate and multivariate probit regressions provided evidence of exogeneity of communication exposure and the two HIV-related outcomes. One in three sampled individuals had been tested in the last 12 months. Communication program exposure had only an indirect effect on getting tested, by encouraging young people to talk about testing. The study suggests that communication programs may create an environment that supports open HIV-related discussions and may have a long-term impact on behavior change.

  6. Contractual arrangements and food quality certifications in the Mexican avocado industry

    Directory of Open Access Journals (Sweden)

    J. J. Arana-Coronado

    2013-03-01

    Full Text Available The adoption of private quality certifications in agrifood supply chains often requires specific investments by producers, which can be safeguarded by choosing specific contractual arrangements. Based on survey data from avocado producers in Mexico, this paper analyzes the impact of transaction costs and relationship characteristics on the joint choice of contractual arrangements and quality certifications. Using a bivariate probit model, it shows that a producer’s decision to adopt private quality certifications is directly linked to high levels of asset specificity and price. In order to safeguard this high level of specificity in the presence of low levels of price uncertainty, producers have relied on relational governance supported by the expectation of continuity in their bilateral relationships with buyers.

  7. Income inequality, perceived happiness, and self-rated health: evidence from nationwide surveys in Japan.

    Science.gov (United States)

    Oshio, Takashi; Kobayashi, Miki

    2010-05-01

    In this study, we examined how regional inequality is associated with perceived happiness and self-rated health at an individual level by using micro-data from nationwide surveys in Japan. We estimated the bivariate ordered probit models to explore the associations between regional inequality and two subjective outcomes, and evaluated effect modification to their sensitivities to regional inequality using the categories of key individual attributes. We found that individuals who live in areas of high inequality tend to report themselves as both unhappy and unhealthy, even after controlling for various individual and regional characteristics and taking into account the correlation between the two subjective outcomes. Gender, age, educational attainment, income, occupational status, and political views modify the associations of regional inequality with the subjective assessments of happiness and health. Notably, those with an unstable occupational status are most affected by inequality when assessing both perceived happiness and health. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    Science.gov (United States)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers deciding how to get to their destination, discrete choice models based on random utility theory have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they were seldom used in practice until recently. The specification of random coefficients, however, began with the probit and hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The model is estimated by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
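A minimal sketch of the two ingredients named in this abstract: a Box-Cox transform of an attribute (with its exponent treated as a model parameter) and a mixed logit choice probability approximated by averaging logit probabilities over random-coefficient draws. All parameter values are invented for illustration.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform; in the model the exponent lam is itself estimated."""
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1) / lam

def simulated_choice_prob(cost, time, lam, beta_cost, beta_time_mu,
                          beta_time_sd, n_draws=2000, seed=7):
    """Mixed logit probability of alternative 0 over 1: the time coefficient
    is random (normal), time enters through a Box-Cox transform, and the
    choice-probability integral is approximated by averaging over draws."""
    rng = np.random.default_rng(seed)
    beta_time = rng.normal(beta_time_mu, beta_time_sd, n_draws)
    v = [beta_cost * c + beta_time * box_cox(t, lam)
         for c, t in zip(cost, time)]            # utility per alternative/draw
    p0 = 1 / (1 + np.exp(v[1] - v[0]))           # binary logit, per draw
    return p0.mean()                             # simulated probability

# Alternative 0 is cheaper and faster, so it should be chosen more often
p = simulated_choice_prob(cost=[1.0, 2.0], time=[10.0, 20.0],
                          lam=0.5, beta_cost=-0.5, beta_time_mu=-0.1,
                          beta_time_sd=0.05)
```

Maximising the simulated log-likelihood over (lam, beta_cost, beta_time_mu, beta_time_sd) across observed choices is the estimation step the abstract describes.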

  9. A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics

    National Research Council Canada - National Science Library

    Jesneck, Jonathan

    2007-01-01

    .... We implemented various classification algorithms under 100-fold cross-validation, including Bayesian probit regression, iterated Bayesian model averaging, linear discriminant analysis, artificial...

  10. A multivariate conditional model for streamflow prediction and spatial precipitation refinement

    Science.gov (United States)

    Liu, Zhiyong; Zhou, Ping; Chen, Xiuzhi; Guan, Yinghui

    2015-10-01

    The effective prediction and estimation of hydrometeorological variables are important for water resources planning and management. In this study, we propose a multivariate conditional model for streamflow prediction and the refinement of spatial precipitation estimates. This model consists of high-dimensional vine copulas, conditional bivariate copula simulations, and a quantile-copula function. The vine copula is employed because of its flexibility in modeling the high-dimensional joint distribution of multivariate data by building a hierarchy of conditional bivariate copulas. We investigate two cases to evaluate the performance and applicability of the proposed approach. In the first case, we generate one-month-ahead streamflow forecasts that incorporate multiple predictors, including antecedent precipitation and streamflow records, in a basin located in South China. The prediction accuracy of the vine-based model is compared with that of traditional data-driven models such as support vector regression (SVR) and the adaptive neuro-fuzzy inference system (ANFIS). The results indicate that the proposed model produces more skillful forecasts than SVR and ANFIS. Moreover, this probabilistic model yields additional information concerning the predictive uncertainty. The second case involves refining spatial precipitation estimates derived from the Tropical Rainfall Measuring Mission precipitation product for the Yangtze River basin by incorporating remotely sensed soil moisture data and the observed precipitation from meteorological gauges over the basin. The validation results indicate that the proposed model successfully refines the spatial precipitation estimates. Although this model is tested for specific cases, it can be extended to other hydrometeorological variables for prediction and spatial estimation.

  11. Modeling marrow damage from response data: Morphallaxis from radiation biology to benzene toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Jones, T.D.; Morris, M.D.; Hasan, J.S.

    1995-12-01

    Consensus principles from radiation biology were used to describe a generic set of nonlinear, first-order differential equations for modeling of toxicity-induced compensatory cell kinetics in terms of sublethal injury, repair, direct killing, killing of cells with unrepaired sublethal injury, and repopulation. This cellular model was linked to a probit model of hematopoietic mortality that describes death from infection and/or hemorrhage between {approximately} 5 and 30 days. Mortality data from 27 experiments with 851 dose-response groups, in which doses were protracted by rate and/or fractionation, were used to simultaneously estimate all rate constants by maximum-likelihood methods. Data used represented 18,940 test animals distributed as follows: mice, 12,827; rats, 2,925; sheep, 1,676; swine, 829; dogs, 479; and burros, 204. Although a long-term, repopulating hematopoietic stem cell is ancestral to all lineages needed to restore normal homeostasis, the dose-response data from the protracted irradiations indicate clearly that the particular lineage that is ``critical`` to hematopoietic recovery does not resemble stem-like cells with regard to radiosensitivity and repopulation rates. Instead, the weakest link in the chain of hematopoiesis was found to have an intrinsic radioresistance equal to or greater than that of stromal cells and to repopulate at the same rates. Model validation has been achieved by predicting the LD{sub 50} and/or fractional group mortality in 38 protracted-dose experiments (rats and mice) that were not used in the fitting of model coefficients.
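The probit mortality component linked to the cellular model can be sketched as a standard probit dose-response fit: P(death) = Phi(a + b * log10(dose)), estimated by binomial maximum likelihood, with the LD50 read off where the linear predictor equals zero. The grouped data below are invented, not the 18,940-animal dataset.

```python
import numpy as np
from scipy import stats, optimize

def probit_negloglik(params, dose, deaths, n):
    """Binomial negative log-likelihood of a probit dose-mortality curve
    P(death) = Phi(a + b * log10(dose))."""
    a, b = params
    p = np.clip(stats.norm.cdf(a + b * np.log10(dose)), 1e-10, 1 - 1e-10)
    return -(deaths * np.log(p) + (n - deaths) * np.log(1 - p)).sum()

# Illustrative grouped mortality data (dose in Gy, groups of 40 animals)
dose = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
n = np.full(5, 40)
deaths = np.array([1, 8, 22, 33, 39])
res = optimize.minimize(probit_negloglik, [0.0, 1.0],
                        args=(dose, deaths, n), method="Nelder-Mead")
a_hat, b_hat = res.x
ld50 = 10 ** (-a_hat / b_hat)    # dose at which Phi(.) = 0.5
```

The paper's model is richer (the probit is driven by cell-kinetic state rather than raw dose), but the LD50 prediction used for validation comes from the same Phi-inverse arithmetic.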

  12. Modeling marrow damage from response data: Morphallaxis from radiation biology to benzene toxicity

    International Nuclear Information System (INIS)

    Jones, T.D.; Morris, M.D.; Hasan, J.S.

    1995-01-01

    Consensus principles from radiation biology were used to describe a generic set of nonlinear, first-order differential equations for modeling of toxicity-induced compensatory cell kinetics in terms of sublethal injury, repair, direct killing, killing of cells with unrepaired sublethal injury, and repopulation. This cellular model was linked to a probit model of hematopoietic mortality that describes death from infection and/or hemorrhage between ~5 and 30 days. Mortality data from 27 experiments with 851 dose-response groups, in which doses were protracted by rate and/or fractionation, were used to simultaneously estimate all rate constants by maximum-likelihood methods. Data used represented 18,940 test animals distributed according to: (mice, 12,827); (rats, 2,925); (sheep, 1,676); (swine, 829); (dogs, 479); and (burros, 204). Although a long-term, repopulating hematopoietic stem cell is ancestral to all lineages needed to restore normal homeostasis, the dose-response data from the protracted irradiations indicate clearly that the particular lineage that is "critical" to hematopoietic recovery does not resemble stem-like cells with regard to radiosensitivity and repopulation rates. Instead, the weakest link in the chain of hematopoiesis was found to have an intrinsic radioresistance equal to or greater than stromal cells and to repopulate at the same rates. Model validation has been achieved by predicting the LD50 and/or fractional group mortality in 38 protracted-dose experiments (rats and mice) that were not used in the fitting of model coefficients.

  13. [Effect of ethanol extract from Matbuhi Aftimun on blood lipid level in rat hyperlipidemia model].

    Science.gov (United States)

    Islam, Rabigul; Mamat, Yultuz; Rapkat, Haximjan

    2010-07-01

    To investigate the acute toxicity, lipid-reducing effect and mechanism of action of the ethanol extract of Matbuhi Aftimun (E-MA), a classic prescription of Uighur medicine, in a hyperlipidemia rat model. The LD50 or maximum tolerance of rats to E-MA was determined by the simplified probit method. The hyperlipidemia rat model was established in SD rats by feeding a high-lipid emulsion, then E-MA at different dosages (0.80 g/kg, 1.60 g/kg and 3.20 g/kg) was given orally. The effects of E-MA on the model rats' serum lipids, including total cholesterol (TC), low density lipoprotein-cholesterol (LDL-C), high density lipoprotein-cholesterol (HDL-C) and triglyceride (TG), were observed. Its effects on malondialdehyde (MDA) content and on superoxide dismutase (SOD), glutathione peroxidase (GSH-PX) and total lipase, including lipoprotein lipase (LPL) and hepato-lipase (HL), activities in the liver homogenate were also assayed. The maximum tolerance of rats to E-MA was 64 g (crude drug)/kg. Compared with the hyperlipidemia model rats, the blood TC level was lower (P < 0.05), while no significant effects were found on the levels of SOD, GSH-PX and total lipase in the liver homogenate (P > 0.05). E-MA shows a serum TC-reducing effect in the hyperlipidemia rat model with low toxicity in mice.

  14. Modelling asset correlations during the recent financial crisis: A semiparametric approach

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Casas, Isabel

    This article proposes alternatives to the Dynamic Conditional Correlation (DCC) model to study assets' correlations during the recent financial crisis. In particular, we adopt a semiparametric and nonparametric approach to estimate the conditional correlations for two interesting portfolios...... multivariate simulations in addition to the bivariate ones. Our simulation results show that the semiparametric and nonparametric models are best in DGPs with gradual changes or structural breaks in correlations. However, in DGPs with rapid changes or constancy in correlations the DCC delivers the best outcome...

  15. Risk factors associated with the practice of child marriage among Roma girls in Serbia.

    Science.gov (United States)

    Hotchkiss, David R; Godha, Deepali; Gage, Anastasia J; Cappa, Claudia

    2016-02-01

    Relatively little research on the issue of child marriage has been conducted in European countries where the overall prevalence of child marriage is relatively low, but relatively high among marginalized ethnic sub-groups. The purpose of this study is to assess the risk factors associated with the practice of child marriage among females living in Roma settlements in Serbia and among the general population and to explore the inter-relationship between child marriage and school enrollment decisions. The study is based on data from a nationally representative household survey in Serbia conducted in 2010 and a separate survey of households living in Roma settlements in the same year. For each survey, we estimated a bivariate probit model of risk factors associated with being currently married and currently enrolled in school based on girls 15 to 17 years of age in the nationally representative and Roma settlements samples. The practice of child marriage among the Roma was found to be most common among girls who lived in poorer households, who had less education, and who lived in rural locations. The results of the bivariate probit analysis suggest that, among girls in the general population, decisions about child marriage and school attendance are inter-dependent in that common unobserved factors were found to influence both decisions. However, among girls living in Roma settlements, there is only weak evidence of simultaneous decision making. The study finds evidence of the interdependence between marriage and school enrollment decisions among the general population and, to a lesser extent, among the Roma. Further research is needed on child marriage among the Roma and other marginalized sub-groups in Europe, and should be based on panel data, combined with qualitative data, to assess the role of community-level factors and the characteristics of households where girls grow up on child marriage and education decisions.
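A bivariate probit of the kind estimated here can be written down from scratch: two probit equations whose latent errors share a correlation rho, with the log-likelihood built from the bivariate normal CDF over the four outcome patterns. The simulated data, coefficients, and variable names below are illustrative, not the Serbian survey.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

# Simulated stand-in for the survey: x is a household covariate, and the two
# binary outcomes share correlated latent errors (rho_true).
rng = np.random.default_rng(7)
n, rho_true = 400, 0.5
x = rng.normal(size=n)
eps = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
married = (-0.3 - 0.8 * x + eps[:, 0] > 0).astype(int)
enrolled = (0.4 + 0.9 * x + eps[:, 1] > 0).astype(int)

def negloglik(theta):
    a0, a1, b0, b1, arho = theta
    rho = np.tanh(arho)  # keeps the correlation inside (-1, 1)
    xb1, xb2 = a0 + a1 * x, b0 + b1 * x
    q1, q2 = 2 * married - 1, 2 * enrolled - 1
    ll = 0.0
    # Group the four (y1, y2) outcome patterns: each shares one signed
    # correlation, so the bivariate CDF is evaluated on a whole group at once.
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            m = (q1 == s1) & (q2 == s2)
            if not m.any():
                continue
            cov = [[1.0, s1 * s2 * rho], [s1 * s2 * rho, 1.0]]
            p = multivariate_normal([0.0, 0.0], cov).cdf(
                np.column_stack([s1 * xb1[m], s2 * xb2[m]]))
            ll += np.log(np.clip(p, 1e-300, None)).sum()
    return -ll

res = minimize(negloglik, np.zeros(5), method="BFGS")
rho_hat = np.tanh(res.x[-1])  # estimated error correlation between the decisions
```

A significantly nonzero rho_hat is exactly the "common unobserved factors influence both decisions" finding reported for the general-population sample.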

  16. Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Dias, Sofia; Sutton, Alex J; Ades, A E; Welton, Nicky J

    2013-07-01

    We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
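For the binomial-logit case, the simplest fixed-effect synthesis pools trial-level log odds ratios by inverse-variance weighting; the paper's framework does this within a Bayesian GLM estimated by MCMC, so the frequentist sketch below is only an illustration, and the trial counts are made up.

```python
import numpy as np

# Made-up 2x2 counts from three trials: (events, total) in treatment and control.
trials = [((15, 100), (25, 100)),
          ((8, 60), (14, 60)),
          ((30, 200), (45, 200))]

log_or, var = [], []
for (et, nt), (ec, nc) in trials:
    a, b = et, nt - et          # treatment events / non-events
    c, d = ec, nc - ec          # control events / non-events
    log_or.append(np.log(a * d / (b * c)))
    var.append(1/a + 1/b + 1/c + 1/d)   # Woolf variance of a log odds ratio

w = 1.0 / np.array(var)                 # inverse-variance weights
pooled = float((w * np.array(log_or)).sum() / w.sum())
se = float(np.sqrt(1.0 / w.sum()))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% interval, log-OR scale
```

A random-effects version adds a between-trial variance to each weight; switching the link function (identity, log, cloglog, probit) changes only the scale on which trial effects are pooled, which is the unification the paper emphasizes.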

  17. Consequence modeling of fire on Methane storage tanks in a gas refinery

    Directory of Open Access Journals (Sweden)

    Sara Shahedi ali abadi

    2016-06-01

    Full Text Available Introduction: In the use of fossil fuels, hazards such as explosion and fire are probable. This study aimed at consequence modeling of fire on methane storage tanks in a gas refinery by analyzing the risk and modeling and evaluating the related consequences. Method: Hazard analysis by PHA was used to choose the worst-case scenario. Then, causes of the scenario were determined by FTA. After that, consequence modeling with the PHAST software was applied for the consequence analysis. Results: Based on some criteria, the fire of the methane gas tank (V-100) was selected as the worst-case scenario at the refinery. The qualitative fault tree showed that three factors, including mechanical, process, and human failures, contribute to gas leakage. The leakage size and weather conditions affected the distance of radiation. Using consequence modeling, thermal radiation was considered the major outcome of the incident. Finally, for outcome evaluation, probit equations were used to quantify losses and the percentage of fatalities due to the methane gas leakage and fire occurrence. The maximum number of fatalities caused by fire was estimated at 23 persons. Conclusions: The methane gas vessel in the refinery can be considered the main center of hazard; therefore, implementation of safety rules, elimination of mechanical failures, personal protection and education, and effective measures to prevent and fight fire are proposed for decreasing the probable losses and fatalities.
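Probit equations of the sort used in the outcome evaluation map a thermal dose to a fatality fraction. The sketch below uses the widely cited Eisenberg-type lethality probit for thermal radiation; whether the study's PHAST workflow applies this exact equation is an assumption.

```python
import numpy as np
from scipy.stats import norm

def thermal_fatality_fraction(intensity_w_m2, exposure_s):
    """Eisenberg-type lethality probit for thermal radiation:
    Y = -14.9 + 2.56 * ln(t * I**(4/3) / 1e4), fatality fraction = Phi(Y - 5),
    with I in W/m^2 and t in seconds."""
    dose = exposure_s * intensity_w_m2 ** (4.0 / 3.0) / 1e4
    y = -14.9 + 2.56 * np.log(dose)
    return norm.cdf(y - 5.0)

# Example: a 20 s exposure over a range of heat fluxes (W/m^2).
flux = np.array([4_000.0, 12_500.0, 37_500.0])
frac = thermal_fatality_fraction(flux, 20.0)
```

For a 20 s exposure the highest flux here sits near the 50% lethality level, while fluxes of a few kW/m^2 give essentially zero predicted fatalities; multiplying such fractions by the exposed population at each radiation contour yields the fatality counts reported in studies like this one.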

  18. Latent variable modelling of risk factors associated with childhood diseases: Case study for Nigeria

    Directory of Open Access Journals (Sweden)

    Khaled Khatab

    2011-09-01

    Full Text Available Objective: To investigate the impact of various bio-demographic and socio-economic variables on joint childhood diseases in Nigeria with flexible geoadditive probit models. Methods: A geoadditive latent variable model (LVM) was applied in which the three observable disease variables (diarrhea, cough, fever) were modelled as indicators for the latent individual variable "health status" or "frailty" of a child. This modelling approach allowed us to investigate the common influence of risk factors on individual frailties of children, thereby automatically accounting for association between diseases as indicators of health status. The LVM was extended to analyze the impact of risk factors and spatial effects on the unobservable variable "health status" of a child less than 5 years of age using the 2003 Demographic and Health Surveys (DHS) data for Nigeria. Results: The results suggest some strong underlying spatial patterns of the three ailments, with a clear southeastern divide of childhood morbidities; this might result from the overlapping of the various risk factors. Conclusions: Comorbidity with conditions such as cough, diarrhoea and fever is common in Nigeria. However, little is known about common risk factors and geographical overlaps in these illnesses. The search for overlapping common risk factors and their spatial effects may improve our understanding of the etiology of the diseases for efficient and cost-effective control and planning of the three ailments.

  19. Modeling and forecasting petroleum futures volatility

    International Nuclear Information System (INIS)

    Sadorsky, Perry

    2006-01-01

    Forecasts of oil price volatility are important inputs into macroeconometric models, financial market risk assessment calculations like value at risk, and option pricing formulas for futures contracts. This paper uses several different univariate and multivariate statistical models to estimate forecasts of daily volatility in petroleum futures price returns. The out-of-sample forecasts are evaluated using forecast accuracy tests and market timing tests. The TGARCH model fits well for heating oil and natural gas volatility and the GARCH model fits well for crude oil and unleaded gasoline volatility. Simple moving average models seem to fit well in some cases provided the correct order is chosen. Despite the increased complexity, models like state space, vector autoregression and bivariate GARCH do not perform as well as the single equation GARCH model. Most models outperform a random walk and there is evidence of market timing. Parametric and non-parametric value at risk measures are calculated and compared. Non-parametric models outperform the parametric models in terms of the number of exceedances in backtests. These results are useful for anyone needing forecasts of petroleum futures volatility. (author)
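The single-equation GARCH model that performs well here can be sketched as a GARCH(1,1) variance recursion with a one-step-ahead forecast; the parameter values and simulated return series below are illustrative, not estimates from the petroleum futures data.

```python
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    """Conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()   # a common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulate a GARCH(1,1) return series with known (illustrative) parameters.
rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.08, 0.90      # persistence alpha + beta = 0.98
n = 5000
r = np.empty(n)
s2 = omega / (1 - alpha - beta)            # start at the long-run variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sigma2 = garch11_filter(r, omega, alpha, beta)
forecast = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]  # one-step variance
uncond = omega / (1 - alpha - beta)        # level the forecasts revert to
```

A parametric value-at-risk figure then follows directly from the forecast, e.g. 1.645 * sqrt(forecast) for a 95% one-day VaR under normality.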

  20. Modeling distribution and abundance of soybean aphid in soybean fields using measurements from the surrounding landscape.

    Science.gov (United States)

    Bahlai, C A; Sikkema, S; Hallett, R H; Newman, J; Schaafsma, A W

    2010-02-01

    Soybean aphid (Aphis glycines Matsumura) is a severe pest of soybean in central North America. Outbreaks of the aphid in Ontario are often spotty in distribution, with some geographical areas affected severely and others with few or no aphid populations occurring in soybean for the duration of the season. A. glycines spend summers on soybean and overwinter on buckthorn, a shrub that is widespread in southern Ontario and is commonly found in agricultural hedgerows and at the margins of woodlots. A. glycines likely use both short distance migratory flights from buckthorn and longer distance dispersal flights in the search for acceptable summer hosts. This study aims to model colonization of soybean fields by A. glycines engaged in early-season migration from overwintering hosts. Akaike's information criterion (AIC) was used to rank numerous competing linear and probit models using field parameters to predict aphid presence, colonization, and density. The variable that best modeled aphid density in soybean fields in the early season was the ratio of buckthorn density to field area, although dramatic differences in relationships between the parameters were observed between study years. This study has important applications in predicting areas that are at elevated risk of developing economically damaging populations of soybean aphid and which may act as sources for further infestation.
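AIC ranking of competing candidate models, as used in this study, reduces to computing 2k - 2*loglik for each fit and preferring the smaller value. The sketch below compares two Gaussian linear models on synthetic data in which only one predictor actually drives aphid density; the data, the predictor names, and the second (irrelevant) predictor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 120
buckthorn_ratio = rng.uniform(0, 5, n)  # buckthorn density / field area (synthetic)
hedgerow = rng.uniform(0, 1, n)         # a competing predictor with no real effect
aphid_density = 2.0 + 1.5 * buckthorn_ratio + rng.normal(0, 1.0, n)

def aic_ols(y, x):
    """AIC of a Gaussian linear model fit by OLS; k counts the two
    coefficients plus the error variance."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)                      # ML variance estimate
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1
    return 2 * k - 2 * loglik

aic_buckthorn = aic_ols(aphid_density, buckthorn_ratio)
aic_hedgerow = aic_ols(aphid_density, hedgerow)
# the predictor that actually generated the data should have the lower AIC
```

The same comparison extends to the probit models in the study by swapping in the binomial log-likelihood; AIC differences remain comparable because the response data are the same.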

  1. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

    Full Text Available The purpose of this article is to give a new approach to calculating the probability of returning a loan. A lot of factors affect the value of this probability. In this article, several influencing factors are examined using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions. A new aspect of credit risk analysis is given. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the increase of the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year and decreases for people born at the end of the year.


  2. A Capacity-Restraint Transit Assignment Model When a Predetermination Method Indicates the Invalidity of Time Independence

    Directory of Open Access Journals (Sweden)

    Haoyang Ding

    2015-01-01

    Full Text Available The statistical independence of the times of every two adjacent bus links plays a crucial role in deciding the feasibility of using many mathematical models to analyze urban transit networks. Traditional research generally ignores this time independence even though it is the foundation of the models involved: the assumption is usually made that time independence of every two adjacent links holds. This is, however, actually groundless and may lead to problematic conclusions from the corresponding models. Many transit assignment models, such as multinomial probit-based models, lose their validity when time independence does not hold. In this paper, a simple method to predetermine time independence is proposed. Based on the predetermination method, a modified capacity-restraint transit assignment method aimed at engineering practice is put forward and tested on a small contrived network and in a case study in Nanjing city, China. It is found that the slope of the regression equation between the mean and standard deviation of the normal distribution also serves as an indicator of time independence. Besides, the modified assignment method performs better than the traditional one, producing more reasonable results while retaining the property of simplicity.
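The predetermination idea, checking whether travel times on two adjacent links can be treated as independent before applying an assignment model that assumes it, can be sketched as a covariance check: under independence Var(T1 + T2) = Var(T1) + Var(T2). The simulated link times, the shared congestion component, and the correlation cut-off below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Simulated travel times on two adjacent links that share a congestion delay,
# so they are *not* independent (all values are illustrative).
rng = np.random.default_rng(5)
n = 2000
shared_delay = rng.exponential(2.0, n)
t_link1 = 5.0 + shared_delay + rng.normal(0, 0.5, n)
t_link2 = 4.0 + shared_delay + rng.normal(0, 0.5, n)

# Under independence Var(T1 + T2) = Var(T1) + Var(T2);
# the gap between the two sides is 2 * Cov(T1, T2).
var_sum = np.var(t_link1 + t_link2)
var_if_indep = np.var(t_link1) + np.var(t_link2)
corr = np.corrcoef(t_link1, t_link2)[0, 1]
independent = abs(corr) < 0.1   # illustrative cut-off for "independent enough"
```

When the check fails, as it does here by construction, a probit-based assignment that relies on independent link times is on shaky ground, which is the situation the modified capacity-restraint method is designed to handle.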

  3. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    Science.gov (United States)

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  4. A dose and time response Markov model for the in-host dynamics of infection with intracellular bacteria following inhalation: with application to Francisella tularensis.

    Science.gov (United States)

    Wood, R M; Egan, J R; Hall, I M

    2014-06-06

    In a novel approach, the standard birth-death process is extended to incorporate a fundamental mechanism undergone by intracellular bacteria, phagocytosis. The model accounts for stochastic interaction between bacteria and cells of the immune system and heterogeneity in susceptibility to infection of individual hosts within a population. Model output is the dose-response relation and the dose-dependent distribution of time until response, where response is the onset of symptoms. The model is thereafter parametrized with respect to the highly virulent Schu S4 strain of Francisella tularensis, in the first such study to consider a biologically plausible mathematical model for early human infection with this bacterium. Results indicate a median infectious dose of about 23 organisms, which is higher than previously thought, and an average incubation period of between 3 and 7 days depending on dose. The distribution of incubation periods is right-skewed up to about 100 organisms and symmetric for larger doses. Moreover, there are some interesting parallels to the hypotheses of some of the classical dose-response models, such as independent action (single-hit model) and individual effective dose (probit model). The findings of this study support experimental evidence and postulations from other investigations that response is, in fact, influenced by both in-host and between-host variability.

  5. Asymptotic analysis for a simple explicit estimator in Barndorff-Nielsen and Shephard stochastic volatility models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Posedel, Petra

    expressions for the asymptotic covariance matrix. We develop in detail the martingale estimating function approach for a bivariate model, that is not a diffusion, but admits jumps. We do not use ergodicity arguments. We assume that both, logarithmic returns and instantaneous variance are observed...... on a discrete grid of fixed width, and the observation horizon tends to infinity. This analysis is a starting point and benchmark for further developments concerning optimal martingale estimating functions, and for theoretical and empirical investigations, that replace the (actually unobserved) variance process...

  6. On the application of copula in modeling maintenance contract

    Science.gov (United States)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

    This paper deals with the application of copula in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.

  7. On the application of copula in modeling maintenance contract

    International Nuclear Information System (INIS)

    Iskandar, B P; Husniah, H

    2016-01-01

    This paper deals with the application of copula in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation. (paper)

  8. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM gives a better fit to the data, more reasonable parameter estimates, and smaller prediction error than TSM.
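Marquardt's method is what scipy.optimize.curve_fit uses by default for unbounded problems, so the one-step fit advocated here can be sketched directly. The aging model below (an Arrhenius-type rate in temperature, exponential decay in time) is an illustrative stand-in for the paper's rubber-aging model, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in for a rubber-aging model: property retention decays
# with time t at a temperature-dependent (Arrhenius-type) rate.
def model(X, A, B):
    t, T = X
    return np.exp(-A * np.exp(-B / T) * t)

rng = np.random.default_rng(2)
t = np.tile(np.arange(1.0, 11.0), 3)       # aging times (e.g. days)
T = np.repeat([333.0, 353.0, 373.0], 10)   # temperatures in kelvin
y = model((t, T), 5.0e3, 4.0e3) + rng.normal(0, 0.01, t.size)

# One-step fit of both parameters at once by Levenberg-Marquardt,
# curve_fit's default method for unbounded problems.
popt, pcov = curve_fit(model, (t, T), y, p0=[1.0e3, 3.0e3])
pred = model((t, T), *popt)
```

The two-step alternative would first fit a decay rate at each temperature separately and then regress those rates on temperature, losing efficiency relative to the joint fit, which mirrors the comparison the paper makes.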

  9. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  10. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    Science.gov (United States)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that the perceptions of attributes are characterized by benchmarks called "reference points" and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. It was also shown that respondents seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others rely on the fare they perceive for the travel cost.

  11. Comparative analysis of informal borrowing behaviour between ...

    African Journals Online (AJOL)

    Tools of analyses were descriptive statistics of mean and percentages and a probit model. The result of the probit model on the variables influencing the borrowing behaviour of male-headed households indicated that the coefficients of household size, farm size, purpose of borrowing, loan duration, interest rate and collateral ...

  12. Exploring unobserved household living conditions in multilevel choice modeling: An application to contraceptive adoption by Indian women.

    Directory of Open Access Journals (Sweden)

    José G Dias

    Full Text Available This research analyzes the effect of the poverty-wealth dimension on contraceptive adoption by Indian women when no direct measures of income/expenditures are available to use as covariates. The index, Household Living Conditions (HLC), is based on household assets and dwelling characteristics and is computed by an item response model simultaneously with the choice model in a new single-step approach. That is, the HLC indicator is treated as a latent covariate measured by a set of items, it depends on a set of concomitant variables, and explains contraceptive choices in a probit regression. Additionally, the model accounts for complex survey design and sample weights in a multilevel framework. Regarding our case study on contraceptive adoption by Indian women, results show that women with better household living conditions tend to adopt contraception more often than their counterparts. This effect is significant after controlling other factors such as education, caste, and religion. The external validation of the indicator shows that it can also be used at aggregate levels of analysis (e.g., county or state) whenever no other indicators of household living conditions are available.

  13. Exploring unobserved household living conditions in multilevel choice modeling: An application to contraceptive adoption by Indian women.

    Science.gov (United States)

    Dias, José G; de Oliveira, Isabel Tiago

    2018-01-01

    This research analyzes the effect of the poverty-wealth dimension on contraceptive adoption by Indian women when no direct measures of income/expenditures are available to use as covariates. The index, Household Living Conditions (HLC), is based on household assets and dwelling characteristics and is computed by an item response model simultaneously with the choice model in a new single-step approach. That is, the HLC indicator is treated as a latent covariate measured by a set of items, it depends on a set of concomitant variables, and explains contraceptive choices in a probit regression. Additionally, the model accounts for complex survey design and sample weights in a multilevel framework. Regarding our case study on contraceptive adoption by Indian women, results show that women with better household living conditions tend to adopt contraception more often than their counterparts. This effect is significant after controlling other factors such as education, caste, and religion. The external validation of the indicator shows that it can also be used at aggregate levels of analysis (e.g., county or state) whenever no other indicators of household living conditions are available.

  14. Assessing Trust and Effectiveness in Virtual Teams: Latent Growth Curve and Latent Change Score Models

    Directory of Open Access Journals (Sweden)

    Michael D. Coovert

    2017-08-01

    Full Text Available Trust plays a central role in the effectiveness of work groups and teams. This is the case for both face-to-face and virtual teams. Yet little is known about the development of trust in virtual teams. We examined cognitive and affective trust and their relationship to team effectiveness as reflected through satisfaction with one’s team and task performance. Latent growth curve analysis reveals both trust types start at a significant level with individual differences in that initial level. Cognitive trust follows a linear growth pattern while affective trust is overall non-linear, but becomes linear once established. Latent change score models are utilized to examine change in trust and also its relationship with satisfaction with the team and team performance. In examining only change in trust and its relationship to satisfaction there appears to be a straightforward influence of trust on satisfaction and satisfaction on trust. However, when incorporated into a bivariate coupling latent change model the dynamics of the relationship are revealed. A similar pattern holds for trust and task performance; however, in the bivariate coupling change model a more parsimonious representation is preferred.

  15. Landslide susceptibility mapping in Mawat area, Kurdistan Region, NE Iraq: a comparison of different statistical models

    Science.gov (United States)

    Othman, A. A.; Gloaguen, R.; Andreani, L.; Rahnama, M.

    2015-03-01

    During the last decades, expansion of settlements into areas prone to landslides in Iraq has increased the importance of accurate hazard assessment. Susceptibility mapping provides information about hazardous locations and thus helps to potentially prevent infrastructure damage due to mass wasting. The aim of this study is to evaluate and compare frequency ratio (FR), weight of evidence (WOE), logistic regression (LR) and probit regression (PR) approaches in combination with new geomorphological indices to determine the landslide susceptibility index (LSI). We tested these four methods in the Mawat area, Kurdistan Region, NE Iraq, where landslides occur frequently. For this purpose, we evaluated 16 geomorphological, geological and environmental predicting factors mainly derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite. The available reference inventory includes 351 landslides representing a cumulative surface of 3.127 km². This reference inventory was mapped from QuickBird data by manual delineation and partly verified by field survey. The areas under the curve (AUC) of the receiver operating characteristic (ROC) and the relative landslide density (R index) show that all models perform similarly and that focus should be put on the careful selection of proxies. The results indicate that the lithology and the slope aspects play major roles in landslide occurrence. Furthermore, this paper demonstrates that using the hypsometric integral as a predicting factor instead of slope curvature gives better results and increases the accuracy of the LSI.
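
    As a side note on the evaluation metric used above: the AUC of the ROC can be computed directly from susceptibility scores via the normalized Mann-Whitney rank statistic. A minimal sketch (the scores below are invented for illustration, not values from the study):

```python
import numpy as np
from scipy.stats import rankdata

def auc(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic: the probability that
    a randomly chosen landslide cell receives a higher susceptibility score
    than a randomly chosen non-landslide cell (average ranks handle ties)."""
    n1, n0 = len(scores_pos), len(scores_neg)
    ranks = rankdata(np.concatenate([scores_pos, scores_neg]))
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2
    return u / (n1 * n0)

# Illustrative scores: landslide cells tend to score higher.
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]))  # 8 of 9 pairs correctly ordered
```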

  16. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Entrepreneurs have always borne the risk of running their business; they reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers’ decisions to provide fittings, based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou, using a probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; and (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
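
    For readers unfamiliar with the estimation behind such results: a probit model is fit by maximizing the Bernoulli likelihood with Φ(xβ) as the success probability. A minimal sketch on simulated data (the single covariate and the coefficient values are illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # intercept + one covariate
beta_true = np.array([0.5, 1.0])
y = (rng.uniform(size=n) < norm.cdf(X @ beta_true)).astype(float)

def neg_loglik(b):
    # Bernoulli log-likelihood with a probit link, clipped for stability
    p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x
print(beta_hat)  # close to beta_true on this simulated sample
```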

  17. Renal and hepatic histopathology of intraperitoneally administered ...

    African Journals Online (AJOL)


    2016-10-12

    Oct 12, 2016 ... with Coppens fish feed containing 55% crude protein and the water quality was maintained daily with a standard test kit .... Probit value was determined from the probit model developed by Finney (1959). RESULTS ..... interlobular portal vein which have to pass through the sinusoids to get to the central vein.
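
    The classical Finney-style probit analysis referenced in this record regresses the probit transform of mortality proportions on log dose; the LD50 is the dose at which the fitted probit line crosses zero (50% mortality). A minimal sketch with invented, noise-free dose-mortality data (not values from the study):

```python
import numpy as np
from scipy.stats import norm

# Invented dose-mortality data generated from a true LD50 of 4.0
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
slope_true, ld50_true = 2.0, 4.0   # slope in probits per log10(dose)
mortality = norm.cdf(slope_true * (np.log10(doses) - np.log10(ld50_true)))

# Probit transform (norm.ppf) and a linear fit against log10(dose)
b, a = np.polyfit(np.log10(doses), norm.ppf(mortality), 1)
ld50 = 10 ** (-a / b)              # dose where the probit line crosses 0
print(ld50)  # recovers 4.0 exactly on noise-free data
```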

  18. Advantages of joint modeling of component HIV risk behaviors and non-response: application to randomized trials in cocaine-dependent and methamphetamine-dependent populations

    Directory of Open Access Journals (Sweden)

    Tyson H Holmes

    2011-07-01

    The HIV risk-taking behavior scale (HRBS) is an 11-item instrument designed to assess the risk of HIV infection due to self-reported injection drug use and sexual behavior. A retrospective analysis was performed on HRBS data collected from approximately 1,000 participants pooled across seven clinical trials of pharmacotherapies for the treatment of either cocaine dependence or methamphetamine dependence. The analysis faced three important challenges. The sample contained a high proportion of missing assessments after randomization. Also, the HRBS consists of two distinct behavioral components which may or may not coincide in response patterns. In addition, distributions of responses on the subscales were highly concentrated at just a few values (e.g., 0 or 6). To address these challenges, a single probit regression model was fit to three outcome variables simultaneously: the two subscale totals plus an indicator variable for assessments not obtained (non-response). Because it draws on the associations among the three outcomes collectively, this joint-outcome regression model was able to identify that those who left assessment early had higher self-reported risk of injection drug use and lower self-reported risky sexual behavior. These findings were not identified in analyses performed on each outcome separately. No evidence for an effect of the pharmacotherapies was observed, except to reduce missing assessments. Univariate-outcome modeling is not recommended for the HRBS.

  19. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantages of the unstructured grid-modeling approach: flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the data observed in 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package’s ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10² km).

  20. Genetic analysis of days from calving to first insemination and days open in Danish Holsteins using different models and censoring scenarios

    DEFF Research Database (Denmark)

    Hou, Y; Madsen, P; Labouriau, R

    2009-01-01

    The objectives of this study were to estimate genetic parameters and evaluate models for genetic evaluation of days from calving to first insemination (ICF) and days open (DO). Data including 509,512 first-parity records of Danish Holstein cows were analyzed using 5 alternative sire models that dealt with censored records in different ways: (1) a conventional linear model (LM), in which a penalty of 21 d was added to censored records; (2) a bivariate threshold-linear model (TLM), which included a threshold model for the censoring status (0, 1) of the observations and a linear model for ICF or DO without any penalty on censored records; (3) a right-censored linear model (CLM); (4) a Weibull proportional hazard model (SMW); and (5) a Cox proportional hazard model (SMC) constructed with a piecewise constant baseline hazard function. The variance components for ICF and DO estimated from LM and TLM were …

  1. How Much Math Do Students Need to Succeed in Business and Economics Statistics? An Ordered Probit Analysis

    Science.gov (United States)

    Green, Jeffrey J.; Stone, Courtenay C.; Zegeye, Abera; Charles, Thomas A.

    2009-01-01

    Because statistical analysis requires the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, many students find it difficult to learn business statistics. In this study, we use an ordered probit…

  2. Model-based methods for case definitions from administrative health data: application to rheumatoid arthritis.

    Science.gov (United States)

    Kroeker, Kristine; Widdifield, Jessica; Muthukumarana, Saman; Jiang, Depeng; Lix, Lisa M

    2017-06-23

    This research proposes a model-based method to facilitate the selection of disease case definitions from validation studies for administrative health data. The method is demonstrated for a rheumatoid arthritis (RA) validation study. Data were from 148 definitions to ascertain cases of RA in hospital, physician and prescription medication administrative data. We considered: (A) separate univariate models for sensitivity and specificity, (B) a univariate model for Youden's summary index and (C) a bivariate (ie, joint) mixed-effects model for sensitivity and specificity. Model covariates included the number of diagnoses in physician, hospital and emergency department records, physician diagnosis observation time, duration of time between physician diagnoses and number of RA-related prescription medication records. The most common case definition attributes were: 1+ hospital diagnosis (65%), 2+ physician diagnoses (43%), 1+ specialist physician diagnosis (51%) and 2+ years of physician diagnosis observation time (27%). In the separate univariate models, statistically significant improvements in sensitivity and/or specificity were associated with these case definition criteria (all p values <0.05), and the bivariate model produced similar results. Youden's index was associated with these same case definition criteria, except for the length of the physician diagnosis observation time. A model-based method provides valuable empirical evidence to aid in selecting a definition(s) for ascertaining diagnosed disease cases from administrative health data. The choice between univariate and bivariate models depends on the goals of the validation study and the number of case definitions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Developmental relations between vocabulary knowledge and reading comprehension: a latent change score modeling study.

    Science.gov (United States)

    Quinn, Jamie M; Wagner, Richard K; Petscher, Yaacov; Lopez, Danielle

    2015-01-01

    The present study followed a sample of students from first grade (N = 316, mean age = 7.05 years at first test) through fourth grade to evaluate dynamic developmental relations between vocabulary knowledge and reading comprehension. Using latent change score modeling, competing models were fit to the repeated measurements of vocabulary knowledge and reading comprehension to test for the presence of leading and lagging influences. Univariate models indicated that growth in vocabulary knowledge and in reading comprehension was determined by two parts: constant yearly change and change proportional to the previous level of the variable. Bivariate models indicated that previous levels of vocabulary knowledge acted as leading indicators of reading comprehension growth, but the reverse relation was not found. Implications for theories of developmental relations between vocabulary and reading comprehension are discussed. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.

  4. An anatomic risk model to screen post endovascular aneurysm repair patients for aneurysm sac enlargement.

    Science.gov (United States)

    Png, Chien Yi M; Tadros, Rami O; Beckerman, William E; Han, Daniel K; Tardiff, Melissa L; Torres, Marielle R; Marin, Michael L; Faries, Peter L

    2017-09-01

    Follow-up computed tomography angiography (CTA) scans add considerable postimplantation costs to endovascular aneurysm repairs (EVARs) of abdominal aortic aneurysms (AAAs). By building a risk model, we hope to identify patients at low risk for aneurysm sac enlargement so as to minimize unnecessary CTAs. A total of 895 consecutive patients who underwent EVAR for AAA were reviewed, of whom 556 met the inclusion criteria. A probit model was created for aneurysm sac enlargement, with preoperative aneurysm morphology, patient demographics, and operative details as variables. Our final model included 287 patients and had a sensitivity of 100%, a specificity of 68.9%, and an accuracy of 70.4%. Ninety-nine (35%) patients were assigned to the high-risk group, whereas 188 (65%) were assigned to the low-risk group. Notably, our model reported that age, pulmonary comorbidities, aortic neck diameter, iliac artery length, and aneurysms were independent predictors of post-EVAR sac enlargement. With the exception of age, all statistically significant variables were qualitatively supported by prior literature. With regard to secondary outcomes, the high-risk group had significantly higher proportions of AAA-related deaths (5.1% versus 1.1%, P = 0.037) and Type 1 endoleaks (9.1% versus 3.2%, P = 0.033). Our model is a decent predictor of patients at low risk for post-EVAR aneurysm sac enlargement and associated complications. With additional validation and refinement, it could be applied in practice to cut down on the overall need for postimplantation CTA. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Do Informal Workers Queue For Formal Jobs in Brazil ?

    OpenAIRE

    Fábio Veras Soares

    2004-01-01

    This paper investigates the existence of a job queue for formal (registered) jobs in the Brazilian labour market in an endogenous switching regression framework. This approach aims at correctly specifying the allocation process in the presence of queuing and at obtaining unbiased wage equation estimates in order to evaluate the role of the wage differential between the formal and informal sectors in determining sector allocation. We estimate three types of bivariate probit specifications in order to evaluate ...

  6. Modelling the Dependence Structure of MUR/USD and MUR/INR Exchange Rates using Copula

    Directory of Open Access Journals (Sweden)

    Vandna Jowaheer

    2012-01-01

    The American Dollar (USD) and the Indian Rupee (INR) play an important role in the Mauritian economy. It is important to model the pattern of dependence in their co-movement with respect to the Mauritian Rupee (MUR), as this may indicate the export-import behavior in Mauritius. However, it is known that distributions of exchange rates are usually non-normal, and the use of linear correlation as a dependence measure is inappropriate. Moreover, it is quite difficult to obtain the joint distribution of such random variables in order to specify the complete covariance matrix to measure their dependence structure. In this paper, we first identify the marginal distributions of the exchange rates of MUR against USD and INR and then select the best fitting copula model for the bivariate series. It is concluded that both series are asymmetric and fat-tailed, following a hyperbolic distribution. Their dependence structure is appropriately modeled by a t copula.
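
    The copula dependence described here can be estimated from the ranks of the two series, independently of their (fat-tailed) marginals. The sketch below uses a Gaussian copula for simplicity (the paper selects a t copula, which adds a degrees-of-freedom parameter for tail dependence); the data are simulated, not actual MUR/USD and MUR/INR rates:

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(v):
    # Rank-based probability integral transform, then Gaussian quantiles;
    # this strips away the marginal distributions, keeping only dependence.
    return norm.ppf(rankdata(v) / (len(v) + 1))

# Simulated pair with copula correlation 0.6 and skewed (lognormal) marginals
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=4000)
a, b = np.exp(z[:, 0]), np.exp(z[:, 1])

rho_hat = np.corrcoef(normal_scores(a), normal_scores(b))[0, 1]
print(rho_hat)  # close to 0.6 despite the non-normal marginals
```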

  7. A long-term/short-term model for daily electricity prices with dynamic volatility

    International Nuclear Information System (INIS)

    In this paper we introduce a new stochastic long-term/short-term model for short-term electricity prices and apply it to four major European indices, namely the German, Dutch, UK and Nordic ones. We give evidence that all time series contain certain periodic (mostly annual) patterns, and show how to use the wavelet transform, a tool of multiresolution analysis, for filtering purposes. The wavelet transform is also applied to separate the long-term trend from the short-term oscillation in the seasonally adjusted log-prices. In all time series we find evidence for dynamic volatility, which we incorporate by using a bivariate GARCH model with constant correlation. Eventually we fit various models from the existing literature to the data, and come to the conclusion that our approach performs best. For the error distribution, the Normal Inverse Gaussian distribution shows the best fit. (author)
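
    The dynamic volatility mentioned here is typically captured by a GARCH(1,1) recursion for each series; the constant-correlation bivariate model then couples two such filters through a fixed cross-correlation. A minimal sketch of the univariate variance filter (parameter values are illustrative, not estimates from the paper):

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta, h0):
    """Conditional variances h[t] = omega + alpha * r[t-1]^2 + beta * h[t-1]."""
    h = np.empty(len(r))
    h[0] = h0
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

h = garch11_filter(np.array([1.0, 2.0, 0.5]),
                   omega=0.1, alpha=0.1, beta=0.8, h0=0.4)
# In the constant-correlation bivariate model, the conditional covariance is
# cov[t] = rho * sqrt(h1[t] * h2[t]) for a fixed correlation rho.
print(h)
```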

  8. Price, tax and tobacco product substitution in Zambia.

    Science.gov (United States)

    Stoklosa, Michal; Goma, Fastone; Nargis, Nigar; Drope, Jeffrey; Chelwa, Grieve; Chisha, Zunda; Fong, Geoffrey T

    2018-03-24

    In Zambia, the number of cigarette users is growing, and the lack of strong tax policies is likely an important cause. When adjusted for inflation, levels of tobacco tax have not changed since 2007. Moreover, roll-your-own (RYO) tobacco, a less-costly alternative to factory-made (FM) cigarettes, is highly prevalent. We modelled the probability of FM and RYO cigarette smoking using individual-level data obtained from the 2012 and 2014 waves of the International Tobacco Control (ITC) Zambia Survey. We used two estimation methods: the standard estimation method involving separate random effects probit models and a method involving a system of equations (incorporating bivariate seemingly unrelated random effects probit) to estimate price elasticities of FM and RYO cigarettes and their cross-price elasticities. The estimated price elasticities of smoking prevalence are -0.20 and -0.03 for FM and RYO cigarettes, respectively. FM and RYO are substitutes; that is, when the price of one of the products goes up, some smokers switch to the other product. The effects are stronger for substitution from FM to RYO than vice versa. This study affirms that increasing cigarette tax with corresponding price increases could significantly reduce cigarette use in Zambia. Furthermore, reducing between-product price differences would reduce substitution from FM to RYO. Since RYO use is associated with lower socioeconomic status, efforts to decrease RYO use, including through tax/price approaches and cessation assistance, would decrease health inequalities in Zambian society and reduce the negative economic consequences of tobacco use experienced by the poor. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
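
    For context on how such prevalence elasticities arise from a probit model: when log price enters the latent index, the elasticity of the smoking probability with respect to price is the log-price coefficient scaled by the ratio of the normal density to the normal CDF at the index value. A sketch with invented numbers (not the paper's estimates, which are -0.20 and -0.03):

```python
from scipy.stats import norm

def prevalence_elasticity(beta_lnprice, xb):
    # d ln P / d ln price for P = Phi(xb) when ln(price) enters the index
    return beta_lnprice * norm.pdf(xb) / norm.cdf(xb)

# Illustrative: coefficient -0.3 on ln(price), index value -0.8
print(prevalence_elasticity(-0.3, -0.8))
```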

  9. Parametric overdispersed frailty models for current status data.

    Science.gov (United States)

    Abrams, Steven; Aerts, Marc; Molenberghs, Geert; Hens, Niel

    2017-12-01

    Frailty models have a prominent place in survival analysis to model univariate and multivariate time-to-event data, often complicated by the presence of different types of censoring. In recent years, frailty modeling has gained popularity in infectious disease epidemiology for quantifying unobserved heterogeneity using Type I interval-censored serological data or current status data. In a multivariate setting, frailty models prove useful to assess the association between infection times related to multiple distinct infections acquired by the same individual. In addition to dependence among individual infection times, overdispersion can arise when the observed variability in the data exceeds the variability implied by the model. In this article, we discuss parametric overdispersed frailty models for time-to-event data under Type I interval-censoring, building upon the work by Molenberghs et al. (2010) and Hens et al. (2009). The proposed methodology is illustrated using bivariate serological data on hepatitis A and B from Flanders, Belgium, collected in 1993-1994. Furthermore, the relationship between individual heterogeneity and overdispersion at a stratum-specific level is studied through simulations. Although it is important to account for overdispersion, one should be cautious when modeling both individual heterogeneity and overdispersion based on current status data, as model selection is hampered by the loss of information due to censoring. © 2017, The International Biometric Society.

  10. The delayed pulmonary syndrome following acute high-dose irradiation: a rhesus macaque model.

    Science.gov (United States)

    Garofalo, Michael; Bennett, Alexander; Farese, Ann M; Harper, Jamie; Ward, Amanda; Taylor-Howell, Cheryl; Cui, Wanchang; Gibbs, Allison; Lasio, Giovanni; Jackson, William; MacVittie, Thomas J

    2014-01-01

    Several radiation dose- and time-dependent tissue sequelae develop following acute high-dose radiation exposure. One of the recognized delayed effects of such exposures is lung injury, characterized by respiratory failure as a result of pneumonitis that may subsequently develop into lung fibrosis. Since this pulmonary subsyndrome may be associated with high morbidity and mortality, comprehensive treatment following high-dose irradiation will ideally include treatments that mitigate both the acute hematologic and gastrointestinal subsyndromes and the delayed pulmonary syndrome. Currently, there are no drugs approved by the Food and Drug Administration (FDA) to counteract the effects of acute radiation exposure. Moreover, there are no relevant large animal models of radiation-induced lung injury that permit efficacy testing of new-generation medical countermeasures in combination with medical management protocols under the FDA animal rule criteria. Herein is described a nonhuman primate model of delayed lung injury resulting from whole thorax lung irradiation. Rhesus macaques were exposed to 6 MV photon radiation over a dose range of 9.0-12.0 Gy, and medical management was administered according to a standardized treatment protocol. The primary endpoint was all-cause mortality at 180 d. A comparative multiparameter analysis is provided, focusing on the lethal dose response relationship, characterized by an LD50/180 of 10.27 Gy [9.88, 10.66] and a slope of 1.112 probits per linear dose. Latency, incidence, and severity of lung injury were evaluated through clinical and radiographic parameters including respiratory rate, saturation of peripheral oxygen, corticosteroid requirements, and serial computed tomography. Gross anatomical and histological analyses were performed to assess radiation-induced injury. The model defines the dose response relationship and time course of the delayed pulmonary sequelae and consequent morbidity and mortality. Therefore, it may provide …

  11. Impactos do Programa Bolsa Família federal sobre o trabalho infantil e a frequência escolar Impacts of the Bolsa Família Program on child labor and school attendance

    Directory of Open Access Journals (Sweden)

    Maria Cristina Cacciamali

    2010-08-01

    This paper analyses the impacts of the federal Bolsa Família Program on the occurrence of child labor and on the school attendance of children from poor families in Brazil in 2004, by census situation (urban/rural) and region. For the statistical tests, a bivariate probit model was used, which jointly estimates children's decisions to work and to study. The results corroborate the efficiency of the Bolsa Família Program in increasing children's school attendance; however, the Program has a perverse effect on the incidence of child labor, raising the probability of its occurrence. Moreover, children of poor households in rural areas face worse conditions than those in urban areas, demanding specific actions in their favor.
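
    For reference, a bivariate probit of this kind assigns each (work, study) outcome pair a probability derived from a bivariate normal distribution with correlation ρ between the two latent errors. A sketch of the four cell probabilities (the index values and ρ below are illustrative, not estimates from the paper):

```python
from scipy.stats import multivariate_normal, norm

def biprobit_cell_probs(xb1, xb2, rho):
    """Joint outcome probabilities under a bivariate probit:
    the latent errors are bivariate normal with correlation rho."""
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    p11 = joint.cdf([xb1, xb2])       # P(work=1, study=1)
    p10 = norm.cdf(xb1) - p11         # P(work=1, study=0)
    p01 = norm.cdf(xb2) - p11         # P(work=0, study=1)
    p00 = 1.0 - p11 - p10 - p01       # P(work=0, study=0)
    return p11, p10, p01, p00

print(biprobit_cell_probs(0.3, -0.2, 0.4))  # four probabilities summing to 1
```

    The sample log-likelihood is then the sum of the log cell probabilities matching each child's observed (work, study) pair, maximized over the index coefficients and ρ.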

  12. Datamining approaches for modeling tumor control probability.

    Science.gov (United States)

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, the tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher-order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, a mechanistic Poisson model, and model-building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction, with rs = 0.68 on leave-one-out testing, compared to logistic regression (rs = 0.4), Poisson-based TCP (rs = 0.33), and the cell kill equivalent uniform dose model (rs = 0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.

  13. Where does “whichever occurs first” hold for preventive maintenance modelings?

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Liu, Hu-Chen; Nakagawa, Toshio

    2015-01-01

    The purpose of this paper is to observe where the classical assumption “whichever occurs first” holds for preventive maintenance (PM) modelings. We firstly take up a bivariate maintenance policy in which “whichever occurs first” and the newly proposed “whichever occurs last” are respectively used. Modification of PM performance is introduced into the modelings to avoid interruptions of job executions; that is, PMs are done only at the ends of working cycles. From the points of view of performability and maintenance cost, we secondly compare the optimized “first” and “last” policies in detail and find two critical points of comparison analytically. Further, by comparing the “first” and “last” policies with the standard maintenance, modified PM costs are obtained to observe whether it is easy to save PM cost under “whichever occurs first”. For a trivariate maintenance policy, we thirdly propose an entirely new assumption, “whichever occurs middle”, and give another model that considers both assumptions of “first” and “last”. We analyze maintenance probabilities for each model and then obtain directly their expected maintenance cost rates for further studies. - Highlights: • A bivariate maintenance policy based on “whichever occurs first” is improved. • Two comparisons of “whichever occurs first and last” are made. • Modified maintenance costs are obtained to observe which policy could save more costs. • New assumption “whichever occurs middle” for the trivariate maintenances is proposed. • One policy is modeled by considering both assumptions of “first” and “last”
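
    The distinction between the two assumptions can be made concrete with a small Monte Carlo experiment: under "whichever occurs first" the unit is maintained at the earlier of the planned PM time and the end of the working cycle, under "whichever occurs last" at the later. The distributions below are illustrative, not the paper's cost model:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0                                    # planned PM time (illustrative)
Y = rng.exponential(1.0, size=200_000)     # end of the current working cycle

first = np.minimum(T, Y)   # PM at T or cycle end, whichever occurs first
last = np.maximum(T, Y)    # PM delayed to whichever occurs last

# Analytically: E[min(T,Y)] = 1 - e**-1 ~ 0.632, E[max(T,Y)] = 1 + e**-1 ~ 1.368.
print(first.mean(), last.mean())
```

    A cost-rate comparison of the two policies would then divide the expected maintenance cost per cycle by these expected cycle lengths.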

  14. Wind models for the NSTS ascent trajectory biasing for wind load alleviation

    Science.gov (United States)

    Smith, O. E.; Adelfang, S. I.; Batts, G. W.

    1990-01-01

    New concepts are presented for aerospace vehicle ascent wind profile biasing. The purpose of wind biasing the ascent trajectory is to provide ascent wind load relief and thus decrease the probability of launch delays due to wind loads exceeding critical limits. Wind biasing trajectories to the profile of monthly mean winds has been widely used for this purpose. The wind profile models presented give additional alternatives for wind-biased trajectories. They are derived from the properties of the bivariate normal probability function using the available wind statistical parameters for the launch site. The analytical expressions are presented to permit generalizations. Specific examples are given to illustrate the procedures. The wind profile models can be used to establish the ascent trajectory steering commands to guide the vehicle through the first stage. For the National Space Transportation System (NSTS) program these steering commands are called I-loads.
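
    One property of the bivariate normal that makes it convenient for wind profile biasing is its linear conditional mean: given the wind at one level, the expected wind at another follows directly from the site's wind statistics. A sketch (the parameter values are invented, not launch-site statistics):

```python
def conditional_mean(mu_u, mu_v, sigma_u, sigma_v, rho, u):
    """E[v | u] for a bivariate normal pair (u, v): the regression line
    through the mean winds, with slope rho * sigma_v / sigma_u."""
    return mu_v + rho * (sigma_v / sigma_u) * (u - mu_u)

# Illustrative: means 0 and 5 m/s at two levels, sigmas 2 and 3, rho 0.5
print(conditional_mean(0.0, 5.0, 2.0, 3.0, 0.5, u=4.0))  # -> 8.0
```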

  15. A Vector Autoregressive Model for Electricity Prices Subject to Long Memory and Regime Switching

    DEFF Research Database (Denmark)

    Haldrup, Niels; Nielsen, Frank; Nielsen, Morten Ørregaard

    2007-01-01

    A regime-dependent VAR model is suggested that allows long memory (fractional integration) in each of the regime states as well as the possibility of fractional cointegration. The model is relevant in describing the price dynamics of electricity prices, where the transmission of power is subject to occasional congestion periods. For a system of bilateral prices, non-congestion means that electricity prices are identical, whereas congestion makes prices depart. Hence, the joint price dynamics implies switching between essentially a univariate price process under non-congestion and a bivariate price process under congestion. At the same time it is an empirical regularity that electricity prices tend to show a high degree of fractional integration, and thus prices may be fractionally cointegrated. An empirical analysis using Nord Pool data shows that even though the prices strongly co-move under …

  16. INTER-TEMPORAL ANALYSIS OF HOUSEHOLD CAR AND MOTORCYCLE OWNERSHIP BEHAVIORS

    Directory of Open Access Journals (Sweden)

    Nobuhiro SANKO

    2009-01-01

    This study investigates household car and motorcycle ownership in the Nagoya metropolitan area of Japan. Bivariate ordered probit models of household vehicle ownership were developed using data from the case study area at three time points: 1981, 1991, and 2001. Accessibility, which is generally known to be correlated with vehicle ownership decisions, is incorporated as an input to the proposed vehicle ownership model to investigate the potential relationship between them. Mode choice models for the area were first estimated to quantify the accessibility indexes that were later integrated into the vehicle ownership models. Inter-temporal comparison and temporal transferability analyses were conducted. Some of the major findings suggest: (1) that age and gender differences have become less important in mode choice and car ownership as motorization proceeds; (2) that accessibility seems to have a significant correlation with vehicle ownership; (3) that car and motorcycle ownership may not be independent and may have a complementary relationship; and (4) that deep insights concerning model selection are obtained from the viewpoint of temporal transferability.
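
    An accessibility index derived from a mode choice model is commonly the "logsum": the log of the summed exponentiated systematic utilities of the available modes. A minimal sketch with invented utilities (the mode names and values are illustrative, not from the study):

```python
import numpy as np
from scipy.special import logsumexp

def accessibility(utilities):
    # Logsum of a multinomial logit mode choice model;
    # logsumexp computes log(sum(exp(V))) without overflow.
    return logsumexp(utilities)

# Illustrative systematic utilities for, e.g., car / transit / motorcycle
V = np.array([-1.2, -0.8, -2.0])
print(accessibility(V))  # higher (less negative) means better accessibility
```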

  17. Thermal niche for in situ seed germination by Mediterranean mountain streams: model prediction and validation for Rhamnus persicifolia seeds.

    Science.gov (United States)

    Porceddu, Marco; Mattana, Efisio; Pritchard, Hugh W; Bacchetta, Gianluigi

    2013-12-01

    Mediterranean mountain species face exacting ecological conditions of rainy, cold winters and arid, hot summers, which affect seed germination phenology. In this study, a soil heat sum model was used to predict field emergence of Rhamnus persicifolia, an endemic tree species living at the edge of mountain streams of central eastern Sardinia. Seeds were incubated in the light at a range of temperatures (10-25 and 25/10 °C) after different periods (up to 3 months) of cold stratification at 5 °C. Base temperatures (Tb) and thermal times for 50 % germination (θ50) were calculated. Seeds were also buried in the soil in two natural populations (Rio Correboi and Rio Olai), both underneath and outside the tree canopy, and exhumed at regular intervals. Soil temperatures were recorded using data loggers and soil heat sum (°Cd) was calculated on the basis of the estimated Tb and soil temperatures. Cold stratification released physiological dormancy (PD), increasing final germination and widening the range of germination temperatures, indicative of a Type 2 non-deep PD. Tb was reduced from 10.5 °C for non-stratified seeds to 2.7 °C for seeds cold stratified for 3 months. The best thermal time model was obtained by fitting probit germination against log °Cd. θ50 was 2.6 log °Cd for untreated seeds and 2.17-2.19 log °Cd for stratified seeds. When θ50 values were integrated with soil heat sum estimates, field emergence was predicted from March to April and confirmed through field observations. Tb and θ50 values facilitated model development of the thermal niche for in situ germination of R. persicifolia. These experimental approaches may be applied to model the natural regeneration patterns of other species growing on Mediterranean mountain waterways and of physiologically dormant species with an overwintering cold stratification requirement and spring germination.
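
    The thermal time model described above (probit of cumulative germination regressed against log °Cd, with θ50 read off where the probit equals zero) can be sketched as follows. The germination fractions below are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Synthetic germination fractions at accumulating thermal time (°Cd);
# illustrative values only, not the study's observations.
degree_days = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
germ_frac = np.array([0.02, 0.10, 0.30, 0.60, 0.85, 0.97])

# Probit-transform the cumulative germination fraction and regress it
# on log10 thermal time, as in the thermal time model.
probits = stats.norm.ppf(germ_frac)
log_dd = np.log10(degree_days)
fit = stats.linregress(log_dd, probits)

# theta_50: thermal time at 50% germination, i.e. where probit = 0.
log_theta50 = -fit.intercept / fit.slope
theta50 = 10 ** log_theta50
print(round(log_theta50, 2), round(theta50, 1))
```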

  18. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reacting flows

    Science.gov (United States)

    Foy, E.; Ronan, G.; Chinitz, W.

    1982-01-01

    A principal element to be derived from modeling turbulent reacting flows is an expression for the reaction rates of the various species involved in any particular combustion process under consideration. A temperature-derived most-likely probability density function (pdf) was used to describe the effects of temperature fluctuations on the Arrhenius reaction rate constant. A most-likely bivariate pdf described the effects of temperature and species concentrations fluctuations on the reaction rate. A criterion is developed for the use of an "appropriate" temperature pdf. The formulation of models to calculate the mean turbulent Arrhenius reaction rate constant and the mean turbulent reaction rate is considered and the results of calculations using these models are presented.

  19. Dynamic frailty models based on compound birth-death processes.

    Science.gov (United States)

    Putter, Hein; van Houwelingen, Hans C

    2015-07-01

    Frailty models are used in survival analysis to model unobserved heterogeneity. They accommodate such heterogeneity by the inclusion of a random term, the frailty, which is assumed to multiply the hazard of a subject (individual frailty) or the hazards of all subjects in a cluster (shared frailty). Typically, the frailty term is assumed to be constant over time. This is a restrictive assumption and extensions to allow for time-varying or dynamic frailties are of interest. In this paper, we extend the auto-correlated frailty models of Henderson and Shimakura and of Fiocco, Putter and van Houwelingen, developed for longitudinal count data and discrete survival data, to continuous survival data. We present a rigorous construction of the frailty processes in continuous time based on compound birth-death processes. When the frailty processes are used as mixtures in models for survival data, we derive the marginal hazards and survival functions and the marginal bivariate survival functions and cross-ratio function. We derive distributional properties of the processes, conditional on observed data, and show how to obtain the maximum likelihood estimators of the parameters of the model using a (stochastic) expectation-maximization algorithm. The methods are applied to a publicly available data set.

  20. Health insurance for the poor: impact on catastrophic and out-of-pocket health expenditures in Mexico.

    Science.gov (United States)

    Galárraga, Omar; Sosa-Rubí, Sandra G; Salinas-Rodríguez, Aarón; Sesma-Vázquez, Sergio

    2010-10-01

    The goal of Seguro Popular (SP) in Mexico was to improve the financial protection of the uninsured population against excessive health expenditures. This paper estimates the impact of SP on catastrophic health expenditures (CHE), as well as out-of-pocket (OOP) health expenditures, from two different sources. First, we use the SP Impact Evaluation Survey (2005-2006), and compare the instrumental variables (IV) results with the experimental benchmark. Then, we use the same IV methods with the National Health and Nutrition Survey (ENSANUT 2006). We estimate naïve models, assuming exogeneity, and contrast them with IV models that take advantage of the specific SP implementation mechanisms for identification. The IV models estimated included two-stage least squares (2SLS), bivariate probit, and two-stage residual inclusion (2SRI) models. Instrumental variables estimates resulted in comparable estimates against the "gold standard." Instrumental variables estimates indicate a reduction of 54% in catastrophic expenditures at the national level. SP beneficiaries also had lower expenditures on outpatient and medicine expenditures. The selection-corrected protective effect is found not only in the limited experimental dataset, but also at the national level.
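
    Among the IV estimators mentioned, two-stage least squares is the simplest to sketch. A minimal simulation with a hypothetical binary instrument and made-up coefficients (none of these values come from the study) shows how the first-stage projection removes the confounding that biases the naïve regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Simulated setting: z is an instrument (think programme rollout),
# u is unobserved confounding, d is endogenous insurance enrolment,
# y is a health-expenditure outcome. True causal effect of d is -0.5.
z = rng.binomial(1, 0.5, n).astype(float)
u = rng.normal(size=n)
d = (0.4 * z + 0.8 * u + rng.normal(size=n) > 0).astype(float)
y = -0.5 * d + 1.0 * u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased because d is correlated with the confounder u.
beta_naive = ols(np.column_stack([np.ones(n), d]), y)[1]

# 2SLS: first stage projects d on z; second stage regresses y on the
# fitted values, which are purged of the confounding variation.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ ols(Z, d)
beta_2sls = ols(np.column_stack([np.ones(n), d_hat]), y)[1]
print(beta_naive, beta_2sls)
```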

  1. Analysis of ordered categorical data with threshold models exemplified by plumage damage scores from laying hens differing in their genotype and rearing environment.

    Science.gov (United States)

    Mielenz, N; Spilke, J; von Borell, E

    2010-11-01

    Plumage damage scores (PDS) were assessed in laying hens of 2 genotypes (Lohmann Tradition and Lohmann Silver) at the 45th and 70th weeks of age, with scores ranging from zero (no damage) to 6 (completely denuded). This ordinally scaled categorical characteristic was recorded from different body regions of 365 hens that had experienced different housing environments (2 enrichment levels) during their rearing and laying periods. The so-called threshold model is an option for analyzing repeated ordered categorical data from individual animals. This model represents a generalized linear mixed model if the linear predictor additionally includes the animal as a random effect. This paper is intended to fill the gap between the theoretical aspects of generalized linear mixed models and their practical application in animal science. A cumulative probit model was adapted for analyzing plumage damage. The variation among birds was considered as a random effect for the analysis of cumulative probabilities. The numerical implementation of the methodology was based on the NLMIXED procedure of the SAS statistical program. A threshold model with inhomogeneous residual variances for the latent variable was used because less plumage damage was observed up to the 45th week of age compared to the 70th week of age. Differences in PDS were evident between genotypes, ages, and enrichment levels during housing periods. However, neither of the 2 enriched environments proved consistently superior or inferior across all traits. Major plumage damage (PDS larger than or equal to 5) was observed for the breast region in 56.6% of all birds with the Lohmann Tradition genotype and in 34.4% with the Lohmann Silver genotype when averaged over all treatments. The most severe plumage damage was observed at the 70th week of age for the breast region under the housing environment without additional enrichment.
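
    The cumulative probit likelihood that such a threshold model maximizes can be sketched by direct maximum likelihood on simulated ordinal scores (omitting the random animal effect, which would require numerical integration as in NLMIXED). All values below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
n = 1000

# Simulated ordinal scores 0..3 from a latent normal variable with one
# binary covariate (a stand-in for genotype), as in a cumulative probit.
x = rng.binomial(1, 0.5, n).astype(float)
true_beta, true_cuts = 0.9, np.array([-0.5, 0.5, 1.5])
latent = true_beta * x + rng.normal(size=n)
score = np.digitize(latent, true_cuts)

def neg_loglik(params):
    beta, c1, d2, d3 = params
    # Parameterize increasing thresholds via positive increments.
    cuts = np.array([c1, c1 + np.exp(d2), c1 + np.exp(d2) + np.exp(d3)])
    # P(score = k) = Phi(cut_k - beta*x) - Phi(cut_{k-1} - beta*x)
    cdf = stats.norm.cdf(cuts[None, :] - beta * x[:, None])
    probs = np.diff(np.hstack([np.zeros((n, 1)), cdf, np.ones((n, 1))]), axis=1)
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.log(probs[np.arange(n), score]).sum()

res = optimize.minimize(neg_loglik, x0=np.array([0.0, -1.0, 0.0, 0.0]),
                        method="Nelder-Mead", options={"maxiter": 5000})
beta_hat = res.x[0]
print(round(beta_hat, 2))
```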

  2. Joint modelling of flood peaks and volumes: A copula application for the Danube River

    Directory of Open Access Journals (Sweden)

    Papaioannou George

    2016-12-01

    Flood frequency analysis is usually performed as a univariate analysis of flood peaks using a suitable theoretical probability distribution of the annual maximum flood peaks or peak-over-threshold values. However, other flood attributes, such as flood volume and duration, are necessary for the design of hydrotechnical projects, too. In this study, the suitability of various copula families for a bivariate analysis of peak discharges and flood volumes has been tested. Streamflow data from selected gauging stations along the whole Danube River have been used. Kendall's rank correlation coefficient (tau) quantifies the dependence between flood peak discharge and flood volume. The methodology is applied to two different data samples: (1) annual maximum flood (AMF) peaks combined with annual maximum flow volumes of fixed durations of 5, 10, 15, 20, 25, 30 and 60 days, respectively (which can be regarded as a regime analysis of the dependence between the extremes of both variables in a given year); and (2) annual maximum flood (AMF) peaks with corresponding flood volumes (which is a typical choice for engineering studies). The bivariate modelling of the extracted peak discharge - flood volume couples is achieved with the use of the Ali-Mikhail-Haq (AMH), Clayton, Frank, Joe, Gumbel, Hüsler-Reiss, Galambos, Tawn, Normal, Plackett and FGM copula families. Scatterplots of the observed and simulated peak discharge - flood volume pairs and goodness-of-fit tests have been used to assess the overall applicability of the copulas, as well as to observe any changes in suitable models along the Danube River. The results indicate that for the second data sampling method, almost all of the considered Archimedean-class copula families perform better than the other copula families selected for this study, and that for the first method, only the upper-tail-flat copulas excel (except for the AMH copula, due to its inability to model stronger relationships).
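
    A common first step in such copula studies is moment matching via Kendall's tau, which has closed forms for several of the listed Archimedean families (for Gumbel, tau = 1 - 1/theta; for Clayton, tau = theta/(theta + 2)). A sketch with synthetic peak-volume pairs (not Danube data):

```python
import numpy as np
from scipy.stats import kendalltau

# Illustrative peak-discharge / flood-volume pairs (synthetic data).
rng = np.random.default_rng(42)
peaks = rng.gamma(shape=3.0, scale=100.0, size=200)
volumes = 5.0 * peaks + rng.normal(0.0, 150.0, size=200)

tau, _ = kendalltau(peaks, volumes)

# Method-of-moments copula fitting via the tau-parameter relations:
# Gumbel: tau = 1 - 1/theta  =>  theta = 1 / (1 - tau)
# Clayton: tau = theta / (theta + 2)  =>  theta = 2*tau / (1 - tau)
theta_gumbel = 1.0 / (1.0 - tau)
theta_clayton = 2.0 * tau / (1.0 - tau)
print(round(tau, 2), round(theta_gumbel, 2), round(theta_clayton, 2))
```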

  3. Estimation of Causal Mediation Effects for a Dichotomous Outcome in Multiple-Mediator Models using the Mediation Formula

    Science.gov (United States)

    Nelson, Suchitra; Albert, Jeffrey M.

    2013-01-01

    Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048

  4. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  5. A Mathematical Model Captures the Structure of Subjective Affect.

    Science.gov (United States)

    Mattek, Alison M; Wolford, George L; Whalen, Paul J

    2017-05-01

    Although it is possible to observe when another person is having an emotional moment, we also derive information about the affective states of others from what they tell us they are feeling. In an effort to distill the complexity of affective experience, psychologists routinely focus on a simplified subset of subjective rating scales (i.e., dimensions) that capture considerable variability in reported affect: reported valence (i.e., how good or bad?) and reported arousal (e.g., how strong is the emotion you are feeling?). Still, existing theoretical approaches address the basic organization and measurement of these affective dimensions differently. Some approaches organize affect around the dimensions of bipolar valence and arousal (e.g., the circumplex model), whereas alternative approaches organize affect around the dimensions of unipolar positivity and unipolar negativity (e.g., the bivariate evaluative model). In this report, we (a) replicate the data structure observed when collected according to the two approaches described above, and reinterpret these data to suggest that the relationship between each pair of affective dimensions is conditional on valence ambiguity, and (b) formalize this structure with a mathematical model depicting a valence ambiguity dimension that decreases in range as arousal decreases (a triangle). This model captures variability in affective ratings better than alternative approaches, increasing variance explained from ~60% to over 90% without adding parameters.

  6. Risk Measurement and Risk Modelling Using Applications of Vine Copulas

    Directory of Open Access Journals (Sweden)

    David E. Allen

    2017-09-01

    This paper features an application of Regular Vine copulas, a recently developed statistical and mathematical tool for assessing composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but is usually applied to pairs of securities. By contrast, Vine copulas provide greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which may be arranged and analysed in a tree structure to explore multiple dependencies. The paper features the use of Regular Vine copulas in an analysis of the co-dependencies of 10 major European stock markets, as represented by individual market indices and the composite STOXX 50 index. The sample runs from 2005 to the end of 2013 to permit an exploration of how correlations change in different economic circumstances, using three different sample periods: pre-GFC (January 2005–July 2007), GFC (July 2007–September 2009), and post-GFC (September 2009–December 2013). The empirical results suggest that the dependencies change in a complex manner and are subject to change in different economic circumstances. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. The practical application of Regular Vine metrics is demonstrated via an example of the calculation of the VaR of a portfolio made up of the indices.

  7. Toxicological evaluation of the plant products using Brine Shrimp (Artemia salina L.) model

    Directory of Open Access Journals (Sweden)

    Mentor R. Hamidi

    2014-04-01

    Many natural products could serve as the starting point in the development of modern medicines because of their numerous biological and pharmacological activities. However, some of them are known to carry toxicological properties as well. In order to achieve safe treatment with plant products, numerous research studies have recently focused on both the pharmacology and the toxicity of medicinal plants. Moreover, these studies have made efforts toward alternative biological assays. The Brine Shrimp Lethality Assay is one of the most convenient systems for monitoring the biological activity of various plant species, and is very useful for the preliminary assessment of toxicity of plant extracts. Rapidity, simplicity and low requirements are several advantages of this assay. However, several conditions need to be met, especially standardized experimental conditions (temperature, pH of the medium, salinity, aeration and light). The toxicity of herbal extracts using this assay is typically determined over a concentration range of 10, 100 and 1000 µg/ml of the examined herbal extract. Most toxicity studies which use the Brine Shrimp Lethality Assay determine the toxicity after 24 hours of exposure to the tested sample. The median lethal concentration (LC50) of the test samples is obtained from a plot of the percentage of dead shrimps against the logarithm of the sample concentration. LC50 values are estimated using a probit regression analysis and compared with either Meyer's or Clarkson's toxicity criteria. Furthermore, the positive correlation between Meyer's toxicity scale for Artemia salina and Gosselin, Smith and Hodge's toxicity scale for higher animal models confirms that the Brine Shrimp Lethality Assay is an excellent predictive tool for the toxic potential of plant extracts in humans.
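
    The probit estimation of LC50 described above can be sketched as a classical probit regression of mortality on log concentration, with LC50 at the concentration where the probit equals zero. The shrimp counts below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative counts at the three standard test concentrations.
conc = np.array([10.0, 100.0, 1000.0])        # µg/ml
n_shrimp = np.array([30, 30, 30])
n_dead = np.array([4, 14, 27])

# Classical probit analysis: regress probit(mortality) on log10(conc).
mortality = n_dead / n_shrimp
probits = stats.norm.ppf(mortality)
fit = stats.linregress(np.log10(conc), probits)

# LC50 is the concentration at which probit(mortality) = 0.
lc50 = 10 ** (-fit.intercept / fit.slope)
print(round(lc50, 1))
```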

  8. Econometric analyses of microfinance credit group formation, contractual risks and welfare impacts in Northern Ethiopia

    NARCIS (Netherlands)

    Berhane Tesfay, G.

    2009-01-01

    Key words Microfinance, joint liability, contractual risk, group formation, risk-matching, impact evaluation, Panel data econometrics, dynamic panel probit, trend models, fixed-effects, composite counterfactuals, propensity score matching, farm households, Ethiopia. Lack of access to credit is a

  9. Socio-economic impacts and determinants of parasitic weed infestation in rainfed rice systems of sub-Saharan Africa

    NARCIS (Netherlands)

    N'cho, A.S.

    2014-01-01

    Keywords: rice; weed; weed management practices, adoption, impact, parasitic weeds; Rhamphicarpa fistulosa; Striga asiatica; Striga hermonthica, double hurdle model; multivariate probit, productivity, stochastic frontier analysis, data envelopment

  10. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    International Nuclear Information System (INIS)

    Mullor, R.; Sanchez, A.; Martorell, S.; Martinez-Alzamora, N.

    2011-01-01

    Safety related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. An important number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization brings the decision maker insights concerning how uncertain the R+C results are and how this uncertainty does matter as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature depending on the particular characteristics of the variables in the output and their relations. In this context, the objective of this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, which will be implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of results of these applications and the conclusions obtained are presented.

  11. Mixtures of Gaussians for uncertainty description in bivariate latent heat flux proxies

    NARCIS (Netherlands)

    Wójcik, R.; Troch, P.A.A.; Stricker, J.N.M.; Torfs, P.J.J.F.

    2006-01-01

    This paper proposes a new probabilistic approach for describing uncertainty in the ensembles of latent heat flux proxies. The proxies are obtained from hourly Bowen ratio and satellite-derived measurements, respectively, at several locations in the southern Great Plains region in the United States.

  12. Socioeconomic status and health : A new approach to the measurement of bivariate inequality

    NARCIS (Netherlands)

    Erreygers, G.; Kessels, R.

    2017-01-01

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health

  13. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality

    OpenAIRE

    Erreygers, Guido; Kessels, Roselinde

    2017-01-01

    Abstract: We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to ineq...

  14. Bivariate genetic analyses of stuttering and nonfluency in a large sample of 5-year old twins

    NARCIS (Netherlands)

    van Beijsterveldt, C.E.M.; Felsenfeld, S.; Boomsma, D.I.

    2010-01-01

    Purpose: Behavioral genetic studies of speech fluency have focused on participants who present with clinical stuttering. Knowledge about genetic influences on the development and regulation of normal speech fluency is limited. The primary aims of this study were to identify the heritability of

  15. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    Energy Technology Data Exchange (ETDEWEB)

    Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)

    2011-06-15

    Safety related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. An important number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization brings the decision maker insights concerning how uncertain the R+C results are and how this uncertainty does matter as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature depending on the particular characteristics of the variables in the output and their relations. In this context, the objective of this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, which will be implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of results of these applications and the conclusions obtained are presented.

  16. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality.

    Science.gov (United States)

    Erreygers, Guido; Kessels, Roselinde

    2017-06-23

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4.

  17. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality

    Science.gov (United States)

    Kessels, Roselinde

    2017-01-01

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4. PMID:28644405

  19. Primary testicular failure in Klinefelter's syndrome: the use of bivariate luteinizing hormone-testosterone reference charts

    DEFF Research Database (Denmark)

    Aksglaede, Lise; Andersson, Anna-Maria; Jørgensen, Niels

    2007-01-01

    The diagnosis of androgen deficiency is based on clinical features and confirmatory low serum testosterone levels. In early primary testicular failure, a rise in serum LH levels suggests inadequate androgen action for the individual's physiological requirements despite a serum testosterone level within the normal range. The combined evaluation of serum LH and testosterone levels in the assessment of testicular failure has not been widely advocated.

  20. Information Decomposition in Bivariate Systems: Theory and Application to Cardiorespiratory Dynamics

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2015-01-01

    In the framework of information dynamics, the temporal evolution of coupled systems can be studied by decomposing the predictive information about an assigned target system into amounts quantifying the information stored inside the system and the information transferred to it. While information storage and transfer are computed through the known self-entropy (SE) and transfer entropy (TE), an alternative decomposition yields the so-called cross entropy (CE) and conditional SE (cSE), quantifying the cross information and internal information of the target system, respectively. This study presents a thorough evaluation of SE, TE, CE and cSE as quantities related to the causal statistical structure of coupled dynamic processes. First, we investigate the theoretical properties of these measures, providing the conditions for their existence and assessing the meaning of the information-theoretic quantity that each of them reflects. Then, we present an approach for the exact computation of information dynamics based on the linear Gaussian approximation, and exploit this approach to characterize the behavior of SE, TE, CE and cSE in benchmark systems with known dynamics. Finally, we exploit these measures to study cardiorespiratory dynamics measured from healthy subjects during head-up tilt and paced breathing protocols. Our main result is that the combined evaluation of the measures of information dynamics makes it possible to infer the causal effects associated with the observed dynamics and to interpret the alteration of these effects under changing experimental conditions.
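
    Under the linear Gaussian approximation mentioned above, transfer entropy has a closed form: half the log ratio of residual variances between the restricted and the full autoregressive model of the target. A sketch on a simulated unidirectionally coupled pair (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000

# Simulated coupled pair: y drives x at lag 1; x does not drive y.
y = np.zeros(n)
x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.6 * y[t - 1] + rng.normal()

def resid_var(target, regressors):
    """Residual variance of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

# Linear-Gaussian TE: 0.5 * log(resid var without source / with source).
xt, xp, yp = x[1:], x[:-1], y[:-1]
te_y_to_x = 0.5 * np.log(resid_var(xt, [xp]) / resid_var(xt, [xp, yp]))
te_x_to_y = 0.5 * np.log(resid_var(y[1:], [yp]) / resid_var(y[1:], [yp, xp]))
print(round(te_y_to_x, 3), round(te_x_to_y, 3))
```

    The asymmetry of the two estimates recovers the simulated direction of coupling.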

  1. Bivariate analysis of basal serum anti-Mullerian hormone measurements and human blastocyst development after IVF

    LENUS (Irish Health Repository)

    Sills, E Scott

    2011-12-02

    Abstract Background To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results Mean (± SD) age for patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its roles in estimating in vitro embryo morphology and the potential to advance to the blastocyst stage have not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.
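
    Confidence intervals for correlations such as the reported 0.49 (95% CI 0.31 to 0.65) are commonly obtained via the Fisher z-transformation. The sketch below approximately reproduces that interval from the reported r and n; the exact method used by the authors is not stated, so this is an assumption.

```python
import numpy as np
from scipy import stats

def pearson_ci(r, n, conf=0.95):
    """Confidence interval for a Pearson correlation via Fisher's z."""
    z = np.arctanh(r)                      # z-transform of r
    se = 1.0 / np.sqrt(n - 3)              # standard error on the z scale
    zcrit = stats.norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

lo, hi = pearson_ci(0.49, 79)              # r and n taken from the abstract
print(round(lo, 2), round(hi, 2))
```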

  2. Perceived Social Support and Academic Achievement: Cross-Lagged Panel and Bivariate Growth Curve Analyses

    Science.gov (United States)

    Mackinnon, Sean P.

    2012-01-01

    As students transition to post-secondary education, they experience considerable stress and declines in academic performance. Perceived social support is thought to improve academic achievement by reducing stress. Longitudinal designs with three or more waves are needed in this area because they permit stronger causal inferences and help…

  3. A bivariate approach to the widening of the frontal lobes in the genus Homo.

    Science.gov (United States)

    Bruner, Emiliano; Holloway, Ralph L

    2010-02-01

    Within the genus Homo, the most encephalized taxa (Neandertals and modern humans) show relatively wider frontal lobes than either Homo erectus or australopithecines. The present analysis considers whether these changes are associated with a single size-based or allometric pattern (positive allometry of the width of the anterior endocranial fossa) or with a more specific and non-allometric pattern. The relationship between hemispheric length, maximum endocranial width, and frontal width at Broca's area was investigated in extant and extinct humans. Our results do not support positive allometry for the frontal lobe's width in relation to the main endocranial diameters within modern humans (Homo sapiens). Also, the correlation between frontal width and hemispheric length is lower than the correlation between frontal width and parieto-temporal width. When compared with the australopithecines, the genus Homo could have experienced a non-allometric widening of the brain at the temporo-parietal areas, which is most evident in Neandertals. Modern humans and Neandertals also display a non-allometric widening of the anterior endocranial fossa at the Broca's cap when compared with early hominids, again more prominent in the latter group. Taking into account the contrast between the intra-specific patterns and the between-species differences, the relative widening of the anterior fossa can be interpreted as a definite evolutionary character instead of a passive consequence of brain size increase. This expansion is most likely associated with correspondent increments of the underlying neural mass, or at least with a geometrical reallocation of the frontal cortical volumes. Although different structural changes of the cranial architecture can be related to such variations, the widening of the frontal areas is nonetheless particularly interesting when some neural functions (like language or working memory, decision processing, etc.) and related fronto-parietal cortico-cortical connections are taken into account.

  4. Synchronization as Adjustment of Information Rates: Detection from Bivariate Time Series

    Czech Academy of Sciences Publication Activity Database

    Paluš, Milan; Komárek, V.; Hrnčíř, Z.; Štěrbová, K.

    2001-01-01

    Roč. 63, č. 4 (2001), art. no. 046211 ISSN 1063-651X R&D Projects: GA MZd NF6258 Institutional research plan: AV0Z1030915 Keywords : synchronization detection * EEG analysis * epilepsy Subject RIV: BA - General Mathematics Impact factor: 2.235, year: 2001

  5. Bivariate flow karyotyping in human Philadelphia-positive chronic myelocytic leukemia

    NARCIS (Netherlands)

    Arkesteijn, G.J.A.; Martens, A.C.M.; Hagenbeek, A.

    1988-01-01

    Chromosome analysis on clinical leukemia material was done by means of flow cytometry (flow karyotyping) to investigate the applicability of this technique in the detection of leukemia-associated abnormalities. Flow karyotyping was performed on blood or bone marrow samples from eight patients with

  6. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants or process industries are very important, because an early diagnosis allows the fault to be corrected without interrupting production, improving operator safety and avoiding economic losses. The objective of this work is, from the whole set of monitored variables of a nuclear power plant, to build a set, not necessarily minimal, to serve as the input variables of an artificial neural network, and thereby to monitor the largest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. The variables Power, Primary circuit flow rate, Control/safety rod position and Pressure difference in the reactor core (ΔP) were grouped because, by hypothesis, almost all of the monitored variables are related to these variables or their behaviour results from the interaction of two or more of them. The Power is related to temperature increases and decreases as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the temperature changes; and the primary circuit flow rate transports energy by removing heat from the core. Thus, labeling B = {Power, Primary circuit flow rate, Control/safety rod, ΔP}, the correlation between B and every other monitored variable (coefficient of multiple correlation) was computed; that is, using the multiple correlation, a tool of the theory of canonical correlations, it was possible to compute how well the set B predicts each variable. Where B could not approximate a variable satisfactorily, one or more variables highly correlated with it were included to improve the quality of prediction. In this work an artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables using neural networks. (author)
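The coefficient of multiple correlation used above to screen candidate variables against the set B is the square root of the R² from an ordinary least-squares regression of the candidate variable on the members of B. A hedged sketch on simulated data (not the IEA-R1 data; the four columns of B and the target are placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# stand-ins for the four base variables in B (power, flow, rod, dP)
B = rng.normal(size=(n, 4))
# a monitored variable largely explained by B, plus sensor noise
target = B @ np.array([1.0, 0.5, -0.8, 0.3]) + 0.2 * rng.normal(size=n)

def multiple_correlation(y, X):
    """R = sqrt(R^2) of the OLS regression of y on X (with intercept)."""
    Xa = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    resid = y - Xa @ beta
    return np.sqrt(1 - resid.var() / y.var())

R = multiple_correlation(target, B)
# a variable this well predicted by B can be monitored via the network
```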

  7. Attenuation of vagal modulation with aging: Univariate and bivariate analysis of HRV.

    Science.gov (United States)

    Junior, E C; Oliveira, F M

    2017-07-01

    The aging process leads to diverse changes in the human organism, including in autonomic system modulation. In this study, we calculated indices of HRV in frequency (power spectral density, PSD) and time (the impulse response (IR) method) domains, using data from healthy young and elderly volunteers (Fantasia database from Physionet). The results obtained showed that aging leads to an attenuation of vagal modulation of elderly individuals when compared to young volunteers.
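Frequency-domain HRV indices of the kind used here are typically obtained from the power spectral density of the evenly resampled RR-interval series, integrating the LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands, with the HF band reflecting vagal modulation. A sketch on a synthetic tachogram (band edges are the conventional ones, not necessarily those of this paper):

```python
import numpy as np

fs = 4.0                                  # resampling rate (Hz)
t = np.arange(0, 300, 1 / fs)             # 5 min of evenly resampled RR series
# synthetic tachogram: LF oscillation at 0.1 Hz, HF (respiratory) at 0.3 Hz
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.02 * np.sin(2 * np.pi * 0.3 * t)

x = rr - rr.mean()
f = np.fft.rfftfreq(len(x), d=1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (len(x) * fs)   # one-sided periodogram
df = f[1] - f[0]

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return 2 * psd[m].sum() * df          # factor 2: one-sided spectrum

lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)
lf_hf_ratio = lf / hf                     # attenuated vagal tone raises this ratio
```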

  8. On the matched pairs sign test using bivariate ranked set sampling ...

    African Journals Online (AJOL)

    Administrator

    2008-01-15

    control confounding in the design stage of a study, matching is a strategy that must include elements of both design and analysis”. (Hennekens and Buring 1987). For example, a two-year .... used the RSS sampling method to improve parametric and non-parametric statistical inference. For non-parametric ...

  9. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  10. Introducing Catastrophe-QSAR. Application on Modeling Molecular Mechanisms of Pyridinone Derivative-Type HIV Non-Nucleoside Reverse Transcriptase Inhibitors

    Science.gov (United States)

    Putz, Mihai V.; Lazea, Marius; Putz, Ana-Maria; Duda-Seiman, Corina

    2011-01-01

    The classical method of quantitative structure-activity relationships (QSAR) is enriched using non-linear models, as Thom's polynomials allow either uni- or bi-variate structural parameters. In this context, catastrophe QSAR algorithms are applied to the anti-HIV-1 activity of pyridinone derivatives. This requires calculation of the so-called relative statistical power and of its minimum principle in various QSAR models. This new index is constructed as a Euclidean measure for the combined ratio of the Pearson correlation to the algebraic correlation, with the normalized Student's t and Fisher tests. First and second order inter-model paths are considered for mono-variate catastrophes, whereas for bi-variate catastrophes the direct minimum path is provided, allowing the QSAR models to be tested for predictive purposes. At this stage, the max-to-min hierarchies of the tested models allow the interaction mechanism to be identified using structural parameter succession and the typical catastrophes involved. Minimized differences between these catastrophe models in the common structurally influential domains that span both the trial and tested compounds identify the “optimal molecular structural domains” and the molecules with the best output with respect to the modeled activity, which in this case is human immunodeficiency virus type 1 (HIV-1) inhibition. The best molecules are characterized by hydrophobic interactions with the HIV-1 p66 subunit protein, and they concur with those identified in other 3D-QSAR analyses. Moreover, the importance of aromatic ring stacking interactions for increasing the binding affinity of the inhibitor-reverse transcriptase ligand-substrate complex is highlighted. PMID:22272148

  11. Joint meteorological and hydrological drought model: a management tool for proactive water resources planning of semi-arid regions

    Science.gov (United States)

    Modaresi Rad, Arash; Ahmadi Ardakani, Samira; Ghahremani, Zahra; Ghahreman, Bijan; Khalili, Davar

    2016-04-01

    Conventionally, drought analysis has been limited to a single drought category. Utilization of models incorporating multiple drought categories can relax this limitation. A copula-based model is proposed, which uses meteorological and hydrological drought indices to assess drought events for ultimate management of water resources at small scales, i.e., sub-watersheds. The study area is a sub-basin of the Karkheh watershed (western Iran), with 41 years of data from 4 raingauge stations located upstream and one hydrometric station at the outlet. Prior to drought analysis, the time series of precipitation and streamflow records are investigated for possible dependency/significant trend. Considering the semi-arid nature of the study area, boxplots are utilized to graphically capture the rainy months, which are then used to evaluate the degree of correlation between streamflow and precipitation records via nonparametric correlations and bivariate tail dependence. Time scales of 3 and 12 months are considered, which are used to study vulnerability of early vegetation establishment and long-term ecosystem resilience, respectively. Among four common goodness-of-fit tests, the Cramér-von Mises test is found preferable for defining copula distribution functions through the Akaike and Bayesian information criteria and the coefficient of determination. Furthermore, the uncertainty associated with different copula models is measured using the concept of entropy. A new bivariate drought modeling approach is proposed through copulas. The proposed index, named the standardized precipitation-streamflow index (SPSI), is compared with two separate indices, the streamflow drought index (SDI) and the standardized precipitation index (SPI). According to the results, the SPSI could detect the onset of droughts dominated by precipitation, as is similarly indicated by the SPI index. It also captures the discordant case of normal-period precipitation with dry-period streamflow and vice versa. Finally, combination of severity
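One common way to build such a copula-based joint index (the paper's exact SPSI construction may differ) is to map the two marginal series to uniforms via empirical CDFs, evaluate their joint non-exceedance probability under a fitted Gaussian copula, and transform that probability back through the standard normal quantile function. A sketch on simulated anomalies:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal, kendalltau

rng = np.random.default_rng(7)
n = 300
# simulated correlated precipitation and streamflow series (placeholders)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], n)
precip, flow = z[:, 0], np.exp(z[:, 1])   # flow made positive/skewed

def ecdf(u):
    """Weibull plotting positions: ranks / (n + 1), kept off 0 and 1."""
    ranks = np.argsort(np.argsort(u)) + 1
    return ranks / (len(u) + 1)

u, v = ecdf(precip), ecdf(flow)
tau, _ = kendalltau(precip, flow)
rho = np.sin(np.pi * tau / 2)             # Gaussian-copula rho from Kendall tau

# joint non-exceedance probability C(u, v) under the Gaussian copula
pts = np.column_stack([norm.ppf(u), norm.ppf(v)])
p_joint = multivariate_normal.cdf(pts, mean=[0, 0], cov=[[1, rho], [rho, 1]])
spsi = norm.ppf(p_joint)                  # standardized joint drought index
```

Because the index is increasing in both marginals, it flags periods that are dry in precipitation and streamflow jointly, while discordant periods fall near zero.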

  12. A geostatical model for USA uranium deposits

    International Nuclear Information System (INIS)

    Drew, M.W.

    1979-01-01

    Evidence exists which suggests that the frequency distributions of both grade and size of metal deposits may be well approximated by lognormal distribution functions. Using data on presently viable deposits and a simplified function which links production cost to deposit grade and size, a bivariate lognormal deposit grade/size distribution may be calibrated for a given geological environment. Exploration is introduced by assuming that the proportion discovered of the potential uranium reserve available at or below a given production cost can be represented by a function of the average deposit size and the limit exploration expenditure. As output, the model derives estimates of total reserves linked to maximum production costs and to exploration expenditure, where the latter may be expressed either as expenditure per lb of mineral discovered or as a given percentage of operating profit. Reserve/price functions have been derived for the USA based on USAEC data. Tentative conclusions which may be drawn from the results are: (1) Assuming that a similar proportion of profits continues to be allocated to exploration in the future, the USA should be able to meet its own national demand for uranium up to the end of the century (say 2 M tons U) at prices up to US$35/lb U3O8 (1.1.75 $ values). (2) If, instead of all exploration being funded from a fixed maximum proportion of mining company profits, consumers were to fund additional exploration separately, then it is possible that the total unit cost of uranium to consumers would thereby be reduced. It should be stressed that these conclusions are tentative and are only as reliable as the input data and assumptions of the model. In particular, no account is taken of commercial or political forces which could artificially restrict supplies or raise prices. The model should be regarded as a first attempt and is offered as a basis for discussion leading to further development. (author)
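The core of such a model can be sketched by sampling deposits from a bivariate lognormal grade/size distribution and accumulating the contained metal in deposits whose unit production cost falls below a price ceiling. All parameters and the cost function below are illustrative assumptions, not the calibrated USAEC figures:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# correlated ln(grade, %) and ln(tonnage): richer deposits tend to be smaller
mean = [np.log(0.1), np.log(2e5)]
cov = [[0.5, -0.2], [-0.2, 1.5]]
ln_gs = rng.multivariate_normal(mean, cov, n)
grade, tons = np.exp(ln_gs[:, 0]), np.exp(ln_gs[:, 1])

def unit_cost(grade, tons, c0=2.0):
    """Hypothetical $/lb cost: falls with grade and with economies of scale."""
    return c0 / (grade * tons ** 0.1)

def reserves_below(price):
    viable = unit_cost(grade, tons) <= price
    return (grade[viable] / 100 * tons[viable]).sum()   # tons of contained metal

# reserves available at increasing price ceilings trace a reserve/price curve
curve = {p: reserves_below(p) for p in (10, 20, 35)}
```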

  13. Modelling of strongly coupled particle growth and aggregation

    International Nuclear Information System (INIS)

    Gruy, F; Touboul, E

    2013-01-01

    The mathematical modelling of the dynamics of particle suspension is based on the population balance equation (PBE). PBE is an integro-differential equation for the population density that is a function of time t, space coordinates and internal parameters. Usually, the particle is characterized by a unique parameter, e.g. the matter volume v. PBE consists of several terms: for instance, the growth rate and the aggregation rate. So, the growth rate is a function of v and t. In classical modelling, the growth and the aggregation are independently considered, i.e. they are not coupled. However, current applications occur where the growth and the aggregation are coupled, i.e. the change of the particle volume with time depends on its initial value v0, which in turn is related to an aggregation event. As a consequence, the dynamics of the suspension does not obey the classical Von Smoluchowski equation. This paper revisits this problem by proposing a new modelling approach using a bivariate PBE (with two internal variables: v and v0) and by solving the PBE by means of a numerical method and Monte Carlo simulations. This is applied to a physicochemical system with a simple growth law and a constant aggregation kernel.
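The coupling can be illustrated with a toy Monte Carlo in which each particle carries both its current volume v and the volume v0 it had at its last aggregation event, with the growth rate driven by v0. The growth law, the constant kernel, the rate constants, and the convention that v0 resets to the merged volume are all simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, k_growth, k_agg = 0.01, 0.5, 0.2
v = np.ones(200)          # current volumes
v0 = v.copy()             # volume at the last aggregation event

for _ in range(2000):
    v += k_growth * v0 * dt                 # growth coupled to v0, not to v
    # constant-kernel aggregation: every pair merges with the same probability
    n = len(v)
    if n > 1 and rng.random() < k_agg * n * (n - 1) / 2 * dt:
        i, j = rng.choice(n, size=2, replace=False)
        merged = v[i] + v[j]
        v = np.delete(v, [i, j]); v0 = np.delete(v0, [i, j])
        v = np.append(v, merged)
        v0 = np.append(v0, merged)          # v0 resets at the aggregation event

mean_volume, n_particles = v.mean(), len(v)
```

Because each merger rewrites v0, subsequent growth differs from the uncoupled case, which is why the population no longer follows the classical Smoluchowski dynamics.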

  14. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  15. Care-Seeking Patterns and Direct Economic Burden of Injuries in Bangladesh.

    Science.gov (United States)

    Alfonso, Natalia Y; Alonge, Olakunle; Hoque, Dewan Md Emdadul; Baset, Kamran Ul; Hyder, Adnan A; Bishai, David

    2017-04-29

    This study provides a comprehensive review of the care-seeking patterns and direct economic burden of injuries from the victims' perspective in rural Bangladesh, using a 2013 household survey covering 1.17 million people. Descriptive statistics and bivariate analyses were used to derive rates and test the association between variables. An analytic model was used to estimate total injury out-of-pocket (OOP) payments, and a multivariate probit regression model assessed the relationship between financial distress and injury type. Results show non-fatal injuries occur to 1 in 5 people in our sample per year; with an average household size of 4.5 in Bangladesh, that is roughly one injury per household per year. Most non-fatally injured patients sought healthcare from drug sellers. Less than half of those fatally injured sought healthcare, and half of those who did were hospitalized. Average OOP payments varied significantly (range: $8-$830) by injury type and outcome (fatal vs. non-fatal). Total injury OOP expenditure was $355,795 and $5000 for non-fatal and fatal injuries, respectively, per 100,000 people. The majority of household heads with injuries reported financial distress. This study can inform injury prevention advocates on disparities in healthcare usage, OOP costs and financial distress. Reallocation of resources to the most at-risk populations can accelerate reduction of preventable injuries and prevent injury-related catastrophic payments and impoverishment.

  16. PDEAR model prediction of Protea species in years 2070-2100

    Science.gov (United States)

    Guo, Danni; Guo, Renkuan; Midgley, Guy F.; Rebelo, A. G.

    2009-10-01

    Global warming and climate change are altering the environment and therefore changing the distribution and behaviour of plant species. Plant species often move and change their distributions as they find their original habitats no longer suitable to their needs. It is therefore important to establish a statistical model that captures the movement and patterns of endangered species in order to effectively manage environmental protection under the inevitable biodiversity changes that are taking place. In this paper, we focus on the population category of rare Proteas with an estimated population size of 1 to 10 per sample site, which is very small. We used the partial differential equation associated regression (PDEAR) model, which merges partial differential equation theory, (statistical) linear model theory and random fuzzy variable theory into an efficient small-sample oriented model, for the spatial pattern change analysis. The regression component in a PDEAR model is in nature a special random fuzzy multivariate regression model. We developed a bivariate model for investigating the impacts of rainfall and temperature on the Protea species, in the average sense, in the population size of 1 to 10, in the Cape Floristic Region, South Africa, from 1992 to 2002. Under the same average biodiversity structure assumptions, we explore the future spatial change patterns of Protea species in the population size of 1 to 10 with future (average) predicted rainfall and temperature. The spatial distributions and patterns will clearly help us to explore global climate change impacts on endangered species.

  17. In vitro burn model illustrating heat conduction patterns using compressed thermal papers.

    Science.gov (United States)

    Lee, Jun Yong; Jung, Sung-No; Kwon, Ho

    2015-01-01

    To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. This was composed of tightly compressed thermal papers with compression frames. Heat flow through the model left a trace by changing the color of the thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. Through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and on our burn model using a heated brass comb, and by comparing the burn wound with the heat conduction trace, we validated our model. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93) supported the validity of the model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.
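An "estimated depth of heat conduction" of this kind can be sketched with the classic semi-infinite-solid solution of Fourier's equation, T(x,t) = T_i + (T_s - T_i) * erfc(x / (2*sqrt(alpha*t))): the depth at which a threshold temperature is reached grows as sqrt(t). The material constants below are rough placeholder values, not the measured thermal-paper properties:

```python
import math

alpha = 1.0e-7           # thermal diffusivity (m^2/s), placeholder value
T_i, T_s = 25.0, 100.0   # initial and surface temperatures (deg C)
T_thr = 44.0             # threshold temperature of interest (deg C)

def depth_reaching(T_thr, t):
    """Depth x where T(x, t) = T_thr, solved by bisection on erfc."""
    theta = (T_thr - T_i) / (T_s - T_i)        # normalised threshold
    lo, hi = 0.0, 1.0                          # bracket in metres
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / (2 * math.sqrt(alpha * t))) > theta:
            lo = mid                           # still hotter than threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

depths = {t: depth_reaching(T_thr, t) for t in (5, 10, 15, 20, 30)}
# depths grow in proportion to sqrt(t): depths[20] is ~2x depths[5]
```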

  18. Do objective neighbourhood characteristics relate to residents' preferences for certain sports locations? A cross-sectional study using a discrete choice modelling approach.

    Science.gov (United States)

    Deelen, Ineke; Jansen, Marijke; Dogterom, Nico J; Kamphuis, Carlijn B M; Ettema, Dick

    2017-12-11

    The number of sports facilities, sports clubs, or city parks in a residential neighbourhood may affect the likelihood that people participate in sports and their preferences for a certain sports location. This study aimed to assess whether objective physical and socio-spatial neighbourhood characteristics relate to sports participation and preferences for sports locations. Data from Dutch adults (N = 1201) on sports participation, their most-used sports location, and socio-demographic characteristics were collected using an online survey. Objective land-use data and the number of sports facilities were gathered for each participant using a 2000-m buffer around their home locations, whereas socio-spatial neighbourhood characteristics (i.e., density, socio-economic status, and safety) were determined at the neighbourhood level. A discrete choice-modelling framework (multinomial probit model) was used to model the associations between neighbourhood characteristics and sports participation and location. Higher proportions of green space, blue space, and the number of sports facilities were positively associated with sports participation in public space, at sports clubs, and at other sports facilities. Higher degrees of urbanization were negatively associated with sports participation at public spaces, sports clubs, and other sports facilities. Those with more green space, blue space or sports facilities in their residential neighbourhood were more likely to participate in sports, but these factors did not affect their preference for a certain sports location. Longitudinal study designs are necessary to assess causality: do active people choose to live in sports-facilitating neighbourhoods, or do neighbourhood characteristics affect sports participation?

  19. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
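As a sketch of the mixture-fitting idea, here is a minimal EM fit of a two-component bivariate Gaussian mixture to simulated positional errors. Note the paper fits mixtures of bivariate t distributions, whose heavier tails accommodate the outliers; the Gaussian version below is a simplified stand-in, and the simulated clusters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated 2-D positional errors (metres): a tight mode plus a displaced one
E = np.vstack([
    rng.multivariate_normal([0, 0], [[100, 0], [0, 100]], 300),
    rng.multivariate_normal([500, 500], [[400, 100], [100, 400]], 200),
])

K, n = 2, len(E)
w = np.full(K, 1 / K)
mu = np.array([E.min(axis=0), E.max(axis=0)], dtype=float)  # crude init
cov = np.array([np.cov(E.T)] * K)

def pdf(x, m, c):
    """Bivariate normal density, evaluated row-wise."""
    d = x - m
    q = np.einsum("ni,ij,nj->n", d, np.linalg.inv(c), d)
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(c)))

for _ in range(100):
    # E-step: posterior responsibility of each component for each error vector
    r = np.column_stack([w[k] * pdf(E, mu[k], cov[k]) for k in range(K)])
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and covariances
    nk = r.sum(axis=0)
    w = nk / n
    mu = (r.T @ E) / nk[:, None]
    for k in range(K):
        d = E - mu[k]
        cov[k] = (r[:, k, None] * d).T @ d / nk[k] + 1e-6 * np.eye(2)
```

Swapping the Gaussian component density for a multivariate t (and adding the degrees-of-freedom updates) yields the heavier-tailed mixtures the authors found necessary.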

  20. The effects of preoperative cardiology consultation prior to elective abdominal aortic aneurysm repair on patient morbidity.

    Science.gov (United States)

    Boniakowski, Anna E; Davis, Frank M; Phillips, Amanda R; Robinson, Adina B; Coleman, Dawn M; Henke, Peter K

    2017-08-01

    Objectives The relationship between preoperative medical consultations and postoperative complications has not been extensively studied. Thus, we investigated the impact of preoperative consultation on postoperative morbidity following elective abdominal aortic aneurysm repair. Methods A retrospective review was conducted on 469 patients (mean age 72 years, 20% female) who underwent elective abdominal aortic aneurysm repair from June 2007 to July 2014. Data elements included detailed medical history, preoperative cardiology consultation, and postoperative complications. Primary outcomes included 30-day morbidity, consult-specific morbidity, and mortality. A bivariate probit regression model accounting for the endogeneity of binary preoperative medical consult and patient variability was estimated with a maximum likelihood function. Results Eighty patients had preoperative medical consults (85% cardiology); thus, our analysis focuses on the effect of cardiac-related preoperative consults. Hyperlipidemia, increased aneurysm size, and increased revised cardiac risk index increased the likelihood of referral to cardiology preoperatively. Surgery type (endovascular versus open repair) was not significant in the development of postoperative complications when controlling for revised cardiac risk index (p = 0.295). After controlling for patient comorbidities, there was no difference in postoperative cardiac-related complications between patients who did and did not undergo cardiology consultation preoperatively (p = 0.386). Conclusions When controlling for patient disease severity using revised cardiac risk index risk stratification, preoperative cardiology consultation is not associated with postoperative cardiac morbidity.
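For reference, the bivariate probit likelihood underlying models like this one combines two probit equations whose latent errors are correlated with coefficient rho; each observation contributes Phi2(q1*x'b1, q2*x'b2, q1*q2*rho) with qi = 2yi - 1, where Phi2 is the bivariate standard normal CDF. A hedged sketch on simulated data (not the study's data; in practice one would maximise this log-likelihood with an optimiser or use a package such as Stata's biprobit):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, rho_true = 2000, 0.5
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
b1, b2 = np.array([0.3, 0.8]), np.array([-0.2, 0.5])
# correlated latent errors induce the endogeneity the model accounts for
e = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], n)
y1 = (X @ b1 + e[:, 0] > 0).astype(int)
y2 = (X @ b2 + e[:, 1] > 0).astype(int)

def loglik(beta1, beta2, rho):
    """Bivariate probit log-likelihood via the sign identity for Phi2."""
    a, b = X @ beta1, X @ beta2
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1
    ll = 0.0
    for s1 in (-1, 1):            # evaluate the four (y1, y2) cells in batches
        for s2 in (-1, 1):
            m = (q1 == s1) & (q2 == s2)
            r = s1 * s2 * rho
            p = multivariate_normal.cdf(
                np.column_stack([s1 * a[m], s2 * b[m]]),
                mean=[0, 0], cov=[[1, r], [r, 1]])
            ll += np.log(np.clip(p, 1e-300, 1)).sum()
    return ll

# allowing the error correlation raises the likelihood on this data
ll_rho, ll_indep = loglik(b1, b2, rho_true), loglik(b1, b2, 0.0)
```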

  1. Women's autonomy and reproductive health care utilisation: empirical evidence from Tajikistan.

    Science.gov (United States)

    Kamiya, Yusuke

    2011-10-01

    Women's autonomy is widely considered to be a key to improving maternal health in developing countries, yet there is no consistent empirical evidence to support this claim. This paper examines whether and how women's autonomy within the household affects the use of reproductive health care, using household survey data from Tajikistan. Estimation is performed with a bivariate probit model whereby a woman's use of health services and her level of autonomy are recursively and simultaneously determined. The data come from a sample of women aged 15-49 in the Tajikistan Living Standard Measurement Survey 2007. Women's autonomy, as measured by women's decision-making on household financial matters, increases the likelihood that a woman receives antenatal and delivery care, whilst it has a negative effect on the probability of attending four or more antenatal consultations. The hypothesis that women's autonomy and reproductive health care utilisation are independently determined is rejected for most of the estimation specifications, indicating the importance of taking into account the endogenous nature of women's autonomy when assessing its effect on health care use. The empirical results reconfirm the assertion that women's status within the household is closely linked to reproductive health care utilisation in developing countries. Policymakers therefore need not only to implement direct health interventions but also to focus on broader social policies which address women's empowerment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Association between clean indoor air laws and voluntary smokefree rules in homes and cars.

    Science.gov (United States)

    Cheng, Kai-Wen; Okechukwu, Cassandra A; McMillen, Robert; Glantz, Stanton A

    2015-03-01

    This study examines the influence that smokefree workplaces, restaurants and bars have on the adoption of smokefree rules in homes and cars, and whether there is an association between adopting smokefree rules in homes and cars. Bivariate probit models were used to jointly estimate the likelihood of living in a smokefree home and having a smokefree car as a function of law coverage and other variables. Household data were obtained from the nationally representative Social Climate Survey of Tobacco Control 2001, 2002 and 2004-2009; clean indoor air law data were from the American Nonsmokers' Rights Foundation Tobacco Control Laws Database. 'Full coverage' and 'partial coverage' smokefree legislation are associated with an increased likelihood of having voluntary home and car smokefree rules compared with 'no coverage'. The association between 'full coverage' and smokefree rules in homes and cars is 5% and 4%, respectively, and the association between 'partial coverage' and smokefree rules in homes and cars is 3% and 4%, respectively. There is a positive association between the adoption of smokefree rules in homes and cars. Clean indoor air laws provide the additional benefit of encouraging voluntary adoption of smokefree rules in homes and cars. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Global Obesity Study on Drivers for Weight Reduction Strategies

    Directory of Open Access Journals (Sweden)

    Carola Grebitus

    2015-01-01

    Objective: To assess factors determining how individuals react to the threats of overweight and obesity, and to examine the interdependencies between weight-reducing strategies. Methods: Cross-country survey covering 19 countries and 13,155 interviews. Data were analysed using a bivariate probit model, which allows two weight-reducing strategies to be analysed simultaneously. Results: The weight-reducing strategies chosen are not independent of each other. Findings also reveal that different strategies are chosen by different population segments. Women are more likely to change their dietary patterns and less likely to become physically active after surpassing a weight threshold. In addition, the probability of a dietary change in case of overweight differs considerably between countries. The study also reveals that attitudes are an important factor in the choice of strategy. Conclusions: It is vital for public health policies to understand the determinants of citizens' engagement in weight-reduction strategies once a certain threshold is reached. The results can thus support the design of public health campaigns and programmes that aim to change community or national health behaviour trends while taking into account, for example, national differences.

  4. The effect of poverty and caregiver education on perceived need and access to health services among children with special health care needs.

    Science.gov (United States)

    Porterfield, Shirley L; McBride, Timothy D

    2007-02-01

    We examined the association between several variables and the use of specialist physician services, developmental therapies, and prescription medications among children with special health care needs (N = 38,866). We used a bivariate probit model to estimate whether a given child needed specialized services and whether that child accessed those services; we controlled for activity limitations and severity of special needs. Variables included family income, mother's (or other caregiver's) educational level, health insurance coverage, and perceived need for specialized services. We used data from the 2001 National Survey of Children with Special Health Care Needs. Lower-income and less-educated parents were less likely than higher-income and more-educated parents to say their children with special needs needed specialized health services. The probability of accessing specialized health services, when needed, increased with both higher family income and insurance coverage. Children with special health care needs have less access to health services because their parents do not recognize the need for those services. An intervention in the form of information at the family level may be an appropriate policy response.

  5. Awareness and Adoption of Soil and Water Conservation Technologies in a Developing Country: A Case of Nabajuzi Watershed in Central Uganda

    Science.gov (United States)

    Kagoya, Sarah; Paudel, Krishna P.; Daniel, Nadhomi L.

    2018-02-01

    Soil and water conservation technologies have been widely available in most parts of Uganda. However, not only has the adoption rate been low, but many farmers also seem unaware of these technologies. This study aims to identify the factors that influence awareness and adoption of soil and water conservation technologies in the Nabajuzi watershed in central Uganda. A bivariate probit model was used to examine farmers' awareness and adoption of soil and water conservation technologies in the watershed. We use data collected from interviews with 400 households located in the watershed to understand the factors affecting awareness and adoption of these technologies in the study area. Findings indicate that the likelihood of being aware of and adopting the technologies is explained by the age of the household head, being a tenant, and the number of years of access to farmland. To increase awareness and adoption of these technologies in Uganda, policymakers may expedite the process of land titling, as farmers who feel secure about their landholding may adopt these technologies to increase profitability and productivity in the long run. Incentive payments to farmers residing in vulnerable regions to adopt these technologies may help alleviate soil deterioration problems in the affected areas.

  6. Economic analysis of the intangible impacts of informal care for people with Alzheimer's disease and other mental disorders.

    Science.gov (United States)

    Gervès, Chloé; Bellanger, Martine Marie; Ankri, Joël

    2013-01-01

    Valuation of the intangible impacts of informal care remains a great challenge for economic evaluation, especially in the framework of care recipients with cognitive impairment. Our main objective was to explore the influence of the intangible impacts of caring on both informal caregivers' ability to estimate their willingness to pay (WTP) to be replaced and their WTP value. We mapped characteristics that influence the ability or inability to estimate WTP by using a multiple correspondence analysis. We ran a bivariate probit model with sample selection to further analyse the caregivers' WTP value conditional on their ability to estimate their WTP. A distinction exists between the opportunity costs of the caring dimension and the intangible costs and benefits of caring. Informal caregivers' ability to estimate WTP is negatively influenced by intangible benefits from caring, and caregivers' WTP value is negatively associated with positive intangible impacts of informal care. Informal caregivers' WTP and their ability to estimate WTP are thus both influenced by the intangible burden and benefit of caring. These results call into question the relevance of a hypothetical generalized financial compensation system as the optimal way to motivate caregivers to continue providing care. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Determinants of induced abortion: an analysis of individual, household and contextual factors in Rajasthan, India.

    Science.gov (United States)

    Elul, Batya

    2011-01-01

    In the developing world, little is known about the risk and precipitating factors for abortion, owing to a dearth of community-based surveys. Most analyses of the determinants of induced abortion consider only a small set of household and individual socio-demographic factors and treat abortion as an isolated outcome, ignoring its relationship with prior reproductive health behaviours and experiences. In this paper, data from a cross-sectional survey of abortion knowledge, attitudes and practices among 2571 currently married women of reproductive age in Rajasthan, India, were used to examine contextual-, household- and individual-level determinants of abortion. Bivariate probit models, which jointly determine the probability of pregnancy and the conditional probability of abortion, were used to reflect the probability of abortion as the result of interrelated and sequential events. Increased socioeconomic status and life-cycle factors were associated with both the probability of pregnancy and the conditional likelihood of abortion. Women who reported having personal networks were also more likely to terminate pregnancies, particularly if their network members purportedly had abortion experience. Community knowledge of sex-selective abortion also exerted a significant positive effect on the propensity to terminate a pregnancy. For rural women only, community beliefs regarding spousal consent requirements before abortion were also significantly associated with abortion.

  8. Part-time sick leave as a treatment method for individuals with musculoskeletal disorders.

    Science.gov (United States)

    Andrén, Daniela; Svensson, Mikael

    2012-09-01

    There is increasing evidence that staying active is an important part of the recovery process for individuals on sick leave due to musculoskeletal disorders (MSDs). It has been suggested that using part-time rather than full-time sick leave will enhance the possibility of a full return to the workforce, and several countries actively favor this policy. The aim of this paper is to examine whether it is beneficial for individuals on sick leave due to MSDs to be on part-time rather than full-time sick leave. A sample of 1,170 employees from the RFV-LS (register) database of the Social Insurance Agency of Sweden is used. The effect of being on part-time sick leave compared with full-time sick leave is estimated for the probability of returning to work with full recovery of lost work capacity. A two-stage recursive bivariate probit model is used to deal with the endogeneity problem. The results indicate that employees assigned to part-time sick leave recover to full work capacity with a higher probability than those assigned to full-time sick leave; the average treatment effect of part-time sick leave is 25 percentage points. Considering that part-time sick leave may also be less expensive than full-time sick leave, this implies efficiency improvements from assigning individuals, when possible, to part-time sick leave.

  9. Preferences for groundnut products among urban residents in Ghana.

    Science.gov (United States)

    Meng, Ting; Florkowski, Wojciech J; Klepacka, Anna M; Sarpong, Daniel B; Resurreccion, Anna V A; Chinnan, Manjeet S; Ekielski, Adam

    2018-01-01

    The present study identifies factors influencing preferences for common groundnut products using information about product perceptions collected from residents of Ghana's cities in 2011. In Ghana, domestically produced groundnuts, processed into a variety of groundnut products, are a vital source of protein and other nutrients. Response summaries provide insights into the eating frequency of various products, whereas a bivariate ordered probit model identifies factors influencing preferences for groundnut paste and roasted groundnuts. Attributes such as taste, protein content and healthfulness are important for roasted groundnuts, whereas aroma, taste and protein content are associated with a preference for groundnut paste. Large households prefer paste, whereas the less educated and those from households with children prefer roasted groundnuts. Adding a child (4-12 years old) increases the probability of 'liking very much' roasted groundnuts, while an additional adult at home changes that probability for groundnut paste. College-educated consumers prefer groundnut paste less than those with less education. Consumers from Tamale and Takoradi prefer roasted groundnuts and groundnut paste more than Accra households do. Taste and protein content are the attributes of groundnut paste and roasted groundnuts preferred by consumers. Location is a significant factor shaping preferences for roasted groundnuts and groundnut paste. © 2017 Society of Chemical Industry.

  10. Evaluation of field development plans using 3-D reservoir modelling

    Energy Technology Data Exchange (ETDEWEB)

    Seifert, D.; Lewis, J.J.M. [Heriot-Watt Univ., Edinburgh (United Kingdom); Newbery, J.D.H. [Conoco, UK Ltd., Aberdeen (United Kingdom)] [and others]

    1997-08-01

    Three-dimensional reservoir modelling has become an accepted tool in reservoir description and is used for various purposes, such as reservoir performance prediction or the integration and visualisation of data. In this case study, a small Northern North Sea turbiditic reservoir was to be developed with a line-drive strategy utilising a series of horizontal producer and injector pairs, oriented north-south. This development plan was to be evaluated, and the expected outcome of the wells was to be assessed and risked. Detailed analyses of core, well log and analogue data have led to the development of two geological "end member" scenarios. Both scenarios have been stochastically modelled using the Sequential Indicator Simulation method. The resulting equiprobable realisations have been subjected to detailed statistical well-placement optimisation techniques. Based upon bivariate statistical evaluation of more than 1000 numerical well trajectories for each of the two scenarios, it was found that the wells' inclinations and lengths had a great impact on their success, whereas the azimuth was found to have only a minor impact. After integration of the above results, the actual well paths were redesigned to meet external drilling constraints, resulting in substantial reductions in drilling time and costs.

  11. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    Science.gov (United States)

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
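
    The hurdle idea, a binary model for the zero/non-zero split plus a conditional model for positive costs, can be illustrated outside the paper's full Bayesian specification. A minimal frequentist two-part sketch on synthetic data (all coefficients illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                                   # a covariate, e.g. baseline severity
positive = rng.random(n) < expit(0.3 + 0.8 * z)          # hurdle: does the patient incur any cost?
cost = np.where(positive,
                np.exp(6.0 + 0.5 * z + rng.normal(scale=0.7, size=n)),
                0.0)                                     # structural zeros otherwise

# Part 1: probability of non-zero cost (logit, fitted by maximum likelihood)
def nll(theta):
    p = expit(theta[0] + theta[1] * z)
    return -np.sum(np.where(positive, np.log(p), np.log1p(-p)))

a_hat, b_hat = minimize(nll, [0.0, 0.0], method="BFGS").x

# Part 2: conditional model for the positive costs only (OLS on log cost)
zp, yp = z[positive], np.log(cost[positive])
X = np.column_stack([np.ones(zp.size), zp])
g0_hat, g1_hat = np.linalg.lstsq(X, yp, rcond=None)[0]
```

    Both parts recover their generating coefficients; the Bayesian version in the abstract replaces these two maximum-likelihood fits with priors and a joint posterior, and adds a conditional model for effectiveness given costs.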

  12. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  13. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  14. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    Science.gov (United States)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian, for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs), depending on which phase-space variables are approximated by Gaussian functions. Three sample problems are studied: a univariate double-well potential, a bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas. The LSQKD model is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD model predicts well the stationary PDF, for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both of these problems the least-squares approximation is made on all phase-space variables, resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable, leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  16. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    Science.gov (United States)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from the hydrologic model as well as from the epistemic and aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is addressed for the historical period 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods, Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16-degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that applying the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.
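
    The copula-based transfer idea can be illustrated outside the full Bayesian framework: map simulated and observed flows to normal scores via their empirical CDFs, estimate the Gaussian-copula correlation, and read off a conditional quantile of the observations given each simulation. A minimal sketch on synthetic flows (the Gaussian copula and all numbers are illustrative assumptions, not the authors' exact method):

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(7)
n = 3000
obs = rng.gamma(shape=2.0, scale=50.0, size=n)        # "observed" historical flows
sim = 1.4 * obs + rng.normal(scale=20.0, size=n)      # biased, noisy "simulated" flows

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    u = rankdata(x) / (len(x) + 1.0)
    return norm.ppf(u)

zo, zs = normal_scores(obs), normal_scores(sim)
r = np.corrcoef(zo, zs)[0, 1]                         # Gaussian-copula dependence

# Under the Gaussian copula, z_obs | z_sim ~ N(r * z_sim, 1 - r**2);
# take the conditional median and map it back through the observed quantiles.
u_cond = norm.cdf(r * zs)
post = np.quantile(obs, u_cond)                       # post-processed flows
```

    In-sample, the post-processed series tracks the observations more closely than the raw simulations, which is the point of building the transfer function on the historical joint distribution; a full treatment would propagate the whole conditional distribution rather than a single quantile.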

  17. On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations

    Directory of Open Access Journals (Sweden)

    Yongning Wang

    2013-04-01

    This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of the autocorrelation matrices of the cross-product vector of standardized residuals. This differs from the traditional approach, which employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated, and the results suggest a procedure of using Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.

  18. Model of Cholera Forecasting Using Artificial Neural Network in Chabahar City, Iran

    Directory of Open Access Journals (Sweden)

    Zahra Pezeshki

    2016-02-01

    Background: Cholera remains an endemic health issue in Iran despite a decrease in incidence. Since forecasting epidemic diseases enables appropriate preventive action against disease spread, different forecasting methods, including artificial neural networks, have been developed to study the parameters involved in the incidence and spread of epidemic diseases such as cholera. Objectives: In this study, cholera in the rural area of Chabahar, Iran was investigated to achieve a proper forecasting model. Materials and Methods: Data on cholera were gathered from 465 villages, of which 104 reported cholera during the ten-year study period. Logistic regression modeling and bivariate correlation analysis were used to determine risk factors and arrive at a possible predictive model. A one-hidden-layer perceptron neural network with a backpropagation training algorithm and the sigmoid activation function was trained and tested on the two groups of infected and non-infected villages after preprocessing. The ROC diagram was used to determine the validity of the predictions. The study variables included climate conditions and geographical parameters. Results: After determining the significant variables for cholera incidence, the described artificial neural network model was capable of forecasting cholera events among villages in the test group with accuracy up to 80%. The highest accuracy was achieved when the model was trained with variables that were significant in the statistical analysis, indicating that the two methods confirm each other's results. Conclusions: Artificial neural networks can assist in forecasting cholera so that protective measures can be adopted. For more accurate prediction, comprehensive information is required, including data on hygienic, social and demographic parameters.
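
    A one-hidden-layer perceptron with sigmoid activations trained by plain batch backpropagation, the architecture the abstract describes, fits in a few lines of NumPy. The data below are synthetic stand-ins for village-level predictors (the actual study used climate and geographical variables):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic villages: 4 predictors, a nonlinear "outbreak" rule as the label
n, d, h = 400, 4, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)

W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)   # input -> hidden
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 1.0
for _ in range(3000):                      # plain batch backpropagation
    A1 = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(A1 @ W2 + b2).ravel()      # predicted P(outbreak) per village
    g = ((p - y) / n)[:, None]             # d(cross-entropy)/d(output logit)
    gh = (g @ W2.T) * A1 * (1.0 - A1)      # backprop through the hidden sigmoids
    W2 -= lr * (A1.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

train_acc = np.mean((p > 0.5) == (y > 0.5))
```

    A real application would split villages into training and test sets and score the held-out predictions with an ROC curve, as the study does, rather than reporting training accuracy.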

  19. A unification of models for meta-analysis of diagnostic accuracy studies without a gold standard.

    Science.gov (United States)

    Liu, Yulun; Chen, Yong; Chu, Haitao

    2015-06-01

    Several statistical methods for meta-analysis of diagnostic accuracy studies have been discussed in the presence of a gold standard. In practice, however, the selected reference test may be imperfect due to measurement error, non-existence, invasive nature, or the expensive cost of a gold standard. It has been suggested that treating an imperfect reference test as a gold standard can lead to substantial bias in the estimation of diagnostic test accuracy. Recently, two models have been proposed to account for an imperfect reference test, namely, a multivariate generalized linear mixed model (MGLMM) and a hierarchical summary receiver operating characteristic (HSROC) model. Both models are very flexible in accounting for heterogeneity in the accuracies of tests across studies as well as the dependence between tests. In this article, we show that these two models, although differently formulated, are closely related and are equivalent in the absence of study-level covariates. Furthermore, we provide the exact relations between the parameters of these two models and the assumptions under which the two models can be reduced to equivalent submodels. On the other hand, we show that some submodels of the MGLMM have no corresponding equivalent submodels of the HSROC model, and vice versa. With three real examples, we illustrate the cases in which fitting the MGLMM and HSROC models leads to equivalent submodels and hence identical inference, and the cases in which the inferences from the two models differ slightly. Our results generalize the important relations between the bivariate generalized linear mixed model and the HSROC model when the reference test is a gold standard. © 2014, The International Biometric Society.

  20. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second-order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., to establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction-of-dependence methodology to a multiple linear regression setting by analyzing distributional properties of the residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon the direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
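
    The residual-skewness idea is easy to demonstrate in the bivariate case: when x causes y, x is skewed, and the error is normal, the residuals of the correctly specified regression stay (nearly) symmetric while the residuals of the reverse regression inherit skewness from x. A minimal sketch on synthetic data (the paper's formal inference uses the D'Agostino, skewness-difference and bootstrap procedures rather than this raw comparison):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5)
n = 20000
x = rng.exponential(size=n)            # skewed "cause"
y = 0.8 * x + rng.normal(size=n)       # normally distributed error: x -> y

def residuals(a, b):
    """Residuals from the OLS regression of a on b (with intercept)."""
    X = np.column_stack([np.ones(n), b])
    beta, *_ = np.linalg.lstsq(X, a, rcond=None)
    return a - X @ beta

s_xy = abs(skew(residuals(y, x)))      # correctly specified model: y regressed on x
s_yx = abs(skew(residuals(x, y)))      # mis-specified reverse model: x regressed on y
# The correct direction leaves nearly symmetric residuals; the reverse does not.
```

    The same third-moment comparison, applied to residuals of competing multiple regression models, is the extension the article develops.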

  1. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to physical processes may rest on simplifications too strong to describe the dynamics of the climate system appropriately.
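
    The distinction between Pearson and partial correlations that drives the GGM construction can be sketched on a three-node chain: the marginal correlation links the two end nodes, but the partial correlation, read off the inverse correlation matrix, correctly flags that link as indirect. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000
# A chain of "grid points": A drives B, B drives C; A and C are only indirectly linked.
A = rng.normal(size=n)
B = 0.9 * A + rng.normal(scale=0.5, size=n)
C = 0.9 * B + rng.normal(scale=0.5, size=n)
data = np.column_stack([A, B, C])

corr = np.corrcoef(data, rowvar=False)   # marginal Pearson correlations
prec = np.linalg.inv(corr)               # precision (inverse correlation) matrix
d = np.sqrt(np.diag(prec))
pcorr = -prec / np.outer(d, d)           # partial correlations: -p_ij / sqrt(p_ii p_jj)
np.fill_diagonal(pcorr, 1.0)
```

    `corr[0, 2]` is large even though A and C share no direct link, while `pcorr[0, 2]` is near zero; a GGM network built from `pcorr` would therefore drop the spurious A-C edge that a Pearson-based network keeps.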

  2. A Gaussian graphical model approach to climate networks

    International Nuclear Information System (INIS)

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-01-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to physical processes may rest on simplifications too strong to describe the dynamics of the climate system appropriately.

  3. Technical Note: Assessing predictive capacity and conditional independence of landslide predisposing factors for shallow landslide susceptibility models

    Directory of Open Access Journals (Sweden)

    S. Pereira

    2012-04-01

    Full Text Available The aim of this study is to identify the combination of landslide predisposing factors that, within a bivariate statistical model, best predicts landslide susceptibility. The best model is one that simultaneously performs well in terms of suitability and predictive power and has been developed using variables that are conditionally independent. The study area is the Santa Marta de Penaguião council (70 km²), located in northern Portugal.

    In order to identify the best combination of landslide predisposing factors, all possible combinations of up to seven predisposing factors were tested, which resulted in 120 predictions that were assessed against a landslide inventory containing 767 shallow translational slides. The best landslide susceptibility model was selected according to the model's degree of fit and on the basis of a conditional independence criterion. The best model was developed with only three landslide predisposing factors (slope angle, inverse wetness index, and land use) and was compared with a model developed using all seven landslide predisposing factors.

    Results showed that it is possible to produce a reliable landslide susceptibility model using fewer landslide predisposing factors, which contributes towards higher conditional independence.

  4. Cognitive modeling

    OpenAIRE

    Zandbelt, Bram

    2017-01-01

    Introductory presentation on cognitive modeling for the course ‘Cognitive control’ of the MSc program Cognitive Neuroscience at Radboud University. It addresses basic questions, such as 'What is a model?', 'Why use models?', and 'How to use models?'

  5. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...

  6. Continuous time modelling with individually varying time intervals for oscillating and non-oscillating processes.

    Science.gov (United States)

    Voelkle, Manuel C; Oud, Johan H L

    2013-02-01

    When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.

  7. Reducing Uncertainties of Hydrologic Model Predictions Using a New Ensemble Pre-Processing Approach

    Science.gov (United States)

    Khajehei, S.; Moradkhani, H.

    2015-12-01

    Ensemble Streamflow Prediction (ESP) was developed to characterize the uncertainty in hydrologic predictions. However, ESP outputs are still prone to bias due to uncertainty in the forcing data, initial conditions, and model structure. Among these, uncertainty in the forcing data has a major impact on the reliability of hydrologic simulations/forecasts. Major steps have been taken towards generating less uncertain precipitation forecasts, such as Ensemble Pre-Processing (EPP). EPP is a statistical procedure based on the bivariate joint distribution between observations and forecasts that generates an ensemble climatological forecast from a single-value forecast. The purpose of this study is to evaluate the performance of pre-processed ensemble precipitation forecasts in generating ensemble streamflow predictions. Copula functions, used in EPP, model the multivariate joint distribution between univariate variables with any level of dependency. Accordingly, ESP is generated by employing both the raw ensemble precipitation forecast and the pre-processed ensemble precipitation. The ensemble precipitation forecast is taken from the Climate Forecast System (CFS) generated by the National Weather Service's (NWS) National Centers for Environmental Prediction (NCEP) models. The study is conducted using the Precipitation-Runoff Modeling System (PRMS) over two basins in the Pacific Northwest, USA, for the period 1979 to 2013. Results reveal that applying this new EPP leads to a reduction of uncertainty and an overall improvement in the ESP.
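The core idea of the pre-processing step, drawing an ensemble of plausible observations conditional on a single-value forecast through a bivariate dependence model, can be sketched with a Gaussian copula in normal-score space. This is a simplified stand-in for EPP, not the operational NWS procedure; the function names, member count, and synthetic forecast history are assumptions:

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical ranks."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def epp_ensemble(obs_hist, fcst_hist, fcst_new, n_members=50, seed=0):
    """Draw an ensemble from the conditional distribution obs | forecast using a
    bivariate Gaussian copula fitted to (observation, forecast) history."""
    rng = np.random.default_rng(seed)
    z_obs, z_fc = normal_scores(obs_hist), normal_scores(fcst_hist)
    rho = np.corrcoef(z_obs, z_fc)[0, 1]
    # normal score of the new single-value forecast via the empirical CDF
    u = (np.searchsorted(np.sort(fcst_hist), fcst_new) + 0.5) / (len(fcst_hist) + 1)
    z_new = norm.ppf(u)
    # conditional Gaussian: mean rho*z_new, variance 1 - rho^2
    z_draws = rho * z_new + np.sqrt(1 - rho**2) * rng.standard_normal(n_members)
    # back-transform the draws to observation space via empirical quantiles
    return np.quantile(obs_hist, norm.cdf(z_draws))

# Synthetic correlated forecast/observation history (illustrative only)
rng = np.random.default_rng(1)
f = rng.gamma(2.0, 2.0, 2000)
o = f + rng.normal(0, 1.0, 2000)
ens_hi = epp_ensemble(o, f, fcst_new=8.0)
ens_lo = epp_ensemble(o, f, fcst_new=1.0)
```

A larger single-value forecast shifts the whole conditional ensemble upward, which is the behaviour EPP exploits to turn one deterministic forecast into a calibrated ensemble.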

  8. The reciprocal relationship between compounding awareness and vocabulary knowledge in Chinese: a latent growth model study

    Directory of Open Access Journals (Sweden)

    Yahua Cheng

    2015-04-01

    Full Text Available The aim of this study is to examine the developmental relationship between compounding awareness and vocabulary knowledge from grades 1 to 2 in Chinese children. In this study, 149 Chinese children were tested on compounding awareness and vocabulary knowledge from Time 1 to Time 4, with nonverbal IQ, working memory, phonological awareness, orthographical awareness, and rapid automatized naming at Time 1 as control variables. Latent growth modeling was conducted to analyze the data. Univariate models separately calculated children's initial levels and growth rates in compounding awareness and vocabulary knowledge. A bivariate model was used to examine the direction of the developmental relationships between the two variables with other cognitive and linguistic variables and the autoregression controlled. The results demonstrated that the initial level of compounding awareness predicted the growth rate of vocabulary knowledge, and the reverse relation was also found, after controlling for other cognitive and linguistic variables and the autoregression. The results suggested a reciprocal developmental relationship between children's compounding awareness and vocabulary knowledge for Chinese children, a finding that informs current models of the relationship between morphological awareness and vocabulary knowledge.

  9. The reciprocal relationship between compounding awareness and vocabulary knowledge in Chinese: a latent growth model study.

    Science.gov (United States)

    Cheng, Yahua; Li, Liping; Wu, Xinchun

    2015-01-01

    The aim of this study is to examine the developmental relationship between compounding awareness and vocabulary knowledge from grades 1 to 2 in Chinese children. In this study, 149 Chinese children were tested on compounding awareness and vocabulary knowledge from Time 1 to Time 4, with non-verbal IQ, working memory, phonological awareness, orthographical awareness, and rapid automatized naming at Time 1 as control variables. Latent growth modeling was conducted to analyze the data. Univariate models separately calculated children's initial levels and growth rates in compounding awareness and vocabulary knowledge. A bivariate model was used to examine the direction of the developmental relationships between the two variables with other cognitive and linguistic variables and the autoregression controlled. The results demonstrated that the initial level of compounding awareness predicted the growth rate of vocabulary knowledge, and the reverse relation was also found, after controlling for other cognitive and linguistic variables and the autoregression. The results suggested a reciprocal developmental relationship between children's compounding awareness and vocabulary knowledge for Chinese children, a finding that informs current models of the relationship between morphological awareness and vocabulary knowledge.

  10. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...... years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies also given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very...

  11. Classification of very high resolution SAR images of urban areas by dictionary-based mixture models, copulas, and Markov random fields using textural features

    Science.gov (United States)

    Voisin, Aurélie; Moser, Gabriele; Krylov, Vladimir A.; Serpico, Sebastiano B.; Zerubia, Josiane

    2010-10-01

    This paper addresses the problem of the classification of very high resolution (VHR) SAR amplitude images of urban areas. The proposed supervised method combines a finite mixture technique to estimate class-conditional probability density functions, Bayesian classification, and Markov random fields (MRFs). Textural features, such as those extracted by the grey-level co-occurrence method, are also integrated in the technique, as they help to improve the discrimination of urban areas. Copulas are applied to estimate bivariate joint class-conditional statistics, merging the marginal distributions of both textural and SAR amplitude features. The resulting joint distribution estimates are plugged into a hidden MRF model, endowed with a modified Metropolis dynamics scheme for energy minimization. Experimental results with COSMO-SkyMed and TerraSAR-X images point out the accuracy of the proposed method, also as compared with previous contextual classifiers.

  12. Thermal niche for in situ seed germination by Mediterranean mountain streams: model prediction and validation for Rhamnus persicifolia seeds

    Science.gov (United States)

    Porceddu, Marco; Mattana, Efisio; Pritchard, Hugh W.; Bacchetta, Gianluigi

    2013-01-01

    Background and Aims Mediterranean mountain species face exacting ecological conditions of rainy, cold winters and arid, hot summers, which affect seed germination phenology. In this study, a soil heat sum model was used to predict field emergence of Rhamnus persicifolia, an endemic tree species living at the edge of mountain streams of central eastern Sardinia. Methods Seeds were incubated in the light at a range of temperatures (10–25 and 25/10 °C) after different periods (up to 3 months) of cold stratification at 5 °C. Base temperatures (Tb), and thermal times for 50 % germination (θ50) were calculated. Seeds were also buried in the soil in two natural populations (Rio Correboi and Rio Olai), both underneath and outside the tree canopy, and exhumed at regular intervals. Soil temperatures were recorded using data loggers and soil heat sum (°Cd) was calculated on the basis of the estimated Tb and soil temperatures. Key Results Cold stratification released physiological dormancy (PD), increasing final germination and widening the range of germination temperatures, indicative of a Type 2 non-deep PD. Tb was reduced from 10·5 °C for non-stratified seeds to 2·7 °C for seeds cold stratified for 3 months. The best thermal time model was obtained by fitting probit germination against log °Cd. θ50 was 2·6 log °Cd for untreated seeds and 2·17–2·19 log °Cd for stratified seeds. When θ50 values were integrated with soil heat sum estimates, field emergence was predicted from March to April and confirmed through field observations. Conclusions Tb and θ50 values facilitated model development of the thermal niche for in situ germination of R. persicifolia. These experimental approaches may be applied to model the natural regeneration patterns of other species growing on Mediterranean mountain waterways and of physiologically dormant species, with overwintering cold stratification requirement and spring germination. PMID:24201139
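The thermal time model in this record fits probit-transformed germination fractions against log thermal time, with θ50 read off where the probit crosses zero. A minimal sketch of that fit follows; the synthetic germination course and its θ50 of 2.2 log °Cd are illustrative values, not the R. persicifolia estimates:

```python
import numpy as np
from scipy.stats import norm

def fit_thermal_time_probit(theta, germ_frac):
    """Fit probit(germination fraction) ~ a + b * log10(thermal time in degC d).
    Returns slope, intercept, and theta_50, the log degC d at 50 % germination."""
    x = np.log10(theta)
    y = norm.ppf(np.clip(germ_frac, 1e-3, 1 - 1e-3))  # probits, clipped away from 0/1
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, -intercept / slope        # probit = 0 at 50 %

# Synthetic germination course with a known theta_50 of 2.2 log degC d (illustrative)
theta = np.array([50.0, 80.0, 120.0, 180.0, 260.0, 400.0, 600.0])
frac = norm.cdf(3.0 * (np.log10(theta) - 2.2))
slope, intercept, theta50 = fit_thermal_time_probit(theta, frac)
```

Combining the fitted θ50 with accumulated soil heat sums, as the study does, then yields a predicted emergence window.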

  13. Statistical model of global uranium resources and long-term availability

    International Nuclear Information System (INIS)

    Monnet, A.; Gabriel, S.; Percebois, J.

    2016-01-01

    Most recent studies on the long-term supply of uranium make simplistic assumptions on the available resources and their production costs. Some consider the whole uranium quantities in the Earth's crust and then estimate the production costs based on the ore grade only, disregarding the size of ore bodies and the mining techniques. Other studies consider the resources reported by countries for a given cost category, disregarding undiscovered or unreported quantities. In both cases, the resource estimations are sorted following a cost merit order. In this paper, we describe a methodology based on 'geological environments'. It provides a more detailed resource estimation and it is more flexible regarding cost modelling. The global uranium resource estimation introduced in this paper results from the sum of independent resource estimations from different geological environments. A geological environment is defined by its own geographical boundaries, resource dispersion (average grade and size of ore bodies and their variance), and cost function. With this definition, uranium resources are considered within ore bodies. The deposit breakdown of resources is modelled using a bivariate statistical approach where size and grade are the two random variables. This makes resource estimates possible for individual projects. Adding up all geological environments provides a distribution of all Earth's crust resources in which ore bodies are sorted by size and grade. This subset-based estimation is convenient to model specific cost structures. (authors)

  14. Studying Individual Differences in Predictability With Gamma Regression and Nonlinear Multilevel Models.

    Science.gov (United States)

    Culpepper, Steven Andrew

    2010-01-29

    Statistical prediction remains an important tool for decisions in a variety of disciplines. An equally important issue is identifying factors that contribute to more or less accurate predictions. The time series literature includes well-developed methods for studying predictability and volatility over time. This article develops distribution-appropriate methods for studying individual differences in predictability in settings typical of psychological research. Specifically, 3 different approaches to modeling predictability are discussed. The 1st is a bivariate measure of predictability discussed previously in the psychology literature, the squared or absolute-valued difference between criterion and predictor, which is shown to follow the gamma distribution. The 2nd method addresses limitations of previous research by modeling predictability within regression models. The 3rd method uses nonlinear multilevel models to study predictability in settings where participants are nested within clusters. An application was presented using SAS NLMIXED to understand the predictability of college grade point average from student demographic characteristics. The findings from the application suggest that the 1st-year college performance of English as a second language students was, on average, less predictable, whereas females and Whites tended to demonstrate more predictable academic performance than their male or racial/ethnic minority counterparts.
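The first approach above treats squared prediction errors as gamma-distributed and regresses them on covariates. A minimal maximum-likelihood sketch of a log-link gamma regression follows (the article uses SAS NLMIXED; this numpy/scipy version with a fixed shape parameter is a simplification, and the simulated data and coefficient values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def gamma_regression(X, y, shape=1.0):
    """Maximum-likelihood fit of a log-link gamma regression, E[y|x] = exp(x.beta).
    The shape parameter is held fixed (a simplification; full ML would estimate it)."""
    X = np.column_stack([np.ones(len(y)), X])       # prepend an intercept

    def nll(beta):                                  # gamma NLL up to beta-free constants
        mu = np.exp(X @ beta)
        return np.sum(shape * (np.log(mu) + y / mu))

    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

# Simulate squared-error-like outcomes with mean exp(0.5 + 1.0*x) (illustrative)
rng = np.random.default_rng(2)
x = rng.standard_normal(4000)
mu = np.exp(0.5 + 1.0 * x)
y = rng.gamma(2.0, mu / 2.0)                        # gamma with shape 2, mean mu
beta_hat = gamma_regression(x, y, shape=2.0)
```

The negative log-likelihood is convex in beta under the log link, so BFGS recovers the generating coefficients closely at this sample size.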

  15. An integrated, ethically driven environmental model of clinical decision making in emergency settings.

    Science.gov (United States)

    Wolf, Lisa

    2013-02-01

    To explore the relationship between multiple variables within a model of critical thinking and moral reasoning. A quantitative descriptive correlational design using a purposive sample of 200 emergency nurses. Measured variables were accuracy in clinical decision-making, moral reasoning, perceived care environment, and demographics. Analysis was by bivariate correlation using Pearson's product-moment correlation coefficients, chi square and multiple linear regression analysis. The elements identified in the integrated, ethically driven environmental model of clinical decision-making (IEDEM-CD) correctly depict moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. The integrated, ethically driven environmental model of clinical decision making is a framework useful for predicting clinical decision-making accuracy for emergency nurses in practice, with further implications in education, research and policy. It provides a diagnostic and therapeutic framework for identifying and remediating individual and environmental challenges to accurate clinical decision making. © 2012, The Author. International Journal of Nursing Knowledge © 2012, NANDA International.

  16. Does Eating Out Make Elderly People Depressed? Empirical Evidence from National Health and Nutrition Survey in Taiwan.

    Science.gov (United States)

    Chang, Hung-Hao; Saeliw, Kannika

    2017-06-01

    OBJECTIVES: This study investigates the association between eating out and depressive symptoms among elderly people. Potential mediators that may link elderly eating out and depressive symptoms are also discussed. METHODS: A unique dataset of 1,184 individuals aged 65 and older was drawn from the National Health and Nutrition Survey in 2008 in Taiwan. A bivariate probit model and an instrumental variable probit model were estimated to account for correlated, unmeasured factors that may be associated with both the decision and frequency of eating out and depressive symptoms in the elderly. An additional analysis was conducted to check whether nutrient intakes and body weight can be seen as mediators linking eating out and depressive symptoms in the elderly. RESULTS: Elderly people who eat out are 38 percentage points more likely to have depressive symptoms than their counterparts who do not eat out, after controlling for socio-demographic characteristics and other factors. A positive association between the frequency of eating out and the likelihood of depressive symptoms in the elderly is also found. One additional meal away from home is associated with an increase in the likelihood of being depressed of 3.8 percentage points. With respect to mediation, we find that nutrient intakes and body weight are likely to serve as mediators of the positive relationship between eating out and depressive symptoms in the elderly. CONCLUSION: Our results show that elderly people who eat out have a higher chance of having depressive symptoms. To prevent depressive symptoms in the elderly, policy makers should be aware of the relationship among psychological status, physical health and nutritional health when assisting the elderly to better manage their food consumption away from home. LIMITATIONS AND IMPLICATIONS FOR FUTURE RESEARCH: Our study has some caveats. First, the interpretation of our results on the causality issue

  17. Multivariate methods for indoor PM10 and PM2.5 modelling in naturally ventilated school buildings

    Science.gov (United States)

    Elbayoumi, Maher; Ramli, Nor Azam; Md Yusof, Noor Faizah Fitri; Yahaya, Ahmad Shukri Bin; Al Madhoun, Wesam; Ul-Saufie, Ahmed Zia

    2014-09-01

    In this study, the concentrations of PM10, PM2.5, CO and CO2 and meteorological variables (wind speed, air temperature, and relative humidity) were employed to predict the annual and seasonal indoor concentrations of PM10 and PM2.5 using multivariate statistical methods. The data were collected in twelve naturally ventilated schools in the Gaza Strip (Palestine) from October 2011 to May 2012 (academic year). The bivariate correlation analysis showed that indoor PM10 and PM2.5 were highly positively correlated with the outdoor concentrations of PM10 and PM2.5. Further, multiple linear regression (MLR) was used for modelling, and R2 values were determined as 0.62 and 0.84 for PM10 and PM2.5, respectively. The performance indicators of the MLR models indicated that the predictions of the PM10 and PM2.5 annual models were better than those of the seasonal models. In order to reduce the number of input variables, principal component analysis (PCA) and principal component regression (PCR) were applied using annual data. The resulting R2 values were 0.40 and 0.73 for PM10 and PM2.5, respectively. The PM10 models (MLR and PCR) show a tendency to underestimate indoor PM10 concentrations, as they do not take into account the occupants' activities, which strongly affect indoor concentrations during class hours.
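The PCA/PCR step in this record replaces correlated predictors with a few leading principal components before regressing. A minimal numpy sketch of that pipeline follows; the nearly collinear toy predictors are an illustrative assumption, not the Gaza Strip measurements:

```python
import numpy as np

def pcr_fit_predict(X, y, n_components):
    """Principal component regression: standardize the predictors, project onto
    the leading principal components, then ordinary least squares on the scores."""
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd
    # principal directions are the right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:n_components].T
    scores = Z @ V
    A = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xnew: np.column_stack(
        [np.ones(len(Xnew)), ((Xnew - mu) / sd) @ V]) @ coef

# Toy data: two nearly collinear predictors plus an independent one (illustrative)
rng = np.random.default_rng(3)
n = 2000
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)
x3 = rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])
y = x1 + x3 + 0.1 * rng.standard_normal(n)
predict = pcr_fit_predict(X, y, n_components=2)
r2 = 1 - np.sum((y - predict(X)) ** 2) / np.sum((y - y.mean()) ** 2)
```

Two components suffice here because the collinear pair collapses onto one direction, which is exactly the dimensionality reduction PCR buys over plain MLR.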

  18. STRUCTURAL MODELLING

    Directory of Open Access Journals (Sweden)

    Tea Ya. Danelyan

    2014-01-01

    Full Text Available The article states the general principles of structural modeling from the perspective of systems theory and relates it to other types of modeling in order to position it among the main directions of modeling. Mathematical methods of structural modeling, in particular the method of expert evaluations, are considered.

  19. (HEV) Model

    African Journals Online (AJOL)

    Moatez Billah HARIDA

    The use of the simulator “Hybrid Electrical Vehicle Model Balances Fidelity and Speed (HEVMBFS)” and the global control strategy make it possible to achieve encouraging results. Key words: Series parallel hybrid vehicle - nonlinear model - linear model - Diesel engine - Engine modelling - HEV simulator - Predictive ...

  20. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic...... procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature dependent behaviour of physical properties are introduced, as well as equation of state (EOS) models such as the SRK EOS. Modelling of liquid phase activity coefficients is also...

  1. Development and Internal Validation of a Prediction Model to Estimate the Probability of Needing Aggressive Immunosuppressive Therapy With Cytostatics in de Novo Lupus Nephritis Patients.

    Science.gov (United States)

    Restrepo-Escobar, Mauricio; Granda-Carvajal, Paula Andrea; Jaimes, Fabián

    2017-07-18

    To develop a multivariable clinical prediction model for the requirement of aggressive immunosuppression with cytostatics, based on simple clinical record data and lab tests. The model is defined in accordance with the results of the kidney biopsies. Retrospective study conducted with data from patients 16 years and older, with SLE and nephritis of less than 6 months of evolution. An initial bivariate analysis was conducted to select the variables to be included in a multiple logistic regression model. Goodness of fit was evaluated using the Hosmer-Lemeshow test (H-L) and the discrimination capacity of the model by means of the area under the ROC curve (AUC). Data from 242 patients were gathered; of these, 18.2% (n=44) did not need the addition of cytostatics according to the findings of their kidney biopsies. The variables included in the final model were 24-h proteinuria, diastolic blood pressure, creatinine, C3 complement and the interaction of hematuria with leukocyturia in urinary sediment. The model showed excellent discrimination (AUC=0.929; 95% CI=0.894-0.963) and adequate calibration (H-L, P=.959). In recent-onset LN patients, the decision whether to use intensive immunosuppressive therapy could be made based on our prediction model as an alternative to kidney biopsies. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
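The two statistics this record reports, discrimination by the area under the ROC curve and calibration by the Hosmer-Lemeshow test, can be computed from predicted risks and outcomes with a few lines of numpy/scipy. This is a generic sketch, not the authors' code; the decile grouping and the synthetic calibrated risks are assumptions:

```python
import numpy as np
from scipy.stats import chi2, rankdata

def auc(y, score):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    r = rankdata(score)
    n1 = y.sum()
    n0 = len(y) - n1
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def hosmer_lemeshow(y, p, groups=10):
    """H-L goodness-of-fit: chi-square of observed vs expected events over
    groups (deciles by default) of predicted risk."""
    order = np.argsort(p)
    stat = 0.0
    for idx in np.array_split(order, groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

# Synthetic, perfectly calibrated risks: outcomes drawn with probability p (illustrative)
rng = np.random.default_rng(4)
p = rng.uniform(0.05, 0.95, 5000)
y = (rng.uniform(size=5000) < p).astype(float)
a = auc(y, p)
stat, pval = hosmer_lemeshow(y, p)
```

For calibrated risks the H-L p-value should be large (no evidence of lack of fit), while the AUC is well above 0.5 because the scores are informative.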

  2. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  3. Labor Supply of Poor Residents in Metropolitan Miami, Florida: The Role of Depression and the Co-Morbid Effects of Substance Use.

    Science.gov (United States)

    Alexandre, Pierre K.; French, Michael T.

    2001-12-01

    use and problem drinking as defined by the 10-item Michigan Alcoholism Screening Test (MAST-10). Based on criteria defined in the MAST-10, 26 percent of the depressed individuals were problematic alcohol users (PAUs) compared to about 16 percent of the non-depressed sample. METHODS: The labor supply measures included employment in the past 30 days and number of weeks worked in the past 12 months. The analysis estimated a univariate probit model of employment as well as a bivariate probit model of depression and employment, which accounted for the possible correlation between the unobserved determinants of depression and employment. The annual weeks worked specification was estimated by a standard Tobit model as well as an instrumental variable (IV) Tobit model, which, in addition to the censoring of the observations, accounted for the possible endogeneity of depression. The stability of the estimated effects of depression to comorbid illicit drug and alcohol use was assessed by controlling for CDU and PAU in these models. RESULTS: Results from both the univariate probit and the bivariate probit models indicate that depression significantly decreased the probability of being employed. Specifically, depression reduced the probability of employment by an average of 19 percentage points in both models, from a sample average of 43 percent for the non-depressed to 24 percent for the depressed. Estimates from the Tobit models revealed that depression also significantly reduced the number of weeks worked. Conditional on being employed, depressed individuals worked an average of 7 fewer annual weeks than the non-depressed sample in the univariate Tobit model and 8 fewer weeks in the IV Tobit. The findings also showed that the effects of depression on employment and annual weeks worked may be overestimated if the analysis does not account for the comorbid influence of substance use. IMPLICATIONS FOR HEALTH CARE PROVISION AND USE: The results suggest that prevention and
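The bivariate probit used in this record ties the two binary outcomes (here depression and employment) together through a correlation rho between their standard-normal latent errors. A minimal likelihood sketch follows; the simulated data and parameter values are illustrative assumptions, not estimates from the study, and a full analysis would maximize this likelihood rather than just evaluate it:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def bivariate_probit_nll(b1, b2, rho, X, y1, y2):
    """Negative log-likelihood of a bivariate probit: two probit equations whose
    standard-normal latent errors are correlated with coefficient rho."""
    s1, s2 = 2 * y1 - 1, 2 * y2 - 1          # sign flips select the right quadrant
    xb1, xb2 = X @ b1, X @ b2
    nll = 0.0
    for a, b, s, t in zip(xb1, xb2, s1, s2):
        p = mvn.cdf([s * a, t * b], mean=[0, 0],
                    cov=[[1, s * t * rho], [s * t * rho, 1]])
        nll -= np.log(max(p, 1e-300))
    return nll

# Simulate both outcomes from known parameters (illustrative values)
rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
b1_true, b2_true, rho_true = np.array([0.2, 1.0]), np.array([-0.3, 0.8]), 0.5
L = np.linalg.cholesky(np.array([[1, rho_true], [rho_true, 1]]))
eps = rng.standard_normal((n, 2)) @ L.T
y1 = (X @ b1_true + eps[:, 0] > 0).astype(int)
y2 = (X @ b2_true + eps[:, 1] > 0).astype(int)
nll_true = bivariate_probit_nll(b1_true, b2_true, rho_true, X, y1, y2)
nll_norho = bivariate_probit_nll(b1_true, b2_true, 0.0, X, y1, y2)
nll_wrong = bivariate_probit_nll(-b1_true, b2_true, rho_true, X, y1, y2)
```

Ignoring the error correlation (rho = 0, i.e., two independent univariate probits) fits the simulated data worse than the true joint model, which is the econometric motivation for the bivariate specification.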

  4. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    Science.gov (United States)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We are investigating after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
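The "dependence beyond linear correlation" this record describes can be probed with a simple diagnostic: compare lagged correlations of the increments themselves with lagged correlations of their absolute values. This is a generic sketch, not the authors' copula analysis; the stochastic-volatility toy process is an illustrative stand-in for pore-scale trajectory data:

```python
import numpy as np

def increment_dependence(positions, max_lag=5):
    """Lagged correlations of position increments and of their absolute values;
    the latter detects non-linear dependence that the former misses."""
    dx = np.diff(positions)
    a = np.abs(dx)
    lin = [np.corrcoef(dx[:-k], dx[k:])[0, 1] for k in range(1, max_lag + 1)]
    nonlin = [np.corrcoef(a[:-k], a[k:])[0, 1] for k in range(1, max_lag + 1)]
    return np.array(lin), np.array(nonlin)

# Toy process: increments are uncorrelated, but their magnitudes are persistent
# (a stochastic-volatility construction standing in for sub-Fickian transport)
rng = np.random.default_rng(6)
n = 50000
logvol = np.zeros(n)
for t in range(1, n):
    logvol[t] = 0.95 * logvol[t - 1] + 0.3 * rng.standard_normal()
dx = np.exp(logvol) * rng.standard_normal(n)
positions = np.concatenate([[0.0], np.cumsum(dx)])
lin, nonlin = increment_dependence(positions)
```

A process like this would pass a linear-correlation test for independent increments while clearly failing the absolute-increment test, which is why copula-based dependence measures are needed.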

  5. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  6. Effect of Boarding on Mortality in ICUs.

    Science.gov (United States)

    Stretch, Robert; Della Penna, Nicolás; Celi, Leo Anthony; Landon, Bruce E

    2018-04-01

    Hospitals use a variety of strategies to maximize the availability of limited ICU beds. Boarding, which involves assigning patients to an open bed in a different subspecialty ICU, is one such practice employed when ICU occupancy levels are high, and beds in a particular unit are unavailable. Boarding disrupts the normal geographic colocation of patients and care teams, exposing patients to nursing staff with different training and expertise to those caring for nonboarders. We analyzed whether medical ICU patients boarding in alternative specialty ICUs are at increased risk of mortality. Retrospective cohort study using an instrumental variable analysis to control for unmeasured confounding. A semiparametric bivariate probit estimation strategy was employed for the instrumental model. Propensity score matching and standard logistic regression (generalized linear modeling) were used as robustness checks. The medical ICU of a tertiary care nonprofit hospital in the United States between 2002 and 2012. All medical ICU admissions during the specified time period. None. The study population consisted of 8,429 patients of whom 1,871 were boarders. The instrumental variable model demonstrated a relative risk of 1.18 (95% CI, 1.01-1.38) for ICU stay mortality for boarders. The relative risk of in-hospital mortality among boarders was 1.22 (95% CI, 1.00-1.49). GLM and propensity score matching without use of the instrument yielded similar estimates. Instrumental variable estimates are for marginal patients, whereas generalized linear modeling and propensity score matching yield population average effects. Mortality increased with boarding of critically ill patients. Further research is needed to identify safer practices for managing patients during periods of high ICU occupancy.
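
For readers unfamiliar with the estimation strategy, the following sketch shows how a recursive bivariate probit log-likelihood is assembled; correlated errors between the treatment and outcome equations are what absorb the unmeasured confounding in such instrumental-variable setups. The data, variable names, and coefficients are simulated stand-ins, not the study's data, and instead of running a full MLE the sketch simply verifies that the data-generating parameters score a higher likelihood than a specification that ignores the error correlation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
n = 800

# Simulated stand-ins: z is the instrument (shifts boarding but not
# mortality directly), x a patient covariate. The errors of the two
# equations are correlated (rho = 0.6): this is the unmeasured
# confounding that the bivariate probit absorbs.
z = rng.standard_normal(n)
x = rng.standard_normal(n)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
board = (0.8 * z + 0.3 * x + e[:, 0] > 0).astype(int)      # "treatment"
death = (0.4 * x + 0.6 * board + e[:, 1] > 0).astype(int)  # outcome

def biprobit_loglik(a_z, a_x, b_x, gamma, rho):
    """Recursive bivariate probit log-likelihood (intercepts omitted)."""
    idx1 = a_z * z + a_x * x           # boarding equation index
    idx2 = b_x * x + gamma * board     # mortality equation index
    q1, q2 = 2 * board - 1, 2 * death - 1
    ll = 0.0
    for i in range(n):
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf([q1[i] * idx1[i], q2[i] * idx2[i]],
                                    cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return ll

ll_true = biprobit_loglik(0.8, 0.3, 0.4, 0.6, 0.6)   # data-generating values
ll_norho = biprobit_loglik(0.8, 0.3, 0.4, 0.6, 0.0)  # ignores confounding

print(f"log-likelihood, rho = 0.6: {ll_true:.1f}")
print(f"log-likelihood, rho = 0.0: {ll_norho:.1f}")
```

In a real analysis all parameters, including rho, would be estimated by maximizing this likelihood; the paper additionally uses a semiparametric variant.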

  7. Identifying the factors affecting bike-sharing usage and degree of satisfaction in Ningbo, China.

    Science.gov (United States)

    Guo, Yanyong; Zhou, Jibiao; Wu, Yao; Li, Zhibin

    2017-01-01

    The boom in bike-sharing is receiving growing attention as societies become more aware of the importance of active non-motorized traffic modes. However, the low usage of this transport mode in China raises concerns. The primary objective of this study is to explore factors affecting bike-sharing usage and satisfaction degree of bike-sharing among the bike-sharing user population in China. Data were collected by a questionnaire survey in Ningbo. A bivariate ordered probit (BOP) model was developed to examine simultaneously those factors associated with both bike-sharing usage and satisfaction degree of bike-sharing among users. Marginal effects for contributory factors were calculated to quantify their impacts on the outcomes. The results showed that the BOP model can account for commonly shared unobserved characteristics within usage and satisfaction of bike-sharing. The BOP model results showed that the usage of bike-sharing was affected by gender, household bicycle/e-bike ownership, trip mode, travel time, bike-sharing stations location, and users' perception of bike-sharing. The satisfaction degree of bike-sharing was affected by household income, bike-sharing stations location, and users' perception of bike-sharing. It is also found that bike-sharing usage and satisfaction degree are strongly correlated and positive in direction. The results can enhance our comprehension of the factors that affect usage and satisfaction degree of bike-sharing. Based on the results, some suggestions regarding planning, engineering, and public advocacy were discussed to increase the usage of bike-sharing in Ningbo, China.

  8. Identifying the factors affecting bike-sharing usage and degree of satisfaction in Ningbo, China.

    Directory of Open Access Journals (Sweden)

    Yanyong Guo

    Full Text Available The boom in bike-sharing is receiving growing attention as societies become more aware of the importance of active non-motorized traffic modes. However, the low usage of this transport mode in China raises concerns. The primary objective of this study is to explore factors affecting bike-sharing usage and satisfaction degree of bike-sharing among the bike-sharing user population in China. Data were collected by a questionnaire survey in Ningbo. A bivariate ordered probit (BOP) model was developed to examine simultaneously those factors associated with both bike-sharing usage and satisfaction degree of bike-sharing among users. Marginal effects for contributory factors were calculated to quantify their impacts on the outcomes. The results showed that the BOP model can account for commonly shared unobserved characteristics within usage and satisfaction of bike-sharing. The BOP model results showed that the usage of bike-sharing was affected by gender, household bicycle/e-bike ownership, trip mode, travel time, bike-sharing stations location, and users' perception of bike-sharing. The satisfaction degree of bike-sharing was affected by household income, bike-sharing stations location, and users' perception of bike-sharing. It is also found that bike-sharing usage and satisfaction degree are strongly correlated and positive in direction. The results can enhance our comprehension of the factors that affect usage and satisfaction degree of bike-sharing. Based on the results, some suggestions regarding planning, engineering, and public advocacy were discussed to increase the usage of bike-sharing in Ningbo, China.

  9. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
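
The residual-skewness idea is easy to demonstrate. Below is a minimal simulation (not the authors' tests): x is skewed, y is a linear function of x plus symmetric noise, and the residuals of the correctly specified regression (y on x) come out more symmetric than those of the reversed regression (x on y). All numbers are illustrative.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n = 5000

# True causal model: x is skewed (non-normal), y = 1.5 x + symmetric noise.
x = rng.exponential(scale=1.0, size=n)
y = 1.5 * x + rng.standard_normal(n)

def ols_residuals(resp, pred):
    """Residuals of a simple linear regression of resp on pred."""
    X = np.column_stack([np.ones_like(pred), pred])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return resp - X @ beta

res_correct = ols_residuals(y, x)   # correctly specified: y regressed on x
res_wrong = ols_residuals(x, y)     # mis-specified: x regressed on y

print(f"|skewness| of residuals, y on x: {abs(skew(res_correct)):.3f}")
print(f"|skewness| of residuals, x on y: {abs(skew(res_wrong)):.3f}")
```

The mis-specified residuals inherit part of x's skewness, which is the asymmetry the proposed significance tests exploit.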

  10. The relationships between behavioral addictions and the five-factor model of personality.

    Science.gov (United States)

    Andreassen, Cecilie Schou; Griffiths, Mark D; Gjertsen, Siri Renate; Krossbakken, Elfrid; Kvam, Siri; Pallesen, Ståle

    2013-06-01

    Aims Although relationships between addiction and personality have previously been explored, no study has ever simultaneously investigated the interrelationships between several behavioral addictions, and related these to the main dimensions of the five-factor model of personality. Methods In this study, 218 university students completed questionnaires assessing seven different behavioral addictions (i.e., Facebook addiction, video game addiction, Internet addiction, exercise addiction, mobile phone addiction, compulsive buying, and study addiction) as well as an instrument assessing the main dimensions of the five-factor model of personality. Results Of the 21 bivariate intercorrelations between the seven behavioral addictions, all were positive (and nine significantly). The results also showed that (i) Neuroticism was positively associated with Internet addiction, exercise addiction, compulsive buying, and study addiction, (ii) Extroversion was positively associated with Facebook addiction, exercise addiction, mobile phone addiction, and compulsive buying, (iii) Openness to experience was negatively associated with Facebook addiction and mobile phone addiction, (iv) Agreeableness was negatively associated with Internet addiction, exercise addiction, mobile phone addiction, and compulsive buying, and (v) Conscientiousness was negatively associated with Facebook addiction, video game addiction, Internet addiction, and compulsive buying and positively associated with exercise addiction and study addiction. Conclusions The positive associations between the seven behavioral addictions suggest one or several underlying pathological factors. Hierarchical multiple regressions showed that personality traits explained between 6% and 17% of the variance in the seven behavioral addictions, suggesting that personality to a varying degree explains scores on measures of addictive behaviors.

  11. A cluster randomized theory-guided oral hygiene trial in adolescents-A latent growth model.

    Science.gov (United States)

    Aleksejūnienė, J; Brukienė, V

    2018-05-01

    (i) To test whether theory-guided interventions are more effective than conventional dental instruction (CDI) for changing oral hygiene in adolescents and (ii) to examine whether such interventions equally benefit both genders and different socio-economic (SES) groups. A total of 244 adolescents were recruited from three schools, and cluster randomization allocated adolescents to one of the three types of interventions: two were theory-based interventions (Precaution Adoption Process Model or Authoritative Parenting Model) and CDI served as an active control. Oral hygiene levels % (OH) were assessed at baseline, after 3 months and after 12 months. A complete data set was available for 166 adolescents (the total follow-up rate: 69%). There were no significant differences in baseline OH between those who participated throughout the study and those who dropped out. Bivariate and multivariate analyses showed that theory-guided interventions produced significant improvements in oral hygiene and that there were no significant gender or socio-economic differences. Theory-guided interventions produced more positive changes in OH than CDI, and these changes did not differ between gender and SES groups. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Predictors of Oral Health Behaviors in Female Students: An Application of the Health Belief Model.

    Science.gov (United States)

    Rahmati-Najarkolaei, Fatemeh; Rahnama, Parvin; Gholami Fesharaki, Mohammad; Behnood, Vahid

    2016-11-01

    Oral and dental diseases can affect the general health of students. The aim of this study was to identify the predictors of oral and dental health behavior using the health belief model (HBM) in female students in Tehran, Iran. This was a cross-sectional study framed by the HBM, including 400 female students living in district 5 of Tehran, Iran. The sampling technique used in this study was multi-stage stratified random sampling. The data on the HBM constructs (perceived severity, perceived susceptibility, perceived benefits, perceived barriers, cues to action, and self-efficacy) and demographic characteristics were collected using a self-administered questionnaire. Descriptive statistics, bivariate correlations, and linear regression were performed to analyze the data, using the SPSS software, version 18. The results showed that there were relationships of knowledge, perceived barriers, cues to action, and mother's education with oral health behaviors. A multivariate hierarchical regression analysis was conducted with the barrier entered at step one, knowledge at step two, and cues to action at step three. Finally, the three variables accounted for 17% of the total variance in the oral and dental health behavior. The current study provided evidence for the utility of the belief-based model in the prediction of oral health behaviors. It could be suggested that oral health behavior can be promoted by reducing the perceived barriers and enhancing the students' knowledge of oral and dental hygiene.

  13. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface.

  14. Genetic parameters for feather pecking and aggressive behavior in a large F2-cross of laying hens using generalized linear mixed models.

    Science.gov (United States)

    Bennewitz, J; Bögelein, S; Stratz, P; Rodehutscord, M; Piepho, H P; Kjaer, J B; Bessei, W

    2014-04-01

    Feather pecking and aggressive pecking is a well-known problem in egg production. In the present study, genetic parameters for 4 feather-pecking-related traits were estimated using generalized linear mixed models. The traits were bouts of feather pecking delivered (FPD), bouts of feather pecking received (FPR), bouts of aggressive pecking delivered (APD), and bouts of aggressive pecking received (APR). An F2-design was established from 2 divergent selected founder lines. The lines were selected for low or high feather pecking for 10 generations. The number of F2 hens was 910. They were housed in pens with around 40 birds. Each pen was observed in 21 sessions of 20 min, distributed over 3 consecutive days. An animal model was applied that treated the bouts observed within 20 min as repeated observations. An over-dispersed Poisson distribution was assumed for observed counts and the link function was a log link. The model included a random animal effect, a random permanent environment effect, and a random day-by-hen effect. Residual variance was approximated on the link scale by the delta method. The results showed a heritability around 0.10 on the link scale for FPD and APD and of 0.04 for APR. The heritability of FPR was zero. For all behavior traits, substantial permanent environmental effects were observed. The approximate genetic correlation between FPD and APD (FPD and APR) was 0.81 (0.54). Egg production and feather eating records were collected on the same hens as well and were analyzed with a generalized linear mixed model, assuming a binomial distribution and using a probit link function. The heritability on the link scale for egg production was 0.40 and for feather eating 0.57. The approximate genetic correlation between FPD and egg production was 0.50 and between FPD and feather eating 0.73. Selection might help to reduce feather pecking, but this might result in an unfavorable correlated selection response reducing egg production. Feather eating and

  15. Assessment of Granger causality by nonlinear model identification: application to short-term cardiovascular variability.

    Science.gov (United States)

    Faes, Luca; Nollo, Giandomenico; Chon, Ki H

    2008-03-01

    A method for assessing Granger causal relationships in bivariate time series, based on nonlinear autoregressive (NAR) and nonlinear autoregressive exogenous (NARX) models, is presented. The method evaluates bilateral interactions between two time series by quantifying the predictability improvement (PI) of the output time series when the dynamics associated with the input time series are included, i.e., moving from NAR to NARX prediction. The NARX model identification was performed by the optimal parameter search (OPS) algorithm, and its results were compared to the least-squares method to determine the most appropriate method to be used for experimental data. The statistical significance of the PI was assessed using a surrogate data technique. The proposed method was tested with simulation examples involving short realizations of linear stochastic processes and nonlinear deterministic signals in which either unidirectional or bidirectional coupling and varying strengths of interactions were imposed. It was found that the OPS-based NARX model was accurate and sensitive in detecting imposed Granger causality conditions. In addition, the OPS-based NARX model was more accurate than the least-squares method. Application to the systolic blood pressure and heart rate variability signals demonstrated the feasibility of the method. In particular, we found a bilateral causal relationship between the two signals as evidenced by the significant reduction in the PI values with the NARX model prediction compared to the NAR model prediction, which was also confirmed by the surrogate data analysis. Furthermore, we found significant reduction in the complexity of the dynamics of the two causal pathways of the two signals as the body position was changed from supine to upright. The proposed method is general; thus, it can be applied to a wide variety of physiological signals to better understand causality and coupling that may be different between normal and diseased
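
The predictability-improvement (PI) logic can be illustrated with linear AR/ARX models as a stand-in for the NAR/NARX models used in the paper (linear least squares replaces the OPS algorithm here, purely for brevity; the data are simulated). With unidirectional coupling from x to y, the PI is large in the coupled direction and near zero in the reverse direction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Unidirectionally coupled pair: x drives y with a one-sample delay,
# y does not feed back into x.
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def prediction_mse(target, regressors):
    """In-sample one-step least-squares prediction error."""
    X = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((target - X @ beta) ** 2)

# Predict y from its own past (AR) vs its past plus x's past (ARX).
mse_ar = prediction_mse(y[1:], [y[:-1]])
mse_arx = prediction_mse(y[1:], [y[:-1], x[:-1]])
pi_x_to_y = (mse_ar - mse_arx) / mse_ar    # predictability improvement

# Reverse direction: adding y's past should barely help predict x.
mse_ar_x = prediction_mse(x[1:], [x[:-1]])
mse_arx_x = prediction_mse(x[1:], [x[:-1], y[:-1]])
pi_y_to_x = (mse_ar_x - mse_arx_x) / mse_ar_x

print(f"PI x -> y: {pi_x_to_y:.3f}")
print(f"PI y -> x: {pi_y_to_x:.3f}")
```

In the paper, the same comparison is made with nonlinear models, and the significance of the small reverse-direction PI would be judged against surrogate data rather than by eye.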

  16. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    Science.gov (United States)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β -EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β -PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β -EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.

  17. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely explicitly presented in peer-reviewed literature.
We believe that devoting

  18. Informal cash payments for birth in Hungary: Are women paying to secure a known provider, respect, or quality of care?

    Science.gov (United States)

    Baji, Petra; Rubashkin, Nicholas; Szebik, Imre; Stoll, Kathrin; Vedam, Saraswathi

    2017-09-01

    In Central and Eastern Europe, many women make informal cash payments to ensure continuity of provider, i.e., to have the "chosen" doctor who provided their prenatal care be present at the birth. High rates of obstetric interventions and disrespectful maternity care are also common to the region. No previous study has examined the associations among informal payments, intervention rates, and quality of maternity care. We distributed an online cross-sectional survey in 2014 to a nationally representative sample of Hungarian internet-using women (N = 600) who had given birth in the last 5 years. The survey included items related to socio-demographics, type of provider, obstetric interventions, and experiences of care. Women reported if they paid informally, and how much. We built a two-part model, where a bivariate probit model was used to estimate conditional probabilities of women paying informally, and a GLM model to explore the amount of payments. We calculated marginal effects of the covariates (provider choice, interventions, respectful care). Many more women (79%) with a chosen doctor paid informally (191 euros on average) compared to 17% of women without a chosen doctor (86 euros). Based on regression analysis, the chosen doctor's presence at birth was the principal determinant of payment. Intervention and procedure rates were significantly higher for women with a chosen doctor versus without (cesareans 45% vs. 33%; inductions 32% vs. 19%; episiotomy 75% vs. 62%; epidural 13% vs. 5%), but had no direct effect on payments. Half of the sample (42% with a chosen doctor, 62% without) reported some form of disrespectful care, but this did not reduce payments. Despite reporting disrespect and higher rates of interventions, women rewarded the presence of a chosen doctor with informal payments. They may be unaware of evidence-based standards, and trust that their chosen doctor provided high-quality maternity care. Copyright © 2017 Elsevier Ltd. All rights reserved.
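
A two-part model of this kind can be sketched as follows: a probit for whether a woman pays at all, then a regression on the positive amounts. The data below are simulated to roughly mimic the reported figures (17% vs 79% payment rates, 86 vs 191 euro averages); a log-linear OLS stands in for the paper's GLM amount equation, and all variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 3000

# Hypothetical covariates: indicator for having a "chosen" doctor, and a
# standardized income measure.
chosen = rng.binomial(1, 0.5, size=n)
income = rng.standard_normal(n)

# Part 1 (whether a woman pays): probit with a strong chosen-doctor effect,
# calibrated so roughly 17% pay without a chosen doctor and ~80% with one.
pay = (norm.ppf(0.17) + 1.8 * chosen + 0.1 * income
       + rng.standard_normal(n) > 0).astype(int)

# Part 2 (how much, payers only): lognormal amounts around 86 vs 191 euros.
amount = np.where(pay == 1,
                  np.exp(np.log(86.0) + np.log(191.0 / 86.0) * chosen
                         + 0.3 * rng.standard_normal(n)),
                  0.0)

def probit_negll(beta):
    """Negative log-likelihood of the participation probit."""
    p = norm.cdf(beta[0] + beta[1] * chosen + beta[2] * income)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(pay * np.log(p) + (1 - pay) * np.log(1 - p))

fit = minimize(probit_negll, x0=np.zeros(3), method="BFGS")
b_chosen = fit.x[1]

# Amount equation on payers: OLS on log(amount) as a lognormal stand-in
# for the paper's GLM.
m = pay == 1
X = np.column_stack([np.ones(m.sum()), chosen[m], income[m]])
g, *_ = np.linalg.lstsq(X, np.log(amount[m]), rcond=None)

print(f"probit coefficient on chosen doctor:   {b_chosen:.2f}")
print(f"multiplicative effect on amount paid:  {np.exp(g[1]):.2f}")
```

The paper's first part is a bivariate probit rather than a univariate one, so that provider choice and payment can share unobserved determinants; this sketch only shows the two-part decomposition itself.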

  19. Risk Pricing in Emerging Economies: Credit Scoring and Private Banking in Iran

    Directory of Open Access Journals (Sweden)

    Yiannis Anagnostopoulos

    2016-01-01

    Full Text Available Iran's banking industry, in a developing country, is comparatively new to risk management practices. An inevitable implication of this rapid growth is growing concern with regard to credit risk management, which motivated this research. The paper focuses on the credit-scoring aspect of credit risk management using both logit and probit regression approaches. Real data on corporate customers were available for this research, which is also a contribution to this area for other developing countries. Our questions focus on how future customers can be classified in terms of creditworthiness, and which models and methods are more effective in capturing risks. Findings suggest that probit approaches are more effective in capturing the significance of variables and in goodness-of-fit tests. Seven variables of the Ohlson O-Score model are used: CL_CA, INTWO, OENEG, TA_TL, SIZE, WCAP_TA, and ROA; two were found to be statistically significant in logit (ROA, TL_TA) and three in probit (ROA, TL_TA, SIZE). Also, CL_CA, ROA, and WCAP_TA were the three variables with an unexpected correlation to the probability of default. With the cut-off point set equal to 26%, the prediction power for defaulted customers is 56.91% in both the logit and probit models. However, logit achieved 54.85% correct estimation of defaulted assets, 0.37% more than what probit estimated.
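
The logit-versus-probit comparison amounts to maximizing the same Bernoulli likelihood under two link functions. The sketch below fits both by direct MLE on simulated ratios (hypothetical stand-ins for the O-Score variables; the data and the 26% cut-off classification are illustrative only, not the paper's results).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 1500

# Simulated financial ratios (hypothetical stand-ins for O-Score variables).
roa = rng.standard_normal(n)
size = rng.standard_normal(n)
p_true = expit(-1.0 - 1.2 * roa + 0.5 * size)   # higher ROA -> fewer defaults
default = rng.binomial(1, p_true)

def negll(beta, link_cdf):
    """Negative Bernoulli log-likelihood under a given link CDF."""
    p = link_cdf(beta[0] + beta[1] * roa + beta[2] * size)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(default * np.log(p) + (1 - default) * np.log(1 - p))

logit = minimize(negll, np.zeros(3), args=(expit,), method="BFGS")
probit = minimize(negll, np.zeros(3), args=(norm.cdf,), method="BFGS")

# Classify with a 26% cut-off (as in the paper) and check the share of
# actual defaulters flagged by the probit model.
p_hat = norm.cdf(probit.x[0] + probit.x[1] * roa + probit.x[2] * size)
hit_rate = np.mean((p_hat >= 0.26)[default == 1])

print(f"logit  log-likelihood: {-logit.fun:.1f}")
print(f"probit log-likelihood: {-probit.fun:.1f}")
print(f"probit hit rate on defaulters at 26% cut-off: {hit_rate:.2%}")
```

The two links usually give nearly identical fits on the same data; differences such as those the paper reports come from the tails of the link functions and the specific sample.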

  20. Distributed multi-criteria model evaluation and spatial association analysis

    Science.gov (United States)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, it does not embrace the trade-offs between different indicators and the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to a poor model performance include aridity, a very flat and steep relief, snowfall and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to efficiency criterion. 
The model performed much better in the humid Eastern region than in the arid Western region which was confirmed by the
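
The error-based criteria used above are one-liners; the sketch below implements NSE, PBIAS, and RSR on toy discharge data (the sign convention for PBIAS follows the common definition in which positive values indicate underestimation; the numbers are invented).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive = underestimation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

obs = np.array([10.0, 12.0, 8.0, 15.0, 11.0])   # observed monthly discharge
sim = np.array([9.0, 13.0, 8.5, 14.0, 10.0])    # simulated discharge

print(f"NSE = {nse(obs, sim):.3f}")
print(f"PBIAS = {pbias(obs, sim):+.2f}%")
print(f"RSR = {rsr(obs, sim):.3f}")
```

Computing these per sub-watershed, as the study does, then reveals the spatial trade-offs that a single aggregated score hides.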

  1. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. The validity and limitations of three distinct types of modelling codes will be contrasted: discrete models which utilize ray-tracing techniques, approximate continuous-field models based on a parabolic approximation of the wave equation, and full-field models derived using finite-difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs

  2. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    OpenAIRE

    J. Zscheischler; R. Orth; S. I. Seneviratne

    2017-01-01

    Crops are vital for human society. Crop yields vary with climate and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have investigated mostly linear relationships between temperature and precipitation and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on ...

  3. Interactive Graphics System for the Study of Variance/Covariance Structures of Bivariate and Multivariate Normal Populations

    Science.gov (United States)

    1992-09-01

    …students (who later become workers) who are complacent and uncreative. What is needed is an educational process that promotes active learning. The typical mathematics learning process (Figure 1.1) encourages passive and uncreative behavior.

  4. Gender and Achievement Differences in Secondary Students' Verbal Self-Concepts: A Closer Look beyond Bivariate Comparison

    Science.gov (United States)

    Faber, Gunter

    2013-01-01

    Introduction: Against the background of contradictory research findings in the field the present study aimed at unraveling the structural complexities of gender differences in secondary students' verbal self-concepts and, thus, analyzing possible gender x achievement interaction effects in the L1 German and L2 English language subject. According…

  5. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designer's challenge is to combine strategy and

  6. The Relation between the Fear-Avoidance Model and Constructs from the Social Cognitive Theory in Acute WAD.

    Science.gov (United States)

    Sandborgh, Maria; Johansson, Ann-Christin; Söderlund, Anne

    2016-01-01

    In the fear-avoidance (FA) model, social cognitive constructs could add to explaining the disabling process in whiplash associated disorder (WAD). The aim was to exemplify the possible input from Social Cognitive Theory on the FA model. Specifically, the role of functional self-efficacy and perceived responses from a spouse/intimate partner was studied. A cross-sectional and correlational design was used. Data from 64 patients with acute WAD were used. Measures were pain intensity measured with a numerical rating scale, the Pain Disability Index, the support, punishing responses, solicitous responses, and distracting responses subscales from the Multidimensional Pain Inventory, the Catastrophizing subscale from the Coping Strategies Questionnaire, the Tampa Scale of Kinesiophobia, and the Self-Efficacy Scale. Bivariate correlational, simple linear regression, and multiple regression analyses were used. In the statistical prediction models, high pain intensity indicated high punishing responses, which indicated high catastrophizing. High catastrophizing indicated high fear of movement, which indicated low self-efficacy. Low self-efficacy indicated high disability, which indicated high pain intensity. All independent variables together explained 66.4% of the variance in pain disability. The results highlight the role of the social environment, in particular perceived punishing responses from a spouse/intimate partner, together with pain intensity and catastrophizing. Further, the results support a mediating role of self-efficacy between fear of movement and disability in WAD.

  7. Compatible Models of Carbon Content of Individual Trees on a Cunninghamia lanceolata Plantation in Fujian Province, China.

    Directory of Open Access Journals (Sweden)

    Lin Zhuo

    Full Text Available We tried to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation in Fujian province, southeast China. In general, compatibility requires that the sum of components equal the whole tree, meaning that the sum of percentages calculated from component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage, roots and whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables relating diameter at breast height (D) and tree height (H), such as D, D2H, DH and D&H (where D&H means two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression was employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the determination coefficient. The results indicated that models with two-dimensional variables (DH, D2H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could fit a single optimal model for each component separately, but its estimates deviated because of the remaining incompatibility, while NAP and NSUR ensured prediction compatibility. We also found that the new general model was more accurate than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir, and that it be considered for other vegetation types as well.
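
    The compatibility requirement above (component predictions summing to the whole-tree prediction) can be illustrated with a minimal sketch of the NAP idea: fit the components freely, then rescale them in proportion so their sum matches the whole-tree model. The numbers below are hypothetical, not taken from the Fujian data.

```python
import numpy as np

def nap_adjust(component_preds, total_pred):
    """Nonlinear adjustment in proportion (NAP), sketched: rescale
    independently fitted component predictions so that they sum
    exactly to the separately fitted whole-tree prediction."""
    component_preds = np.asarray(component_preds, dtype=float)
    scale = total_pred / component_preds.sum()
    return component_preds * scale

# Hypothetical carbon predictions (kg) for bole, branches, foliage, roots
components = [55.0, 12.0, 6.0, 17.0]  # sum = 90, incompatible with total
total = 95.0                          # whole-tree model prediction
adjusted = nap_adjust(components, total)
print(adjusted.sum())  # components now sum to the whole-tree value
```

    The rescaling preserves the relative shares of the components, which is the sense in which the adjustment is "in proportion".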

  8. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions and time-varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of the effects and heat transfer processes of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future.

  9. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    Science.gov (United States)

    Hoyer, Annika; Kuss, Oliver

    2016-08-02

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared. © The Author(s) 2016.

  10. Turbulence modelling

    International Nuclear Information System (INIS)

    Laurence, D.

    1997-01-01

    This paper is an introductory course in modelling turbulent thermohydraulics, aimed at computational fluid dynamics users. No specific knowledge other than the Navier-Stokes equations is required beforehand. Chapter I (which those who are not beginners can skip) provides basic ideas on turbulence physics and is taken up in a textbook prepared by the teaching team of the ENPC (Benque, Viollet). Chapter II describes turbulent-viscosity-type modelling and the k-ε two-equation model, with details of the channel flow case and the boundary conditions. Chapter III describes the 'standard' Reynolds stress transport model (Rij-ε) and introduces more recent models called 'feasible'. A second paper deals with heat transfer and the effects of gravity, and returns to the Reynolds stress transport model. (author)

  11. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  12. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods, and identifies tools which can provide support for this modelling activity. The model adopted for global manufacturing is that of an extended enterprise, sharing many of the characteristics of a virtual enterprise. This extended enterprise will have the following characteristics: it is focused on satisfying the current customer requirement, so that it has a limited life expectancy but should be capable of being recreated to deal ... One or more units from beyond the network may complement the extended enterprise. The common reference model for this extended enterprise will utilise GERAM (Generalised Enterprise Reference Architecture and Methodology) to provide an architectural framework for the modelling carried out within ...

  13. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical ...

  14. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic, and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects, events are phenomena that can be observed and described. For example, borrow events in a library can be characterized by their occurrence times and the participating books and borrowers. When we characterize events as information objects we focus on concepts like information structures. When viewed as change agents, events are phenomena that trigger change. For example, when a borrow event occurs, books are moved ...

  15. Model : making

    OpenAIRE

    Bottle, Neil

    2013-01-01

    The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...

  16. Spherical models

    CERN Document Server

    Wenninger, Magnus J

    2012-01-01

    Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids, and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.

  17. An improved Rosetta pedotransfer function and evaluation in earth system models

    Science.gov (United States)

    Zhang, Y.; Schaap, M. G.

    2017-12-01

    Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an attractive alternative for predicting them. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with bootstrap re-sampling, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980; abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of univariate and bivariate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, diverse PTFs are used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980] based PTFs are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of these diverse PTFs. The PTFs are evaluated with respect to different soil and environmental characteristics, such as soil texture, soil organic carbon, soil pH, precipitation, and soil temperature. This analysis provides more quantitative estimation error information for PTF predictions across the disciplines of earth system modeling.
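
    As a concrete reference point for the VG retention model mentioned above, the closed-form curve can be evaluated directly. The sketch below uses illustrative parameter values (not Rosetta output) and the common Mualem constraint m = 1 - 1/n.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention: volumetric water content
    theta as a function of matric head h (in units consistent with
    1/alpha), with the Mualem constraint m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    h = np.abs(np.asarray(h, dtype=float))
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative (hypothetical) parameters for a loamy soil
theta = van_genuchten_theta(h=[0.0, 100.0, 1000.0],
                            theta_r=0.05, theta_s=0.45,
                            alpha=0.02, n=1.4)
print(theta)  # saturation (theta_s) at h = 0, drier as suction grows
```

    Note that at h = 0 the curve returns theta_s exactly, and it approaches theta_r asymptotically at high suction, which is the shape Rosetta's parameter estimates feed into.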

  18. Factors associated with past research participation among low-income persons living with HIV.

    Science.gov (United States)

    Slomka, Jacquelyn; Kypriotakis, Georgios; Atkinson, John; Diamond, Pamela M; Williams, Mark L; Vidrine, Damon J; Andrade, Roberto; Arduino, Roberto

    2012-08-01

    We described influences on past research participation among low-income persons living with HIV (PLWH) and examined whether such influences differed by study type. We analyzed a convenience sample of individuals from a large, urban clinic specializing in treating low-income PLWH. Using a computer-assisted survey, we elicited perceptions of research and participating in research, barriers, benefits, "trigger" influences, and self-efficacy in participating in research. Of 193 participants, we excluded 14 who did not identify any type of study participation, and 17 who identified "other" as study type, resulting in 162 cases for analysis. We compared results among four groups (i.e., 6 comparisons): past medical participants (n=36, 22%), past behavioral participants (n=49, 30%), individuals with no past research participation (n=52, 32%), and persons who had participated in both medical and behavioral studies (n=25, 15%). Data were analyzed using chi-square tests for categorical variables and ANOVA for continuous variables. We employed a multinomial probit (MNP) model to examine the association of multiple factors with the outcome. Confidence in ability to keep appointments, and worry about being a 'guinea pig' showed statistical differences in bivariate analyses. The MNP regression analysis showed differences between and across all 6 comparison groups. Fewer differences were seen across groupings of medical participants, behavioral participants, and those with no past research experience, than in comparisons with the medical-behavioral group. In the MNP regression model 'age' and level of certainty regarding 'keeping yourself from being a guinea pig' showed significant differences between past medical participants and past behavioral participants.

  19. Disparities in work, risk and health between immigrants and native-born Spaniards.

    Science.gov (United States)

    Solé, Meritxell; Diaz-Serrano, Luis; Rodríguez, Marisol

    2013-01-01

    The probability of acquiring a permanent disability is partly determined by working and contractual conditions, particularly exposure to job risks. We postulate a model in which this impact is mediated by the choice of occupation, with a level of risk associated with it. We assume this choice is endogenous and that it depends on preferences and opportunities in the labour market, both of which may differ between immigrants and natives. To test this hypothesis we apply a bivariate probit model, in which we control for personal and firm characteristics, to data for 2006 from the Continuous Sample of Working Lives provided by the Spanish Social Security system, containing records for over a million workers. We find that risk exposure increases the probability of permanent disability--arising from any cause--by almost 5%. Temporary employment and low-skilled jobs also have a positive impact. Increases in education reduce the likelihood of disability, even after controlling for the impact of education on the choice of (lower) risk. Females have a greater probability of becoming disabled. Migrant status--with differences among regions of origin--significantly affects both disability and the probability of being employed in a high-risk occupation. In spite of immigrants' working conditions being objectively worse, they exhibit a lower probability of becoming disabled than natives because the impact of such conditions on disability is much smaller in their case. Time elapsed since first enrolment in the Social Security system increases the probability of disability in a proportion similar to that of natives, which is consistent with the immigrant assimilation hypothesis. We finally conclude that our theoretical hypothesis that disability and risk are jointly determined is only valid for natives and not valid for immigrants, in the sense that, for them, working conditions are not a matter of choice in terms of health. Copyright © 2012 Elsevier Ltd. All rights reserved.
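
    The joint determination of occupational risk and disability described above is exactly what a bivariate probit captures: two probit equations whose latent errors are allowed to correlate. The sketch below writes out the likelihood directly and fits it on simulated data; the variable names and simulated design are assumptions for illustration, not the Spanish Social Security register data.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def biprobit_negll(params, X1, X2, y1, y2):
    """Negative log-likelihood of a bivariate probit. The sign flips
    q = 2y - 1 let a single bivariate normal CDF term cover all four
    outcome combinations; rho is parameterized via tanh to stay in (-1, 1)."""
    k1 = X1.shape[1]
    b1, b2, rho = params[:k1], params[k1:-1], np.tanh(params[-1])
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1
    ll = 0.0
    for a, b, r in zip(q1 * (X1 @ b1), q2 * (X2 @ b2), q1 * q2 * rho):
        p = multivariate_normal.cdf([a, b], mean=[0, 0],
                                    cov=[[1, r], [r, 1]])
        ll += np.log(max(p, 1e-300))  # guard against log(0)
    return -ll

# Tiny simulated example: correlated latent errors (true rho = 0.5)
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
y1 = (X @ [0.2, 0.8] + e[:, 0] > 0).astype(int)   # e.g. risky occupation
y2 = (X @ [-0.1, 0.5] + e[:, 1] > 0).astype(int)  # e.g. disability
res = minimize(biprobit_negll, np.zeros(5), args=(X, X, y1, y2),
               method="Nelder-Mead", options={"maxiter": 150})
print(np.tanh(res.x[-1]))  # estimate of the error correlation rho
```

    A non-zero rho is what signals that the two outcomes are jointly determined; with rho = 0 the model collapses into two independent probits.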

  20. National South African HIV prevalence estimates robust despite substantial test non-participation

    Directory of Open Access Journals (Sweden)

    Guy Harling

    2017-07-01

    Full Text Available Background. South African (SA) national HIV seroprevalence estimates are of crucial policy relevance in the country, and for the worldwide HIV response. However, the most recent nationally representative HIV test survey in 2012 had 22% test non-participation, leaving the potential for substantial bias in current seroprevalence estimates, even after controlling for selection on observed factors. Objective. To re-estimate national HIV prevalence in SA, controlling for bias due to selection on both observed and unobserved factors in the 2012 SA National HIV Prevalence, Incidence and Behaviour Survey. Methods. We jointly estimated regression models for consent to test and HIV status in a Heckman-type bivariate probit framework. As the selection variable, we used assigned interviewer identity, a variable known to predict consent but highly unlikely to be associated with interviewees' HIV status. From these models, we estimated the HIV status of interviewed participants who did not test. Results. Of 26 710 interviewed participants who were invited to test for HIV, 21.3% of females and 24.3% of males declined. Interviewer identity was strongly correlated with consent to test for HIV; declining a test was weakly associated with HIV serostatus. Our HIV prevalence estimates were not significantly different from those using standard methods to control for bias due to selection on observed factors: 15.1% (95% confidence interval (CI) 12.1 - 18.6) v. 14.5% (95% CI 12.8 - 16.3) for 15 - 49-year-old males; 23.3% (95% CI 21.7 - 25.8) v. 23.2% (95% CI 21.3 - 25.1) for 15 - 49-year-old females. Conclusion. The most recent SA HIV prevalence estimates are robust under the strongest available test for selection bias due to missing data. Our findings support the reliability of inferences drawn from such data.
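
    The key step in the Heckman-type correction described above is inferring the serostatus distribution of refusers from the fitted joint model. Under a bivariate probit with error correlation rho, the probability that a refuser is positive has a closed form; the sketch below evaluates it for hypothetical linear predictors (the actual fitted values are not reported in the abstract).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def p_pos_given_refusal(xb_consent, xb_status, rho):
    """Heckman-type imputation step, sketched: P(status=1 | consent=0)
    from a fitted bivariate probit with error correlation rho, using
    P(status=1, consent=1) = Phi2(xb_consent, xb_status; rho)."""
    phi2 = multivariate_normal.cdf([xb_consent, xb_status],
                                   mean=[0, 0],
                                   cov=[[1, rho], [rho, 1]])
    return (norm.cdf(xb_status) - phi2) / (1.0 - norm.cdf(xb_consent))

# Hypothetical linear predictors; a negative rho means refusers are
# more likely to be positive than observably similar testers
print(p_pos_given_refusal(0.8, -1.0, -0.4))
print(norm.cdf(-1.0))  # unconditional P(status=1) for comparison
```

    With rho = 0 the conditional probability reduces to the unconditional Phi(xb_status), which is why the survey's near-zero association between refusal and serostatus translates into prevalence estimates close to the standard ones.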