WorldWideScience

Sample records for bivariate probit models

  1. The bivariate probit model of uncomplicated control of tumor: a heuristic exposition of the methodology

    International Nuclear Information System (INIS)

    Herbert, Donald

    1997-01-01

    Purpose: To describe the concept, models, and methods for the construction of estimates of the joint probability of uncomplicated control of tumors in radiation oncology. Interpolations using this model can lead to the identification of more efficient treatment regimens for an individual patient. The requirement to find the treatment regimen that will maximize the joint probability of uncomplicated control of tumors suggests a new class of evolutionary experimental designs--Response Surface Methods--for clinical trials in radiation oncology. Methods and Materials: The software developed by Lesaffre and Molenberghs is used to construct bivariate probit models of the joint probability of uncomplicated control of cancer of the oropharynx from a set of 45 patients, for each of whom the presence/absence of recurrent tumor (the binary event Ē₁/E₁) and the presence/absence of necrosis (the binary event E₂/Ē₂) of the normal tissues of the target volume is recorded, together with the treatment variables dose, time, and fractionation. Results: The bivariate probit model can be used to select a treatment regimen that will give a specified probability, say P(S) = 0.60, of uncomplicated control of tumor by interpolation within a set of treatment regimens with known outcomes of recurrence and necrosis. The bivariate probit model can also be used to guide a sequence of clinical trials to find the maximum probability of uncomplicated control of tumor for patients in a given prognostic stratum using Response Surface Methods by extrapolation from an initial set of treatment regimens. Conclusions: The design of treatments for individual patients and the design of clinical trials might be improved by use of a bivariate probit model and Response Surface Methods.
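    The joint-outcome probability that such a bivariate probit model evaluates can be sketched in code. The following log-likelihood is our own illustrative sketch with hypothetical variable names, not the Lesaffre-Molenberghs software:

```python
# Illustrative bivariate probit log-likelihood for two binary outcomes
# (e.g. tumor recurrence and normal-tissue necrosis). Hypothetical sketch.
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_loglik(params, X, y1, y2):
    """params = [beta1 (k), beta2 (k), atanh(rho)]; X is (n, k); y1, y2 in {0,1}."""
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])          # keep latent correlation in (-1, 1)
    q1 = 2 * y1 - 1                    # map {0,1} -> {-1,+1}
    q2 = 2 * y2 - 1
    ll = 0.0
    for i in range(X.shape[0]):
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf([q1[i] * (X[i] @ b1), q2[i] * (X[i] @ b2)],
                                    mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))   # guard against log(0)
    return ll
```

    Maximizing this function over `params` (for example with `scipy.optimize.minimize` on its negative) yields estimates of both probit equations and the latent correlation.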

  2. Assessing characteristics related to the use of seatbelts and cell phones by drivers: application of a bivariate probit model.

    Science.gov (United States)

    Russo, Brendan J; Kay, Jonathan J; Savolainen, Peter T; Gates, Timothy J

    2014-06-01

    The effects of cell phone use and safety belt use have been an important focus of research related to driver safety. Cell phone use has been shown to be a significant source of driver distraction contributing to substantial degradations in driver performance, while safety belts have been demonstrated to play a vital role in mitigating injuries to crash-involved occupants. This study examines the prevalence of cell phone use and safety belt non-use among the driving population through direct observation surveys. A bivariate probit model is developed to simultaneously examine the factors that affect cell phone and safety belt use among motor vehicle drivers. The results show that several factors may influence drivers' decision to use cell phones and safety belts, and that these decisions are correlated. Understanding the factors that affect both cell phone use and safety belt non-use is essential to targeting policy and programs that reduce such behavior. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. The Effect of Supplemental Instruction on Retention: A Bivariate Probit Model

    Science.gov (United States)

    Bowles, Tyler J.; Jones, Jason

    2004-01-01

    Single equation regression models have been used to test the effect of Supplemental Instruction (SI) on student retention. These models, however, fail to account for two salient features of SI attendance and retention: (1) both are categorical variables, and (2) they are jointly determined endogenous variables. Adopting…

  4. Effectiveness of enforcement levels of speed limit and drink driving laws and associated factors – Exploratory empirical analysis using a bivariate ordered probit model

    Directory of Open Access Journals (Sweden)

    Behram Wali

    2017-06-01

    The contemporary traffic safety literature contains little information quantifying the simultaneous association between drink driving and speeding among fatally injured drivers. The potential correlation between drivers' drink driving and speeding behavior poses a substantial methodological concern that needs investigation. This study therefore investigated the simultaneous impact of socioeconomic factors, fatalities, vehicle ownership, health services, and highway-agency road safety policies on the enforcement levels of speed limit and drink driving laws. The effectiveness of these enforcement levels was investigated by developing a bivariate ordered probit model using data extracted from WHO's 2013 global status report on road safety. Consistent and intuitive parameter estimates, along with a statistically significant correlation between the response outcomes, support the choice of the bivariate ordered probit model. The results revealed that fatalities per thousand registered vehicles, hospital beds per hundred thousand population, and road safety policies are associated with a likely medium or high effectiveness of the enforcement of speed limit and drink driving laws, respectively. The model also captures the effect of several other agency-related variables and socioeconomic status on the response outcomes. Marginal effects are reported to analyze the impact of such factors on the intermediate categories of the response outcomes. The results of this study are expected to provide useful insights for enforcement programs, and the marginal effects of the explanatory variables may provide useful directions for formulating effective policy countermeasures against speeding and drink driving.
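    A bivariate ordered probit is built from two correlated ordered probit equations. The single-equation building block can be sketched as follows; this is our own illustration with hypothetical cutpoint values, not the authors' code:

```python
# Category probabilities in an ordered probit with, e.g., three enforcement
# levels (low/medium/high). Illustrative sketch, not the paper's model.
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(xb, cutpoints):
    """P(y = j | x) for ordinal categories j = 0..J, given linear index xb
    and strictly increasing cutpoints c_1 < ... < c_J."""
    c = np.concatenate(([-np.inf], np.asarray(cutpoints, float), [np.inf]))
    return np.array([norm.cdf(c[j + 1] - xb) - norm.cdf(c[j] - xb)
                     for j in range(len(c) - 1)])

# Example with hypothetical values: linear index 0.3, cutpoints -0.5 and 0.8.
probs = ordered_probit_probs(0.3, [-0.5, 0.8])
```

    The bivariate version replaces the products of univariate normal CDFs with bivariate normal rectangle probabilities, linking the two equations through a correlation parameter.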

  5. Modelling the vicious circle between obesity and physical activity in children and adolescents using a bivariate probit model with endogenous regressors.

    Science.gov (United States)

    Yeh, C-Y; Chen, L-J; Ku, P-W; Chen, C-M

    2015-01-01

    The increasing prevalence of obesity in children and adolescents has become one of the most important public health issues around the world. Lack of physical activity is a risk factor for obesity, while being obese could reduce the likelihood of participating in physical activity. Failing to account for the endogeneity between obesity and physical activity would result in biased estimation. This study investigates the relationship between overweight and physical activity by taking endogeneity into consideration. It develops an endogenous bivariate probit model estimated by the maximum likelihood method. The data included 4008 boys and 4197 girls in the 5th-9th grades in Taiwan in 2007-2008. The relationship between overweight and physical activity is significantly negative in the endogenous model, but insignificant in the comparative exogenous model. This endogenous relationship presents a vicious circle in which lower levels of physical activity lead to overweight, while those who are already overweight engage in less physical activity. The results not only reveal the importance of endogenous treatment, but also demonstrate the robust negative relationship between these two factors. An emphasis should be put on overweight and obese children and adolescents in order to break the vicious circle. Promotion of physical activity by appropriate counselling programmes and peer support could be effective in reducing the prevalence of obesity in children and adolescents.

  6. Evaluation of Factors Affecting E-Bike Involved Crash and E-Bike License Plate Use in China Using a Bivariate Probit Model

    Directory of Open Access Journals (Sweden)

    Yanyong Guo

    2017-01-01

    The primary objective of this study is to evaluate factors affecting e-bike-involved crashes and license plate use in China. E-bike crash data were collected from a police database and supplemented through telephone interviews. Non-crash samples were collected via a questionnaire survey. A bivariate probit (BP) model was developed to simultaneously examine the significant factors associated with e-bike-involved crashes and e-bike license plate use, and to account for the correlation between them. Marginal effects of contributory factors were calculated to quantify their impacts on the outcomes. The results show that several contributory factors, including gender, age, education level, driver license, car ownership in the household, e-bike riding experience, law compliance, and aggressive driving behaviors, have significant impacts on both e-bike-involved crashes and license plate use. Moreover, type of e-bike, frequency of e-bike use, impulsive behavior, degree of riding experience, and risk perception scale are found to be associated with e-bike-involved crashes. E-bike-involved crashes and e-bike license plate use are also found to be strongly and negatively correlated. The results enhance our understanding of the factors related to e-bike-involved crashes and e-bike license plate use.
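    The marginal effects reported above are typically computed, for a continuous regressor in a single probit equation, as the sample average of φ(x'β)·β. A minimal sketch (our illustration, not the authors' code):

```python
# Average marginal effects (AME) of continuous regressors in a probit model:
# dP/dx_k = phi(x'b) * b_k, averaged over the sample. Illustrative sketch.
import numpy as np
from scipy.stats import norm

def probit_ame(X, beta):
    """AME for continuous regressors: mean_i phi(x_i' b) times beta."""
    X = np.asarray(X, float)
    beta = np.asarray(beta, float)
    xb = X @ beta
    return norm.pdf(xb).mean() * beta
```

    For binary regressors one would instead take the discrete difference of predicted probabilities at 0 and 1; the density-times-coefficient formula applies only to continuous covariates.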

  7. The Role of Wealth and Health in Insurance Choice: Bivariate Probit Analysis in China

    Directory of Open Access Journals (Sweden)

    Yiding Yue

    2014-01-01

    This paper captures the correlation between the choices of health insurance and pension insurance using the bivariate probit model and then studies the effect of wealth and health on insurance choice. Our empirical evidence shows that people who participate in a health care program are more likely to participate in a pension plan at the same time, while wealth and health have different effects on the choices of the health care program and the pension program. Generally, the higher an individual's wealth level is, the more likely he is to participate in a health care program, but wealth has no effect on pension participation. Health status has opposite effects on the choices of health care programs and pension plans: the poorer an individual's health is, the more likely he is to participate in health care programs, while the better health he enjoys, the more likely he is to participate in pension plans. When the investigation narrows to commercial insurance, only health status has a significant effect, and only on commercial health insurance. The commercial insurance choice and the insurance choice of the agricultural population are more complicated.

  8. A Multinomial Probit Model with Latent Factors

    DEFF Research Database (Denmark)

    Piatek, Rémi; Gensowski, Miriam

    2017-01-01

    We develop a parametrization of the multinomial probit model that yields greater insight into the underlying decision-making process, by decomposing the error terms of the utilities into latent factors and noise. The latent factors are identified without a measurement system, and they can be meaningfully linked to an economic model. We provide sufficient conditions that make this structure identified and interpretable. For inference, we design a Markov chain Monte Carlo sampler based on marginal data augmentation. A simulation exercise shows the good numerical performance of our sampler…

  9. Nonparametric Bayesian models through probit stick-breaking processes.

    Science.gov (United States)

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
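    The weight construction described above can be sketched directly: stick proportions are probit transforms Φ(α_k) of normal random variables, and the k-th weight is that proportion of the stick remaining after the first k-1 breaks. A minimal sketch under a finite truncation level (our illustration, not the authors' code):

```python
# Probit stick-breaking weights: w_k = Phi(alpha_k) * prod_{j<k} (1 - Phi(alpha_j)).
# Illustrative sketch with a finite truncation level K = len(alpha).
import numpy as np
from scipy.stats import norm

def probit_stick_breaking_weights(alpha):
    """alpha: array of (typically normal) variates; returns weights w_1..w_K."""
    v = norm.cdf(np.asarray(alpha, float))               # stick proportions in (0, 1)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

# With alpha = 0 everywhere, each break takes half the remaining stick:
w = probit_stick_breaking_weights(np.zeros(5))           # 0.5, 0.25, 0.125, ...
```

    Dependence on time, space, or covariates enters by letting the α_k be Gaussian processes or regressions, which is what makes these priors flexible while keeping computation simple.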

  10. A Probit Model for the State of the Greek GDP Growth

    Directory of Open Access Journals (Sweden)

    Stavros Degiannakis

    2015-08-01

    The paper provides probability estimates of the state of GDP growth. A regime-switching model defines the probability of the Greek GDP being in boom or recession. Probit models then extract the predictive information of a set of explanatory (economic and financial) variables regarding the state of GDP growth. Both a contemporaneous and a lagged relationship between the explanatory variables and the state of GDP growth are examined. The mean absolute distance (MAD) between the probability of not being in recession and the probability estimated by the probit model is the function that evaluates the performance of the models. The probit model with the industrial production index and the realized volatility as explanatory variables has the lowest MAD value of 6.43% (7.94%) in the contemporaneous (lagged) relationship.
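    The MAD criterion used above is straightforward to state in code. This is our sketch; the inputs are the regime-switching reference probabilities of not being in recession and the probit model's fitted probabilities:

```python
# Mean absolute distance (MAD) between reference probabilities and model
# probabilities, as used to rank the candidate probit models. Illustrative sketch.
import numpy as np

def mean_absolute_distance(p_reference, p_hat):
    """MAD = mean |p_reference_t - p_hat_t| over the evaluation sample."""
    p_reference = np.asarray(p_reference, float)
    p_hat = np.asarray(p_hat, float)
    return float(np.mean(np.abs(p_reference - p_hat)))
```

    A lower MAD means the probit model's probabilities track the regime-switching benchmark more closely, which is how the 6.43% figure above ranks the industrial-production/realized-volatility specification first.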

  11. The Use of a Probit Model for the Validation of Selection Procedures.

    Science.gov (United States)

    Dagenais, Denyse L.

    1984-01-01

    After a review of the disadvantages of linear models for estimating the probability of academic success from previous school records and admission test results, the use of a probit model is proposed. The model is illustrated with admissions data from the Ecole des Hautes Etudes Commerciales in Montreal. (Author/BW)

  12. Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Karlson, Kristian Bernt; Holm, Anders

    2018-01-01

    Methods textbooks in sociology and other social sciences routinely recommend the use of the logit or probit model when an outcome variable is binary, an ordered logit or ordered probit when it is ordinal, and a multinomial logit when it has more than two categories. But these methodological guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show…

  13. Parameter Estimation in Probit Model for Multivariate Multinomial Response Using SMLE

    Directory of Open Access Journals (Sweden)

    Jaka Nugraha

    2012-02-01

    In the research fields of transportation, market research, and politics, studies often involve multivariate multinomial responses. In this paper, we discuss the modeling of multivariate multinomial responses using a probit model. The parameters were estimated by maximum likelihood estimation (MLE) based on the GHK simulator, a method known as simulated maximum likelihood estimation (SMLE). The likelihood function of the probit model contains probability values that must be evaluated by simulation. Using the GHK simulation algorithm, estimator equations were obtained for the parameters of the probit model. Keywords: probit model, Newton-Raphson iteration, GHK simulator, MLE, simulated log-likelihood
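    The GHK simulator referenced above estimates the multivariate normal rectangle probabilities that appear in the probit likelihood by sequentially drawing truncated normals along the Cholesky factor of the covariance. A simplified sketch (our illustration, not the paper's code):

```python
# Simplified GHK simulator: estimates P(Y < b) for Y ~ N(0, Sigma).
# Illustrative sketch; production code would vectorize and use antithetics.
import numpy as np
from scipy.stats import norm

def ghk_probability(b, Sigma, n_draws=2000, seed=0):
    rng = np.random.default_rng(seed)
    Sigma = np.asarray(Sigma, float)
    L = np.linalg.cholesky(Sigma)          # lower-triangular factor
    d = len(b)
    probs = np.ones(n_draws)
    eta = np.zeros((n_draws, d))           # sequential truncated-normal draws
    for j in range(d):
        upper = (b[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pj = norm.cdf(upper)
        probs *= pj                        # conditional probability of staying below b[j]
        u = rng.random(n_draws) * pj       # uniform on (0, Phi(upper))
        eta[:, j] = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    return probs.mean()
```

    For independent components the estimator is exact; under correlation, accuracy improves with the number of draws, and SMLE plugs these simulated probabilities into the log-likelihood.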

  14. Evaluating the performance of simple estimators for probit models with two dummy endogenous regressors

    DEFF Research Database (Denmark)

    Holm, Anders; Nielsen, Jacob Arendt

    2013-01-01

    This study considers the small sample performance of approximate but simple two-stage estimators for probit models with two endogenous binary covariates. Monte Carlo simulations show that all the considered estimators, including the simulated maximum-likelihood (SML) estimation, of the trivariate …

  15. Another Look at the Method of Y-Standardization in Logit and Probit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    2015-01-01

    This paper takes another look at the derivation of the method of Y-standardization used in sociological analysis involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than...

  16. Application of Bivariate Probit Regression to Identify Factors Affecting Student Graduation (Case Study: Students of the Faculty of Mathematics and Natural Sciences, Udayana University)

    Directory of Open Access Journals (Sweden)

    NI GUSTI KETUT TRISNA PRADNYANTARI

    2015-06-01

    The aim of this research is to identify the factors that affect student graduation using bivariate probit regression. Bivariate probit regression is a statistical method involving two qualitative response variables, with independent variables that are qualitative, quantitative, or a combination of both. In a bivariate probit regression model, the result obtained is the probability of the response variables. This research finds that the factors significantly affecting graduation as measured by length of study are major, sex, and thesis duration, while the factors significantly affecting graduation as measured by GPA are the entry system, thesis duration, and the number of the parents' dependents.

  17. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with bivariate geometric marginals are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin's bivariate exponential, Downton's bivariate exponential, and Hawkes' bivariate exponential, are presented.

  18. DETERMINANTS OF SOVEREIGN RATING: FACTOR BASED ORDERED PROBIT MODELS FOR PANEL DATA ANALYSIS MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Dilek Teker

    2013-01-01

    The aim of this research is to compose a new rating methodology and assign credit notches to 23 countries, of which 13 are developed and 10 are emerging. There is a varied literature explaining the determinants of credit ratings. Following the literature, we select 11 variables for our model, of which 5 are eliminated by factor analysis. We use specific dummies to investigate structural breaks in time and cross-section, such as pre-crisis, post-crisis, BRIC membership, EU membership, OPEC membership, shipbuilding country, and platinum-reserve country. We then run an ordered probit model and assign credit notches to the countries, using FITCH ratings as a benchmark. Finally, we compare FITCH's notches with the ones derived from our estimated model.

  19. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    Science.gov (United States)

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  20. The individual tolerance concept is not the sole explanation for the probit dose-effect model

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M.C.; McCloskey, J.T.

    2000-02-01

    Predominant methods for analyzing dose- or concentration-effect data (i.e., probit analysis) are based on the concept of individual tolerance or individual effective dose (IED, the smallest characteristic dose needed to kill an individual). An alternative explanation (stochasticity hypothesis) is that individuals do not have unique tolerances: death results from stochastic processes occurring similarly in all individuals. These opposing hypotheses were tested with two types of experiments. First, time to stupefaction (TTS) was measured for zebra fish (Brachydanio rerio) exposed to benzocaine. The same 40 fish were exposed during five trials to test if the same order for TTS was maintained among trials. The IED hypothesis was supported with a minor stochastic component being present. Second, eastern mosquitofish (Gambusia holbrooki) were exposed to sublethal or lethal NaCl concentrations until a large portion of the lethally exposed fish died. After sufficient time for recovery, fish sublethally exposed and fish surviving lethal exposure were exposed simultaneously to lethal NaCl concentrations. No statistically significant effect was found of previous exposure on survival time but a large stochastic component to the survival dynamics was obvious. Repetition of this second type of test with pentachlorophenol also provided no support for the IED hypothesis. The authors conclude that neither hypothesis alone was the sole or dominant explanation for the lognormal (probit) model. Determination of the correct explanation (IED or stochastic) or the relative contributions of each is crucial to predicting consequences to populations after repeated or chronic exposures to any particular toxicant.
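    The probit dose-effect model at issue above is typically fit by maximum likelihood on probit-transformed mortality versus log dose; the LC50 is the dose at which the fitted probability reaches 0.5. An illustrative fit on synthetic grouped data (our sketch, not the authors' analysis):

```python
# Probit dose-effect fit on grouped mortality data: P(death) = Phi(a + b*log10(dose)).
# The LC50 solves a + b*log10(d) = 0. Synthetic illustration, not the paper's data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit_lc50(log10_dose, n, deaths):
    """Fit (a, b) by maximum likelihood on binomial counts; return the LC50."""
    log10_dose = np.asarray(log10_dose, float)
    n = np.asarray(n, float)
    deaths = np.asarray(deaths, float)

    def negll(theta):
        p = np.clip(norm.cdf(theta[0] + theta[1] * log10_dose), 1e-10, 1 - 1e-10)
        return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))

    res = minimize(negll, x0=[0.0, 1.0], method="Nelder-Mead")
    a, b = res.x
    return 10 ** (-a / b)      # dose at which Phi(a + b*log10(d)) = 0.5
```

    Note that this fit is agnostic about mechanism: as the abstract argues, both the individual-tolerance and the stochastic hypotheses can generate data consistent with the same lognormal (probit) curve.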

  21. Extended probit mortality model for zooplankton against transient change of PCO2.

    Science.gov (United States)

    Sato, Toru; Watanabe, Yuji; Toyota, Koji; Ishizaka, Joji

    2005-09-01

    The direct injection of CO2 into the deep ocean is a promising way to mitigate global warming. One of the uncertainties in this method, however, is its impact on marine organisms in the near field. Since the concentration of CO2 that organisms experience in the ocean changes with time, a biological impact model for organisms under unsteady changes of CO2 concentration is required. In general, the LC50 concept is widely applied when testing a toxic agent for acute mortality. Here, we regard the probit-transformed mortality as a linear function not only of the CO2 concentration but also of exposure time. A simple mathematical transformation of this function gives a damage-accumulation mortality model for zooplankton. In this article, the model was validated by a mortality test of Metamphiascopsis hirsutus against transient changes of CO2 concentration.

  22. Bivariate Kumaraswamy Models via Modified FGM Copulas: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Indranil Ghosh

    2017-11-01

    A copula is a useful tool for constructing bivariate and/or multivariate distributions. In this article, we consider a new modified class of FGM (Farlie–Gumbel–Morgenstern) bivariate copulas for constructing several different bivariate Kumaraswamy-type copulas and discuss their structural properties, including dependence structures. It is established that constructing bivariate distributions by this method allows for greater flexibility in the values of Spearman's correlation coefficient ρ and Kendall's τ.
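    Sampling from a plain FGM copula is simple because the conditional CDF C(v|u) = (1+a)v - av², with a = θ(1-2u), inverts in closed form; for the unmodified FGM family, Spearman's ρ equals θ/3, which is exactly the limited dependence range the modified class above seeks to relax. A hedged sketch (our illustration, not the article's construction):

```python
# Conditional-inversion sampling from a standard FGM copula, |theta| <= 1.
# Illustrative sketch; the article's modified class generalizes this family.
import numpy as np

def sample_fgm(theta, n, seed=0):
    """Draw n pairs (u, v) with uniform marginals and FGM dependence."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    w = rng.random(n)                        # uniform used to invert C(v | u)
    a = theta * (1.0 - 2.0 * u)              # conditional CDF is (1+a)v - a v^2
    a_safe = np.where(np.abs(a) < 1e-12, 1.0, a)
    v = np.where(np.abs(a) < 1e-12, w,
                 ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a_safe))
    return u, v
```

    Plugging Kumaraswamy quantile functions into (u, v) then yields bivariate Kumaraswamy-type samples with this dependence structure.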

  23. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  24. Modeling Toothpaste Brand Choice: An Empirical Comparison of Artificial Neural Networks and the Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Tolga Kaya

    2010-11-01

    The purpose of this study is to compare the performances of Artificial Neural Networks (ANN) and the Multinomial Probit (MNP) approach in modeling choice decisions within the fast-moving consumer goods sector. To do this, choice models are built based on 2597 toothpaste purchases by a panel sample of 404 households, and their performances are compared on the 861 purchases of a test sample of 135 households. Results show that the ANN's predictions are better, while the MNP is useful in providing marketing insight.

  25. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Directory of Open Access Journals (Sweden)

    Tsair-Fwu Lee

    2015-01-01

    To develop logistic and probit models to analyse the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, the surface EMG (sEMG) signal was obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) for the logistic model and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for the probit model. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow.

  26. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Science.gov (United States)

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

    To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ 50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281

  27. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from … be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method's reliability estimates were, in general, lower. Computationally, on the other hand, the bivariate blending method was lighter than the single-step method.

  28. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of the route, and the unobserved variance, respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation and an M-H sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for route choice probability calculation and a good application performance for transfer flow volume prediction. PMID:28591188

  29. The spatial Probit model-An application to the study of banking crises at the end of the 1990’s

    Science.gov (United States)

    Amaral, Andrea; Abreu, Margarida; Mendes, Victor

    2014-12-01

    We use a spatial probit model to study the effect of contagion between the banking systems of different countries. Applied to the late-1990s banking crisis in Asia, we show that the phenomenon of contagion is better captured using a spatial rather than a traditional probit model. Unlike the latter, the spatial probit model allows one to consider the cascade of cross and feedback effects of contagion that result from the outbreak of one initial crisis in one country or system. These contagion effects may result either from business connections between institutions of different countries or from institutional similarities between banking systems.
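    The cascade of cross and feedback effects described above comes from the reduced form of the spatial latent equation: assuming a SAR-type specification, y* = ρWy* + Xβ + ε implies y* = (I - ρW)⁻¹(Xβ + ε), so a shock in one country propagates through powers of W. A simulation sketch under this assumed specification (our illustration, not the authors' estimator):

```python
# Latent structure of a SAR-type spatial probit: a crisis indicator y = 1[y* > 0]
# where y* = (I - rho*W)^{-1} (X b + e). W is a (row-normalized) contagion matrix
# among countries. Illustrative sketch with hypothetical names.
import numpy as np

def simulate_spatial_probit(W, X, beta, rho, seed=0):
    """Simulate crisis indicators under the spatial probit latent equation."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    e = rng.standard_normal(n)                          # idiosyncratic shocks
    A_inv = np.linalg.inv(np.eye(n) - rho * np.asarray(W, float))
    y_star = A_inv @ (X @ beta + e)                     # reduced-form latent index
    return (y_star > 0).astype(int), y_star
```

    With ρ = 0 the reduced form collapses to a standard probit; ρ ≠ 0 is what lets one country's shock feed back through its neighbours' latent indices.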

  30. Measuring public understanding on Tenaga Nasional Berhad (TNB) electricity bills using ordered probit model

    Science.gov (United States)

    Zainudin, WNRA; Ramli, NA

    2017-09-01

    In 2016, Tenaga Nasional Berhad (TNB) introduced an upgrade to its Billing and Customer Relationship Management (BCRM) system as part of its long-term initiative to provide its customers with greater access to billing information. This includes information on actual and suggested power consumption and further details of billing charges. This information is useful in helping TNB customers gain a better understanding of their electricity usage patterns and of the items in their billing charges. To date, not many studies have measured public understanding of current electricity bills or whether this understanding contributes positive impacts. The purpose of this paper is to measure public understanding of current TNB electricity bills and whether satisfaction with energy-related services and electricity utility services, and awareness of the amount of electricity consumed by various appliances and equipment at home, could improve this understanding of the electricity bills. Both qualitative and quantitative research methods are used to achieve these objectives. A total of 160 respondents from local universities in Malaysia participated in a survey used to collect the relevant information. Using an ordered probit model, this paper finds that respondents who are highly satisfied with the electricity utility services tend to understand their electricity bills better. The electric utility services include the management of electricity bills and the information obtained from utility or non-utility suppliers to help consumers manage their energy usage or bills. Based on the results, this paper concludes that the probability of understanding the components of the monthly electricity bill increases as respondents are more satisfied with their electric utility services and are better able to value the energy-related services.

  11. Measuring public acceptance on renewable energy (RE) development in Malaysia using ordered probit model

    Science.gov (United States)

    Zainudin, W. N. R. A.; Ishak, W. W. M.

    2017-09-01

    In 2009, the government of Malaysia announced a National Renewable Energy Policy and Action Plan as part of its commitment to accelerate the growth of renewable energies (RE). However, the adoption of RE as a main source of energy is still at an early stage due to a lack of public awareness and acceptance of RE. To date, few studies have examined the reasons behind this lack of awareness and acceptance. Therefore, this paper investigates public acceptance of RE development by measuring respondents' willingness to pay slightly more for energy generated from RE sources, denoted the willingness level, and whether the importance they place on electricity being supplied at the lowest possible cost regardless of source and environmental impact, denoted the importance level, together with other socio-economic factors, could improve their willingness level. Both qualitative and quantitative research methods are used to achieve the research objectives. A total of 164 respondents from local universities in Malaysia participated in a survey to collect the relevant information. Using an Ordered Probit model, the study shows that, among the relevant socio-economic factors, age appears to be an important influence on the willingness level of respondents. This paper concludes that the younger generation is more willing to pay slightly more for energy generated from RE sources than the older generation. One possible reason may be the younger generation's better access to information on RE issues and their positive implications for the world. The findings of this paper are useful for helping policy makers design RE advocacy programs that can secure public participation. These efforts are important to ensure the future success of the RE policy.

  12. Bivariate Probit Models for Analysing how “Knowledge” Affects Innovation and Performance in Small and Medium Sized Firms

    OpenAIRE

    FARACE, Salvatore; MAZZOTTA, Fernanda

    2011-01-01

    This paper examines the determinants of innovation and its effects on small- and medium-sized firms. We use data from the OPIS databank, which provides a survey of a representative sample of firms from a province in Southern Italy. We study whether small- and medium-sized firms can gain a competitive advantage from their innovative capabilities, regardless of their sectoral and size limits. The main factor influencing the likelihood of innovation is knowledge, which is acquired...

  13. Optimizing an objective function under a bivariate probability model

    NARCIS (Netherlands)

    X. Brusset; N.M. Temme (Nico)

    2007-01-01

    The motivation of this paper is to obtain an analytical closed form of a quadratic objective function arising from a stochastic decision process with bivariate exponential probability distribution functions that may be dependent. This method is applicable when results need to be

  14. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru

    Science.gov (United States)

    Arima, E. Y.

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 and 1.85 × 10⁶ tons. However, recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads. PMID:27010739
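
    The Bayesian probit estimation that such land-change models rely on is commonly implemented with the Albert-Chib data-augmentation Gibbs sampler: latent normal utilities are drawn truncated to the side implied by each binary outcome (deforested or not), then the coefficient is drawn from its conditional normal posterior. A minimal single-covariate sketch on simulated data follows (no spatial dependence; the covariate and true coefficient are hypothetical, not values from the article):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 800
x = rng.normal(size=n)                         # hypothetical covariate
beta_true = 1.2
y = (beta_true * x + rng.normal(size=n) > 0).astype(int)

xtx_inv = 1.0 / np.dot(x, x)                   # flat prior on beta
beta, draws = 0.0, []
for it in range(500):                          # Albert-Chib data augmentation
    mean = beta * x
    # latent z is N(mean, 1) truncated to z > 0 when y = 1, z < 0 when y = 0
    lo = np.where(y == 1, -mean, -np.inf)
    hi = np.where(y == 1, np.inf, -mean)
    z = mean + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    beta = rng.normal(xtx_inv * np.dot(x, z), np.sqrt(xtx_inv))
    if it >= 100:                              # discard burn-in
        draws.append(beta)

beta_hat = float(np.mean(draws))               # posterior mean near beta_true
```

    A spatial version would add a spatially correlated error or lag term to the latent utility; the augmentation step itself stays the same.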

  15. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru.

    Directory of Open Access Journals (Sweden)

    E Y Arima

    Full Text Available Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 and 1.85 × 10⁶ tons. However, recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads.

  16. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru.

    Science.gov (United States)

    Arima, E Y

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 and 1.85 × 10⁶ tons. However, recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads.

  17. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Science.gov (United States)

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or under-dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  18. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    Science.gov (United States)

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    Science.gov (United States)

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
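
    The bivariate Poisson structure behind the model above is usually built by trivariate reduction: the two observed counts share a common Poisson shock, so the third parameter λ3 directly controls their covariance. A quick simulation sketch (the intensities are hypothetical, not estimates from the AVC study):

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, lam3 = 2.0, 3.0, 1.5       # hypothetical intensities
n = 200_000
u1 = rng.poisson(lam1, n)
u2 = rng.poisson(lam2, n)
u3 = rng.poisson(lam3, n)              # shared shock inducing correlation
x1, x2 = u1 + u3, u2 + u3              # e.g. reported AVCs, carcass removals

cov12 = float(np.cov(x1, x2)[0, 1])    # theoretical covariance equals lam3
```

    The diagonal inflation in the paper adds extra mass on the x1 == x2 diagonal on top of this construction, to absorb the overlapping portions of the two data sets.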

  20. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    Science.gov (United States)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

    The Solo-Ngawi toll road project is part of the mega project of the Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), the Solo-Ngawi toll management company, needs to determine a toll fare in accordance with its business plan. Setting appropriate toll rates will support regional economic sustainability and reduce traffic congestion. Such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo artery road segment. The data were then analysed with logit and probit models. Based on the analysis, it is found that WTP in the binomial logit model is more sensitive to fare changes than in the probit model under the same travel conditions. The range of tariff change against values of WTP in the binomial logit model is 20% greater than the range of values in the probit model. On the other hand, the predicted probabilities of the binomial logit and binary probit models show no significant difference (less than 1%).
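
    The near-equivalence of binomial logit and binary probit probabilities reported above is easy to reproduce: with coefficients matched by the usual beta_probit ≈ beta_logit / 1.6 scaling, the two choice-probability curves differ by at most a couple of percentage points. A sketch with a hypothetical linear fare utility (not the study's estimated coefficients):

```python
import numpy as np
from scipy.stats import norm

fare = np.linspace(5, 25, 9)          # hypothetical fares (thousand rupiah)
v = 3.0 - 0.2 * fare                  # assumed linear utility of paying the toll

p_logit = 1.0 / (1.0 + np.exp(-v))    # binomial logit choice probability
p_probit = norm.cdf(v / 1.6)          # binary probit, scale-matched (beta/1.6)

max_gap = float(np.max(np.abs(p_logit - p_probit)))   # small over this range
```

    The logit's heavier tails make its WTP estimates respond more to fare changes, while the fitted probabilities themselves remain almost identical, consistent with the study's finding.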

  1. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
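
    The over-dispersion that motivates the beta-binomial models above can be seen from the moments: for the same mean, the beta-binomial variance exceeds the binomial variance because the success probability varies across studies. A sketch using scipy's beta-binomial distribution (the shape parameters are hypothetical, chosen only to illustrate the comparison):

```python
from scipy.stats import betabinom, binom

# Hypothetical within-study counts: n Bernoulli trials with mean probability p.
n, a, b = 50, 8.0, 2.0
p = a / (a + b)                        # mean success probability, here 0.8

mean_bb = betabinom.mean(n, a, b)      # equals n * p
var_bb = betabinom.var(n, a, b)        # inflated by between-study heterogeneity
var_bin = binom.var(n, p)              # plain binomial benchmark, no heterogeneity
```

    As a + b grows with a/(a + b) fixed, the heterogeneity shrinks and the beta-binomial variance approaches the binomial one.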

  2. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. Under over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to address over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, and hypothesis tests are carried out by the Maximum Likelihood Ratio Test (MLRT) method.

  3. Discrete bivariate population balance modelling of heteroaggregation processes.

    Science.gov (United States)

    Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai

    2009-08-15

    Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.

  4. PROBIT MODEL ANALYSIS OF SMALLHOLDER’S FARMERS DECISION TO USE AGROCHEMICAL INPUTS IN GWAGWALADA AND KUJE AREA COUNCILS OF FEDERAL CAPITAL TERRITORY, ABUJA, NIGERIA

    Directory of Open Access Journals (Sweden)

    Omotayo Olugbenga Alabi

    2014-01-01

    Full Text Available This study examined a Probit model analysis of smallholder farmers' decisions to use agrochemical inputs in the Gwagwalada and Kuje Area Councils of the Federal Capital Territory, Abuja, Nigeria. Primary data were used for this study and were obtained using a structured questionnaire administered to sixty smallholder farmers sampled using a two-stage sampling technique. The data were analyzed using descriptive statistics and a Probit model. Eight estimators in the Probit model (age, farm size, family size, education level, extension services, access to credit, off-farm income, and experience in farming) were found statistically significant. Results show that the probability of using agrochemical inputs increases with age, farm size, family size, education level, extension services, and experience in farming, but decreases with off-farm income and access to credit. McFadden's pseudo-R² is 0.6866, and the Probit model correctly classified 93% of cases. This study concluded that the capacity of agricultural extension agents in the study area needs to be improved to educate farmers to invest in agrochemicals and improved agricultural technologies. Also, the government needs to improve road networks and adopt appropriate policies to regulate the standards, use, safety requirements, and environmental aspects of agrochemical use in the study area.

  5. A bivariate model for analyzing recurrent multi-type automobile failures

    Science.gov (United States)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of the failure serve as response variables. However, these two response variables are highly correlated, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were the Weibull and multinomial distributions, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by specifying the appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by

  6. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    NARCIS (Netherlands)

    Koopman, S.J.; Lit, R.

    2015-01-01

    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results

  7. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    Science.gov (United States)

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and the excess zero frequency, a bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using a Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
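
    The zero inflation described above, donors who never return, contributes structural zeros on top of the ordinary Poisson zeros, so P(Y = 0) = pi + (1 - pi) * exp(-lambda). A univariate simulation sketch of that mechanism (pi and lambda are hypothetical, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
pi, lam = 0.4, 2.5                      # hypothetical: 40% never-return donors
n = 100_000
structural_zero = rng.random(n) < pi    # donors who never return
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

p_zero = float(np.mean(counts == 0))
p_zero_theory = pi + (1 - pi) * np.exp(-lam)   # ZIP zero probability
```

    The bivariate version in the paper couples two such counts and lets the zero-inflation and correlation parameters be estimated jointly via MCMC.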

  8. Exploring the Factors that Impact on Transit Use through an Ordered Probit Model: the Case of Metro of Madrid

    Energy Technology Data Exchange (ETDEWEB)

    Eboli, L.; Forciniti, C.; Mazzulla, G.; Calvo, F.

    2016-07-01

    The configuration of urban areas is the result of a cyclic relationship between land use and the transportation system: changes in transportation system arrangements influence the location of residences and economic activities, just as changes in land use affect transportation system characteristics. In this context, by operating on land use, travel demand can be shifted from individual transportation modes to transit systems. In the literature, many conceptual models have been proposed to describe the complex relationship between land use and travel behaviour. In addition to spatial variation, the study of travel demand also involves the categorical variation of variables. This work aims to analyse the influence of the categorical variation of variables impacting transit use. An ordered probit model is proposed for evaluating how transit use depends on variables related to the socio-economic characteristics of the population, territorial features, accessibility, and the transportation system. The case study is the Madrid metro network (Spain). The results show a strong influence of population characteristics and land use variables on daily trips made using the metro system and highlight the aspects that most affect the choice to travel by metro, providing useful suggestions for shifting people from individual transportation modes to transit systems. (Author)

  9. Bivariate functional data clustering: grouping streams based on a varying coefficient model of the stream water and air temperature relationship

    Science.gov (United States)

    H. Li; X. Deng; Andy Dolloff; E. P. Smith

    2015-01-01

    A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...

  10. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for BSA technique is proposed. Three popular BSA techniques such as frequency ratio, weights-of-evidence, and evidential belief function models are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and is created by a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
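
    Of the three BSA techniques in the tool above, the frequency ratio is the simplest: for each factor class, divide the share of hazard occurrences falling in that class by the share of total area it covers, so values above 1 indicate hazard-prone classes. A sketch on a synthetic raster (the class layout and hazard rates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix = 10_000
classes = rng.integers(0, 3, size=n_pix)            # e.g. three slope classes
# class 2 is made hazard-prone (30% event rate vs 10% elsewhere)
hazard = rng.random(n_pix) < np.where(classes == 2, 0.3, 0.1)

def frequency_ratio(classes, hazard, n_classes):
    fr = np.empty(n_classes)
    for c in range(n_classes):
        in_class = classes == c
        pct_hazard = hazard[in_class].sum() / hazard.sum()   # % of events
        pct_area = in_class.sum() / classes.size             # % of area
        fr[c] = pct_hazard / pct_area
    return fr

fr = frequency_ratio(classes, hazard, 3)   # FR > 1 flags hazard-prone classes
```

    Summing the per-class FR values over all conditioning factors at each pixel gives the susceptibility index whose AUC the tool evaluates.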

  11. A bivariate measurement error model for semicontinuous and continuous variables: Application to nutritional epidemiology.

    Science.gov (United States)

    Kipnis, Victor; Freedman, Laurence S; Carroll, Raymond J; Midthune, Douglas

    2016-03-01

    Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.

  12. A probit-log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by the occurrence of "excess" zeros that a single distribution cannot explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated-measures zero-inflated data, the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infections and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to the collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
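
    The probit/log-skew-normal mixture above is a two-part model: one component governs whether the weekly score is zero (symptom free) and a second, skewed component governs the positive scores. A simplified simulation sketch using a plain log-normal for the positive part in place of log-skew-normal (all coefficients are hypothetical, not the study's estimates):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 50_000
viral = rng.integers(0, 2, size=n)            # 1 = viral-positive week (assumed)

# Part 1 (probit): probability of any symptoms rises when viral positive.
p_sym = norm.cdf(-0.5 + 1.0 * viral)
has_sym = rng.random(n) < p_sym

# Part 2 (positive scores): log-normal severity, higher when viral positive.
score = np.where(has_sym,
                 np.exp(0.2 + 0.5 * viral + 0.8 * rng.normal(size=n)), 0.0)

frac_free_neg = float(np.mean(score[viral == 0] == 0))   # symptom-free share
frac_free_pos = float(np.mean(score[viral == 1] == 0))
```

    The full model additionally puts correlated random effects in both parts to capture the repeated weekly measurements on each child.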

  13. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are implemented in the newly proposed ArcMAP tool. The tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested by using the proposed program. The area under the curve (AUC) was used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
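
The frequency ratio, the simplest of the three BSA techniques listed above, compares a factor class's share of hazard occurrences with its share of the study area (FR > 1 indicates positive association with the hazard). A toy sketch with hypothetical raster values:

```python
import numpy as np

def frequency_ratio(factor_classes, hazard_mask):
    """Frequency ratio per class of a conditioning factor.

    factor_classes : integer class label per pixel
    hazard_mask    : boolean array, True where a hazard event occurred
    """
    n_total = factor_classes.size
    n_hazard = hazard_mask.sum()
    fr = {}
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_hazard = hazard_mask[in_class].sum() / n_hazard  # share of events
        pct_area = in_class.sum() / n_total                  # share of area
        fr[c] = pct_hazard / pct_area
    return fr

# toy raster: class 2 concentrates hazard events, so its FR exceeds 1
factor = np.array([1, 1, 1, 1, 2, 2, 2, 2])
hazard = np.array([0, 0, 0, 1, 1, 1, 1, 0], dtype=bool)
fr = frequency_ratio(factor, hazard)  # fr[1] = 0.5, fr[2] = 1.5
```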

  14. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing the gas within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to the top of the coal bed should be designed based on the potential gas sources and their locations, as well as the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from the coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
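
Conditional probabilities of the kind described follow in closed form from the bivariate normal conditional distribution: given X1 = x1, X2 is normal with mean mu2 + rho*(s2/s1)*(x1 - mu1) and standard deviation s2*sqrt(1 - rho^2). A minimal sketch with hypothetical parameter values, not those estimated in the study:

```python
from scipy.stats import norm

def conditional_tail_prob(x1, t, mu1, mu2, s1, s2, rho):
    """P(X2 > t | X1 = x1) for a bivariate normal with means mu1, mu2,
    standard deviations s1, s2 and correlation rho."""
    cond_mean = mu2 + rho * (s2 / s1) * (x1 - mu1)
    cond_sd = s2 * (1.0 - rho ** 2) ** 0.5
    return norm.sf(t, loc=cond_mean, scale=cond_sd)

# e.g. probability that displacement exceeds a threshold at a given depth
# (all numbers hypothetical)
p = conditional_tail_prob(x1=50.0, t=1.2, mu1=60.0, mu2=1.0, s1=20.0, s2=0.5, rho=-0.6)
```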

  15. Genetic correlations between body condition scores and fertility in dairy cattle using bivariate random regression models.

    Science.gov (United States)

    De Haas, Y; Janss, L L G; Kadarmideen, H N

    2007-10-01

    Genetic correlations between body condition score (BCS) and fertility traits in dairy cattle were estimated using bivariate random regression models. BCS was recorded by the Swiss Holstein Association on 22,075 lactating heifers (primiparous cows) from 856 sires. Fertility data during first lactation were extracted for 40,736 cows. The fertility traits were days to first service (DFS), days between first and last insemination (DFLI), calving interval (CI), number of services per conception (NSPC) and conception rate to first insemination (CRFI). A bivariate model was used to estimate genetic correlations between BCS as a longitudinal trait by random regression components, and daughter's fertility at the sire level as a single lactation measurement. Heritability of BCS was 0.17, and heritabilities for fertility traits were low (0.01-0.08). Genetic correlations between BCS and fertility over the lactation varied from -0.45 to -0.14 for DFS; from -0.75 to 0.03 for DFLI; from -0.59 to -0.02 for CI; from -0.47 to 0.33 for NSPC; and from 0.08 to 0.82 for CRFI. These results show (genetic) interactions between fat reserves and reproduction along the lactation trajectory of modern dairy cows, which can be useful in genetic selection as well as in management. Maximum genetic gain in fertility from indirect selection on BCS should be based on measurements taken in mid lactation, when the genetic variance for BCS is largest and the genetic correlations between BCS and fertility are strongest.

  16. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    Science.gov (United States)

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
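
The univariate zero-inflated Poisson building block mentioned above mixes a structural zero with an ordinary Poisson count; its pmf is easy to write down directly:

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson pmf: a structural zero with probability pi,
    otherwise a Poisson(lam) draw."""
    pmf = (1.0 - pi) * poisson.pmf(k, lam)
    return pmf + np.where(np.asarray(k) == 0, pi, 0.0)

k = np.arange(0, 200)
total = zip_pmf(k, pi=0.4, lam=3.0).sum()  # sums to 1 over the support
```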

  17. Bivariate spatial analysis of temperature and precipitation from general circulation models and observation proxies

    KAUST Repository

    Philbin, R.

    2015-05-22

    This study validates the near-surface temperature and precipitation output from decadal runs of eight atmospheric ocean general circulation models (AOGCMs) against observational proxy data from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis temperatures and Global Precipitation Climatology Project (GPCP) precipitation data. We model the joint distribution of these two fields with a parsimonious bivariate Matérn spatial covariance model, accounting for the two fields' spatial cross-correlation as well as their own smoothnesses. We fit output from each AOGCM (30-year seasonal averages from 1981 to 2010) to a statistical model on each of 21 land regions. Both variance and smoothness values agree for both fields over all latitude bands except southern mid-latitudes. Our results imply that temperature fields have smaller smoothness coefficients than precipitation fields, while both have decreasing smoothness coefficients with increasing latitude. Models predict fields with smaller smoothness coefficients than observational proxy data for the tropics. The estimated spatial cross-correlations of these two fields, however, are quite different for most GCMs in mid-latitudes. Model correlation estimates agree well with those for observational proxy data for Australia, at high northern latitudes across North America, Europe and Asia, as well as across the Sahara, India, and Southeast Asia, but elsewhere, little consistent agreement exists.
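
The Matérn covariance at the heart of the model can be evaluated directly from its standard univariate form, parameterized by variance, range, and smoothness (the bivariate version adds cross-covariance parameters not sketched here); nu = 0.5 recovers the exponential model:

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance at distances h (array), with variance sigma2,
    range rho and smoothness nu."""
    h = np.asarray(h, dtype=float)
    out = np.full(h.shape, sigma2)  # C(0) = sigma2
    pos = h > 0
    s = np.sqrt(2.0 * nu) * h[pos] / rho
    out[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return out

cov = matern_cov(np.array([0.0, 1.0]), sigma2=2.0, rho=1.5, nu=0.5)
```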

  19. Bivariate Gaussian bridges: directional factorization of diffusion in Brownian bridge models.

    Science.gov (United States)

    Kranstauber, Bart; Safi, Kamran; Bartumeus, Frederic

    2014-01-01

    In recent years high resolution animal tracking data has become the standard in movement ecology. The Brownian Bridge Movement Model (BBMM) is a widely adopted approach to describe animal space use from such high resolution tracks. One of the underlying assumptions of the BBMM is isotropic diffusive motion between consecutive locations, i.e. invariant with respect to the direction. Here we propose to relax this often unrealistic assumption by separating the Brownian motion variance into two directional components, one parallel and one orthogonal to the direction of the motion. Our new model, the Bivariate Gaussian bridge (BGB), tracks movement heterogeneity across time. Using the BGB and identifying directed and non-directed movement within a trajectory resulted in more accurate utilisation distributions compared to dynamic Brownian bridges, especially for trajectories with a non-isotropic diffusion, such as directed movement or Lévy like movements. We evaluated our model with simulated trajectories and observed tracks, demonstrating that the improvement of our model scales with the directional correlation of a correlated random walk. We find that many of the animal trajectories do not adhere to the assumptions of the BBMM. The proposed model improves accuracy when describing the space use both in simulated correlated random walks as well as observed animal tracks. Our novel approach is implemented and available within the "move" package for R.
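
The central idea, separating step variability into components parallel and orthogonal to the direction of motion, can be illustrated with a simple empirical decomposition (an illustrative sketch, not the BGB estimator itself):

```python
import numpy as np

def directional_variances(track):
    """Split step variability into components parallel and orthogonal to
    the local heading; track is an (n, 2) array of positions."""
    steps = np.diff(track, axis=0)
    unit = steps[:-1] / np.linalg.norm(steps[:-1], axis=1, keepdims=True)
    nxt = steps[1:]
    parallel = np.einsum('ij,ij->i', nxt, unit)       # component along motion
    perp = unit[:, ::-1] * np.array([1.0, -1.0])      # heading rotated 90 degrees
    ortho = np.einsum('ij,ij->i', nxt, perp)          # component across motion
    return parallel.var(), ortho.var()

# directed movement along x with noise in y: orthogonal variance dominates
rng = np.random.default_rng(0)
x = np.arange(500, dtype=float)
y = rng.normal(scale=0.2, size=500)
vp, vo = directional_variances(np.column_stack([x, y]))
```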

  20. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate Poisson regression

    Science.gov (United States)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the bacterium Mycobacterium leprae. Leprosy has become an important public health concern in Indonesia because its morbidity is quite high. According to 2014 WHO data, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with 18,994 people (8.7% of the world total). This makes Indonesia the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling the numbers of multibacillary and pausibacillary leprosy patients as response variables. Because these responses are count variables, modeling is conducted using bivariate Poisson regression. The experimental units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
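
A bivariate Poisson distribution of the kind used here is commonly constructed by trivariate reduction, X = U1 + U3 and Y = U2 + U3 with independent Poisson components, which induces covariance l3 between the two counts. A minimal pmf sketch:

```python
from math import comb, factorial, exp

def bivariate_poisson_pmf(x, y, l1, l2, l3):
    """Bivariate Poisson pmf via trivariate reduction; the marginals are
    Poisson(l1 + l3) and Poisson(l2 + l3), and Cov(X, Y) = l3."""
    base = exp(-(l1 + l2 + l3)) * l1 ** x / factorial(x) * l2 ** y / factorial(y)
    s = sum(comb(x, k) * comb(y, k) * factorial(k) * (l3 / (l1 * l2)) ** k
            for k in range(min(x, y) + 1))
    return base * s

total = sum(bivariate_poisson_pmf(x, y, 1.0, 2.0, 0.5)
            for x in range(40) for y in range(40))  # ~1 over the grid
```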

  1. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    Science.gov (United States)

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.
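
The Emax dose-effect curve referred to above has a simple closed form; with Hill coefficient n = 1 it reduces to the basic Emax model, and the effect at dose EC50 is the baseline plus half the maximum effect:

```python
def emax(dose, e0, emax_, ec50, n=1.0):
    """Sigmoid Emax dose-effect curve: baseline e0, maximum effect emax_,
    half-maximal dose ec50, Hill coefficient n."""
    return e0 + emax_ * dose ** n / (ec50 ** n + dose ** n)

effect = emax(5.0, e0=1.0, emax_=10.0, ec50=5.0)  # = 1 + 10/2 = 6
```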

  2. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    Science.gov (United States)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓σ) for both
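
Case (O), genuine orthogonal regression, has a closed-form slope in terms of the sample variances and covariance; a minimal unweighted sketch (appropriate when the errors in X and Y are comparable and the two variables share the same units):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Slope and intercept minimizing summed squared perpendicular distances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    slope = (syy - sxx + np.hypot(syy - sxx, 2.0 * sxy)) / (2.0 * sxy)
    return slope, y.mean() - slope * x.mean()

# exact points on y = 2x + 1 recover slope 2 and intercept 1
slope, intercept = orthogonal_regression([0, 1, 2, 3], [1, 3, 5, 7])
```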

  3. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  4. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    Science.gov (United States)

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis for diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study, which compares three copula models and two implementations of the standard model, the Plackett and the Gauss copula do rarely perform worse but frequently better than the standard model. We use an example from a meta-analysis to judge the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer for illustration. Copyright © 2013 John Wiley & Sons, Ltd.
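
The Plackett copula evaluated in the simulation study has a closed-form CDF indexed by a single odds-ratio-like parameter; theta = 1 gives independence, and larger theta gives stronger positive association:

```python
import numpy as np

def plackett_copula(u, v, theta):
    """Plackett copula CDF C(u, v; theta), theta > 0."""
    if abs(theta - 1.0) < 1e-12:
        return u * v  # independence limit
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
    return (s - np.sqrt(disc)) / (2.0 * (theta - 1.0))
```

Boundary behaviour is easy to check: C(u, 1) = u for any theta, and positive dependence raises C(u, v) above the independence value uv.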

  5. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)

  6. Predicting Lung Radiotherapy-Induced Pneumonitis Using a Model Combining Parametric Lyman Probit With Nonparametric Decision Trees

    International Nuclear Information System (INIS)

    Das, Shiva K.; Zhou Sumin; Zhang, Junan; Yin, F.-F.; Dewhirst, Mark W.; Marks, Lawrence B.

    2007-01-01

    Purpose: To develop and test a model to predict lung radiation-induced Grade 2+ pneumonitis. Methods and Materials: The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a 'boosting' process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. Results: The area under the model receiver operating characteristic curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, and female gender, and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Conclusions: Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation.
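
The parametric LNTCP component is the classical Lyman probit model: the complication probability is the standard normal CDF of a dose deviate built from TD50 (the dose giving a 50% complication probability) and the slope parameter m. A minimal sketch with hypothetical parameter values:

```python
from scipy.stats import norm

def lyman_ntcp(eud, td50, m):
    """Lyman normal tissue complication probability as a probit function
    of the (generalized) equivalent uniform dose."""
    t = (eud - td50) / (m * td50)
    return norm.cdf(t)

# at eud = td50 the predicted complication probability is exactly 0.5
p = lyman_ntcp(eud=24.5, td50=24.5, m=0.18)
```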

  7. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    International Nuclear Information System (INIS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-01-01

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Models for the propagation probability and threshold values of overpressure-driven domino effects have been proposed in previous studies. To assess the rationality and validity of the previously reported models, the two boundary values separating the three reported damage degrees were treated as random variables on the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage state, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated for the random boundary values, yielding a relationship between mean square error and the two boundary values. At its minimum, the mean square error decreases by about 3% compared with the previously reported result. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.

  9. Ordinal bivariate inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2x2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.

  11. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  13. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  14. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    Science.gov (United States)

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and current shortage of, multivariate analysis of natural disasters and presents a method to estimate the joint probability distribution, derive return periods, and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate dust storm definition, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
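
    The joint return period calculation described above can be sketched with a Gumbel copula, a common choice for upper-tail-dependent hydrometeorological extremes; the abstract does not name the copula family, so that choice (and the numbers below) are assumptions for illustration.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v); theta = 1 is independence, larger theta
    gives stronger upper-tail dependence."""
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_return_periods(u, v, theta, mu=1.0):
    """'AND' and 'OR' joint return periods for a bivariate event, given the
    marginal non-exceedance probabilities u = F_X(x), v = F_Y(y) and the
    mean interarrival time mu between events."""
    c = gumbel_copula(u, v, theta)
    p_and = 1.0 - u - v + c   # P(X > x and Y > y): both thresholds exceeded
    p_or = 1.0 - c            # P(X > x or Y > y): either threshold exceeded
    return mu / p_and, mu / p_or

# 90th-percentile thresholds on both margins, moderate dependence (assumed)
t_and, t_or = joint_return_periods(0.9, 0.9, theta=2.0)
```

    The "AND" return period (both wind speed and duration exceed their thresholds) is always at least as long as the "OR" return period, which is why univariate return periods can misstate the rarity of a compound event.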

  15. Cost-offsets of prescription drug expenditures: data analysis via a copula-based bivariate dynamic hurdle model.

    Science.gov (United States)

    Deb, Partha; Trivedi, Pravin K; Zimmer, David M

    2014-10-01

    In this paper, we estimate a copula-based bivariate dynamic hurdle model of prescription drug and nondrug expenditures to test the cost-offset hypothesis, which posits that increased expenditures on prescription drugs are offset by reductions in other nondrug expenditures. We apply the proposed methodology to data from the Medical Expenditure Panel Survey, which have the following features: (i) the observed bivariate outcomes are a mixture of zeros and continuously measured positives; (ii) both the zero and positive outcomes show state dependence and inter-temporal interdependence; and (iii) the zeros and the positives display contemporaneous association. The point mass at zero is accommodated using a hurdle or a two-part approach. The copula-based approach to generating joint distributions is appealing because the contemporaneous association involves asymmetric dependence. The paper studies samples categorized by four health conditions: arthritis, diabetes, heart disease, and mental illness. There is evidence of greater than dollar-for-dollar cost-offsets of expenditures on prescribed drugs for relatively low levels of spending on drugs and less than dollar-for-dollar cost-offsets at higher levels of drug expenditures. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Improving the modelling of redshift-space distortions - I. A bivariate Gaussian description for the galaxy pairwise velocity distributions

    Science.gov (United States)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2015-01-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation r, such a function can be described as a superposition of a virtually infinite number of local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and dispersion σ. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and non-linear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation as observed in simulations and data. Also, the recently proposed single-Gaussian description of RSD is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. We also show how this description naturally allows for the Taylor expansion of 1 + ξS(s) around 1 + ξR(r), which leads to the Kaiser linear formula when truncated to second order, explicating its connection with the moments of the velocity distribution functions. More work is needed, but these results indicate a very promising path to make definitive progress in our programme to improve RSD estimators.
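
    A minimal numerical sketch of the superposition idea: average local Gaussians over draws of their mean and dispersion. To keep σ positive, the sketch draws (μ, log σ) from a bivariate Gaussian, a slight reparameterization of the (μ, σ) Gaussian assumed in the paper; the mean and covariance values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_velocity_pdf(v, mean, cov, n_draws=20000):
    """Monte Carlo estimate of P(v) = <N(v; mu, sigma^2)>, averaging local
    Gaussians whose parameters (mu, log sigma) are drawn from a bivariate
    Gaussian with the given mean and covariance (log sigma keeps the
    dispersion positive; this is an assumed reparameterization)."""
    mu, log_sigma = rng.multivariate_normal(mean, cov, size=n_draws).T
    sigma = np.exp(log_sigma)
    v = np.atleast_1d(v)[:, None]
    local = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return local.mean(axis=1)

v = np.linspace(-25.0, 25.0, 1001)
pdf = pairwise_velocity_pdf(v, mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 0.1]])
```

    Depending on the correlation between the local mean and dispersion, the resulting superposition is skewed or symmetric, mirroring the Gaussian/exponential shapes the paper reports at different separations.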

  17. Modeling Bivariate Change in Individual Differences: Prospective Associations Between Personality and Life Satisfaction.

    Science.gov (United States)

    Hounkpatin, Hilda Osafo; Boyce, Christopher J; Dunn, Graham; Wood, Alex M

    2017-09-18

    A number of structural equation models have been developed to examine change in 1 variable or the longitudinal association between 2 variables. The most common of these are the latent growth model, the autoregressive cross-lagged model, the autoregressive latent trajectory model, and the latent change score model. The authors first overview each of these models through evaluating their different assumptions surrounding the nature of change and how these assumptions may result in different data interpretations. They then, to elucidate these issues in an empirical example, examine the longitudinal association between personality traits and life satisfaction. In a representative Dutch sample (N = 8,320), with participants providing data on both personality and life satisfaction measures every 2 years over an 8-year period, the authors reproduce findings from previous research. However, some of the structural equation models overviewed have not previously been applied to the personality-life satisfaction relation. The extended empirical examination suggests intraindividual changes in life satisfaction predict subsequent intraindividual changes in personality traits. The availability of data sets with 3 or more assessment waves allows the application of more advanced structural equation models such as the autoregressive latent trajectory or the extended latent change score model, which accounts for the complex dynamic nature of change processes and allows stronger inferences on the nature of the association between variables. However, the choice of model should be determined by theories of change processes in the variables being studied. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  19. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  20. Testing for a Common Volatility Process and Information Spillovers in Bivariate Financial Time Series Models

    NARCIS (Netherlands)

    J. Chen (Jinghui); M. Kobayashi (Masahito); M.J. McAleer (Michael)

    2016-01-01

    The paper considers the problem of whether financial returns have a common volatility process in the framework of the stochastic volatility models suggested by Harvey et al. (1994). We propose a stochastic volatility version of the ARCH test proposed by Engle and Susmel (1993),

  1. Testing for Volatility Co-movement in Bivariate Stochastic Volatility Models

    NARCIS (Netherlands)

    J. Chen (Jinghui); M. Kobayashi (Masahito); M.J. McAleer (Michael)

    2017-01-01

    The paper considers the problem of volatility co-movement, namely whether two financial returns have a perfectly correlated common volatility process, in the framework of multivariate stochastic volatility models, and proposes a test which checks the volatility co-movement. The

  2. Testing for Volatility Co-movement in Bivariate Stochastic Volatility Models

    OpenAIRE

    Chen, Jinghui; Kobayashi, Masahito; McAleer, Michael

    2017-01-01

    The paper considers the problem of volatility co-movement, namely whether two financial returns have a perfectly correlated common volatility process, in the framework of multivariate stochastic volatility models, and proposes a test which checks the volatility co-movement. The proposed test is a stochastic volatility version of the co-movement test proposed by Engle and Susmel (1993), who investigated whether international equity markets have volatility co-movement using t...

  3. Bivariate autoregressive state-space modeling of psychophysiological time series data.

    Science.gov (United States)

    Smith, Daniel M; Abtahi, Mohammadreza; Amiri, Amir Mohammad; Mankodiya, Kunal

    2016-08-01

    Heart rate (HR) and electrodermal activity (EDA) are often used as physiological measures of psychological arousal in various neuropsychology experiments. In this exploratory study, we analyze HR and EDA data collected from four participants, each with a history of suicidal tendencies, during a cognitive task known as the Paced Auditory Serial Addition Test (PASAT). A central aim of this investigation is to guide future research by assessing heterogeneity in the population of individuals with suicidal tendencies. Using a state-space modeling approach to time series analysis, we evaluate the effect of an exogenous input, i.e., the stimulus presentation rate which was increased systematically during the experimental task. Participants differed in several parameters characterizing the way in which psychological arousal was experienced during the task. Increasing the stimulus presentation rate was associated with an increase in EDA in participants 2 and 4. The effect on HR was positive for participant 2 and negative for participants 3 and 4. We discuss future directions in light of the heterogeneity in the population indicated by these findings.
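
    A bivariate linear state-space model with an exogenous input, of the general kind described here, can be simulated in a few lines; the dynamics matrix, input gains, and noise level below are invented for illustration, not estimated from the study's data.

```python
import numpy as np

def simulate_bivariate_ar1(n_steps, A, B, u, noise_sd, x0, seed=0):
    """Simulate a 2-d linear state-space process x_t = A x_{t-1} + B u_t + w_t,
    e.g., HR and EDA driven by a stimulus-presentation-rate input u_t.
    All parameter values used below are invented for illustration."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps, 2))
    x[0] = x0
    for t in range(1, n_steps):
        w = noise_sd * rng.standard_normal(2)
        x[t] = A @ x[t - 1] + B * u[t] + w
    return x

A = np.array([[0.8, 0.1], [0.05, 0.9]])  # stable, cross-coupled dynamics
B = np.array([0.5, 0.2])                 # input gains for the two channels
u = np.linspace(0.0, 1.0, 200)           # systematically increasing rate
x = simulate_bivariate_ar1(200, A, B, u, noise_sd=0.1, x0=np.zeros(2))
```

    Positive entries in B reproduce the pattern reported for some participants (arousal rising with stimulus rate); a negative gain would reproduce the HR decrease seen in others.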

  4. Application of Multinomial Probit Model in Analyzing Factors Affecting the Occupation of Graduated Students from the University of Agricultural Applied-Science

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2016-03-01

    Introduction: Scientific and practical training with an emphasis on the operation and application of what is taught, together with an empirical approach to education, is a more suitable approach for creating jobs. Preparation of the educational needs of the agricultural sector through scientific and practical training, and providing employment in agreement with education and skills, is one of the most important programs for achieving the objectives of comprehensive development of the country. An imbalance seems to exist between the processes and materials in university courses and the skills and abilities needed by the labor market, and this is the most important reason for the failure of university graduates in finding employment. This study was done to understand the types of jobs held by agricultural graduates of the training center of Jihad-e-Keshavarzi in Mashhad and the factors affecting their employment. Materials and Methods: This study is an applied research; the statistical population is 167 and includes all the students who had earned a Bachelor's degree and who came to receive their graduation certificates in 2011. The dependent variable is type of job, which includes five categories: employment in the public sector related to education, employment unrelated to the government, employment related to the private sector, and the unemployed who were seeking work in the private sector. Independent variables include gender, quota in university admissions, level of interest in the field of study, satisfaction with the discipline, graduates' evaluation of the vocational skills acquired in college, graduates' assessment of the work culture in society, and evaluation of lack of capital as a factor preventing employment in the academic field. Information was collected through questionnaires and the multinomial probit model was used. Results and Discussion: The results of the survey show that jobs of graduates are divided into four categories including: related to the field of study and

  5. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    Science.gov (United States)

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
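
    To make the threshold-wise pooling concrete, the sketch below computes per-study sensitivity and specificity at a threshold from IPD and pools study-level values on the logit scale. This is a deliberately simplified fixed-effect calculation; the bivariate approach in the paper additionally models between-study random effects and the sensitivity-specificity correlation.

```python
import math

def sens_spec(scores, truth, threshold):
    """Sensitivity and specificity of the rule 'score >= threshold' against
    a binary reference standard (1 = condition present)."""
    tp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 1)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 1)
    tn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 0)
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

def pooled_logit(values, weights):
    """Weighted mean of study-level proportions on the logit scale,
    back-transformed (fixed-effect simplification)."""
    z = sum(w * math.log(p / (1.0 - p)) for p, w in zip(values, weights))
    z /= sum(weights)
    return 1.0 / (1.0 + math.exp(-z))

# Pool two hypothetical study-level sensitivities at one threshold
pooled_sens = pooled_logit([0.8, 0.9], [1.0, 1.0])
```

    Repeating this at every candidate threshold is what IPD makes possible, in contrast to published-summary meta-analyses restricted to the thresholds each study chose to report.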

  6. The MDI Method as a Generalization of Logit, Probit and Hendry Analyses in Marketing.

    Science.gov (United States)

    1980-04-01

    model involves nothing more than fitting a normal distribution function (Hanushek and Jackson (1977)). For a given value of x, the probit model ... preference shifts within the soft drink category. -- For applications of probit models relevant for marketing, see Hausman and Wise (1978) and Hanushek and ... Marketing Research," JMR XIV, Feb. (1977). Hanushek, E.A., and J.E. Jackson, Statistical Methods for Social Scientists. Academic Press, New York (1977

  7. Bivariate threshold models for genetic evaluation of susceptibility to and ability to recover from mastitis in Danish Holstein cows.

    Science.gov (United States)

    Welderufael, B G; Janss, L L G; de Koning, D J; Sørensen, L P; Løvendahl, P; Fikse, W F

    2017-06-01

    Mastitis in dairy cows is an unavoidable problem and genetic variation in recovery from mastitis, in addition to susceptibility, is therefore of interest. Genetic parameters for susceptibility to and recovery from mastitis were estimated for Danish Holstein-Friesian cows using data from automatic milking systems equipped with online somatic cell count measuring units. The somatic cell count measurements were converted to elevated mastitis risk, a continuous variable [on a (0-1) scale] indicating the risk of mastitis. Risk values >0.6 were assumed to indicate that a cow had mastitis. For each cow and lactation, the sequence of health states (mastitic or healthy) was converted to a weekly transition: 0 if the cow stayed within the same state and 1 if the cow changed state. The result was 2 series of transitions: one for healthy to diseased (HD, to model mastitis susceptibility) and the other for diseased to healthy (DH, to model recovery ability). The 2 series of transitions were analyzed with bivariate threshold models, including several systematic effects and a function of time. The model included effects of herd, parity, herd-test-week, permanent environment (to account for the repetitive nature of transition records from a cow) plus two time-varying effects (lactation stage and time within episode). In early lactation, there was an increased risk of getting mastitis but the risk remained stable afterwards. Mean recovery rate was 45% per lactation. Heritabilities were 0.07 [posterior mean of standard deviations (PSD) = 0.03] for HD and 0.08 (PSD = 0.03) for DH. The genetic correlation between HD and DH had a posterior mean of -0.83 (PSD = 0.13). Although susceptibility and recovery from mastitis are strongly negatively correlated, recovery can be considered as a new trait for selection. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®.
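
    The conversion of a weekly health-state sequence into the two transition series (HD for susceptibility, DH for recovery) can be sketched directly from the description above:

```python
def transition_series(states):
    """Split a weekly health-state sequence ('H' healthy, 'D' diseased) into
    the two transition series of the bivariate model: HD is recorded while
    healthy (1 if the cow turns diseased next week, else 0) and DH while
    diseased (1 if the cow recovers next week, else 0)."""
    hd, dh = [], []
    for now, nxt in zip(states, states[1:]):
        if now == 'H':
            hd.append(1 if nxt == 'D' else 0)
        else:
            dh.append(1 if nxt == 'H' else 0)
    return hd, dh

# One mastitis episode followed by recovery
hd, dh = transition_series("HHHDDHH")
```

    Each week thus contributes to exactly one of the two series, which is what lets susceptibility and recovery be modeled as distinct, genetically correlated traits.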

  8. A Bivariate Mixture Model for Natural Antibody Levels to Human Papillomavirus Types 16 and 18: Baseline Estimates for Monitoring the Herd Effects of Immunization.

    Directory of Open Access Journals (Sweden)

    Margaretha A Vink

    Post-vaccine monitoring programs for human papillomavirus (HPV) have been introduced in many countries, but HPV serology is still an underutilized tool, partly owing to the weak antibody response to HPV infection. Changes in antibody levels among non-vaccinated individuals could be employed to monitor herd effects of immunization against HPV vaccine types 16 and 18, but inference requires an appropriate statistical model. The authors developed a four-component bivariate mixture model for jointly estimating vaccine-type seroprevalence from correlated antibody responses against HPV16 and -18 infections. This model takes account of the correlation between HPV16 and -18 antibody concentrations within subjects, caused e.g. by heterogeneity in exposure level and immune response. The model was fitted to HPV16 and -18 antibody concentrations as measured by a multiplex immunoassay in a large serological survey (3,875 females) carried out in the Netherlands in 2006/2007, before the introduction of mass immunization. Parameters were estimated by Bayesian analysis. We used the deviance information criterion for model selection; performance of the preferred model was assessed through simulation. Our analysis uncovered elevated antibody concentrations in doubly as compared to singly seropositive individuals, and a strong clustering of HPV16 and -18 seropositivity, particularly around the age of sexual debut. The bivariate model resulted in a more reliable classification of singly and doubly seropositive individuals than achieved by a combination of two univariate models, and suggested a higher pre-vaccine HPV16 seroprevalence than previously estimated. The bivariate mixture model provides valuable baseline estimates of vaccine-type seroprevalence and may prove useful in seroepidemiologic assessment of the herd effects of HPV vaccination.
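
    A rough sketch of how a four-component bivariate mixture classifies a pair of antibody concentrations: compute each component's weighted density and normalize. The weights, means, and covariances below are illustrative placeholders, not the paper's Bayesian estimates.

```python
import numpy as np

def bivariate_normal_pdf(x, mean, cov):
    """Density of a 2-d normal distribution at point x."""
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / (
        2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

def responsibilities(x, weights, means, covs):
    """Posterior class probabilities for one observation under a
    four-component bivariate normal mixture (classes: doubly seronegative,
    HPV16-only, HPV18-only, doubly seropositive)."""
    p = np.array([w * bivariate_normal_pdf(x, m, c)
                  for w, m, c in zip(weights, means, covs)])
    return p / p.sum()

weights = [0.70, 0.10, 0.10, 0.10]               # illustrative class fractions
means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]),
         np.array([0.0, 2.0]), np.array([2.5, 2.5])]
covs = [np.eye(2)] * 4
r = responsibilities(np.array([2.5, 2.5]), weights, means, covs)
```

    Because the doubly seropositive component has its own mean and covariance, a high-high observation is assigned to it rather than split between the two singly seropositive classes, which is the advantage over combining two univariate models.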

  9. The use of bivariate spatial modeling of questionnaire and parasitology data to predict the distribution of Schistosoma haematobium in Coastal Kenya.

    Directory of Open Access Journals (Sweden)

    Hugh J W Sturrock

    Questionnaires of reported blood in urine (BIU) distributed through the existing school system provide a rapid and reliable method to classify schools according to the prevalence of Schistosoma haematobium, thereby helping in the targeting of schistosomiasis control. However, not all schools return questionnaires and it is unclear whether treatment is warranted in such schools. This study investigates the use of bivariate spatial modelling of available and multiple data sources to predict the prevalence of S. haematobium at every school along the Kenyan coast. Data from a questionnaire survey conducted by the Kenya Ministry of Education in Coast Province in 2009 were combined with available parasitological and environmental data in a Bayesian bivariate spatial model. This modeled the relationship between BIU data and environmental covariates, as well as the relationship between BIU and S. haematobium infection prevalence, to predict S. haematobium infection prevalence at all schools in the study region. Validation procedures were implemented to assess the predictive accuracy of endemicity classification. The prevalence of BIU was negatively correlated with distance to nearest river and there was considerable residual spatial correlation at small (~15 km) spatial scales. There was a predictable relationship between the prevalence of reported BIU and S. haematobium infection. The final model exhibited excellent sensitivity (0.94) but moderate specificity (0.69) in identifying low (<10% prevalence) schools, and had poor performance in differentiating between moderate and high prevalence schools (sensitivity 0.5, specificity 1). Schistosomiasis is highly focal and there is a need to target treatment on a school-by-school basis. The use of bivariate spatial modelling can supplement questionnaire data to identify schools requiring mass treatment, but is unable to distinguish between moderate and high prevalence schools.

  10. A Hybrid Forecasting Model Based on Bivariate Division and a Backpropagation Artificial Neural Network Optimized by Chaos Particle Swarm Optimization for Day-Ahead Electricity Price

    Directory of Open Access Journals (Sweden)

    Zhilong Wang

    2014-01-01

    In the electricity market, the electricity price plays a pivotal role. Nevertheless, accurate price forecasting, a vital factor affecting both government regulatory agencies and public power companies, remains a huge challenge and a critical problem. Determining how to address the accurate forecasting problem becomes an even more significant task in an era in which electricity is increasingly important. Based on the chaos particle swarm optimization (CPSO), the backpropagation artificial neural network (BPANN), and the idea of bivariate division, this paper proposes a bivariate division BPANN (BD-BPANN) method and the CPSO-BD-BPANN method for forecasting electricity price. The former method creatively transforms the electricity demand and price into a new variable, named DV, which is calculated using the division principle, to forecast the day-ahead electricity price by multiplying the forecasted values of the DVs and forecasted values of the demand. Next, to improve the accuracy of BD-BPANN, chaos particle swarm optimization and BD-BPANN are synthesized to form a novel model, CPSO-BD-BPANN. In this study, CPSO is utilized to optimize the initial parameters of BD-BPANN to make its output more stable than the original model. Finally, two forecasting strategies are proposed for different situations.
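
    The division principle itself is simple to state in code, assuming DV is price divided by demand (an assumption consistent with recombining forecasts by multiplication, though the abstract does not spell the formula out):

```python
def make_dv_series(prices, demands):
    """Transform the joint (price, demand) history into the DV series the
    network is trained on, assuming DV = price / demand."""
    return [p / d for p, d in zip(prices, demands)]

def recombine_price(dv_forecast, demand_forecast):
    """Day-ahead price forecast = forecasted DV x forecasted demand."""
    return dv_forecast * demand_forecast
```

    The network then learns the (typically smoother) DV series rather than the raw price, and the separately forecasted demand restores the price scale.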

  11. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics
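
    Under the simplifying assumption that disease visibility in the two conditions is independent, the four peaks of the diseased-case mixture and their weights follow directly from the per-condition mixing fractions. (CORCBM itself allows the visibility indicators to be correlated, so this is only an illustrative sketch.)

```python
def cbm_diseased_mixture(alpha1, alpha2, mu1, mu2):
    """Peak locations and weights of the diseased-case rating distribution
    for two reading conditions, assuming visibility is independent across
    conditions; alpha_i is the fraction of diseased cases visible in
    condition i, and mu_i the corresponding peak location."""
    return [
        ((0.0, 0.0), (1 - alpha1) * (1 - alpha2)),  # visible in neither
        ((mu1, 0.0), alpha1 * (1 - alpha2)),        # visible in condition 1 only
        ((0.0, mu2), (1 - alpha1) * alpha2),        # visible in condition 2 only
        ((mu1, mu2), alpha1 * alpha2),              # visible in both
    ]

# Hypothetical parameter values for illustration
peaks = cbm_diseased_mixture(0.7, 0.6, 2.0, 1.8)
```

    The zero-centered peak is what lets CBM fit degenerate datasets: cases where disease is invisible behave exactly like nondiseased cases in both conditions.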

  12. Socio-Economic Factors Affecting Adoption of Modern Information and Communication Technology by Farmers in India: Analysis Using Multivariate Probit Model

    Science.gov (United States)

    Mittal, Surabhi; Mehar, Mamta

    2016-01-01

    Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmers use, with the empirical model to analyze the factors that affect the…

  13. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Science.gov (United States)

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models by analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependencies and non-normality in residual terms. The models are estimated using 2013 Virginia police reported two-vehicle head-on collision data, where exactly one driver is at-fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas, in 4% of the cases, the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects injury severity of at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on at-fault injury outcome. Contrarily, and importantly, the effect of at-fault vehicle speed on injury severity of not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on injury outcome of not-at-fault driver. Compared to traditional ordered probability

  14. Evolution of association between renal and liver functions while awaiting heart transplant: An application using a bivariate multiphase nonlinear mixed effects model.

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John

    2018-07-01

    In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to evolution of association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.

  15. The intermediate endpoint effect in logistic and probit regression

    Science.gov (United States)

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on difference in coefficient methods can lead to distorted
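
    The product-of-coefficients approach recommended above can be sketched for a probit outcome. Everything below (coefficients, sample size, data) is an illustrative assumption, not values from the study; the point is the mechanics of estimating a*b when the final endpoint is binary:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # independent variable
a_true, b_true = 0.5, 0.7                    # assumed path coefficients
m = a_true * x + rng.normal(size=n)          # intermediate endpoint (mediator)
y = (b_true * m + 0.2 * x + rng.normal(size=n) > 0).astype(float)  # binary endpoint

def probit_nll(beta, X, y):
    """Negative log-likelihood of a probit regression."""
    xb = X @ beta
    return -(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb)).sum()

# b: probit coefficient on the mediator, adjusting for x
X = np.column_stack([np.ones(n), x, m])
b_hat = minimize(probit_nll, np.zeros(3), args=(X, y), method="BFGS").x[2]

# a: OLS slope of the mediator on x
a_hat = np.polyfit(x, m, 1)[0]

mediated_effect = a_hat * b_hat              # product-of-coefficients estimate
```

    Because the latent error here is standard normal, the probit coefficients are identified on the same scale as the data-generating values, so `mediated_effect` should recover roughly `a_true * b_true`.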

  16. An Affine Invariant Bivariate Version of the Sign Test.

    Science.gov (United States)

    1987-06-01

    Keywords: affine invariance, bivariate quantile, bivariate symmetry model, generalized median, influence function, permutation test, normal efficiency. ... We calculate a bivariate version of the influence function; the resulting form is bounded, as is the case for the univariate sign test, and shows the ... terms of a bivariate analogue of Hampel's (1974) influence function. The latter, though usually defined as a von Mises derivative of certain

  17. Heterogeneous Impact of the “Seguro Popular” Program on the Utilization of Obstetrical Services in Mexico, 2001–2006: A Multinomial Probit Model with a Discrete Endogenous Variable

    Science.gov (United States)

    Sosa-Rubi, Sandra G.; Galárraga, Omar

    2009-01-01

    Objective We evaluated the impact of Seguro Popular (SP), a program introduced in 2001 in Mexico primarily to finance health care for the poor. We focused on the effect of household enrollment in SP on pregnant women’s access to obstetrical services, an important outcome measure of both maternal and infant health. Data We relied upon data from the cross-sectional 2006 National Health and Nutrition Survey (ENSANUT) in Mexico. We analyzed the responses of 3,890 women who delivered babies during 2001–2006 and whose households lacked employer-based health care coverage. Methods We formulated a multinomial probit model that distinguished between three mutually exclusive sites for delivering a baby: a health unit specifically accredited by SP; a non-SP-accredited clinic run by the Department of Health (Secretaría de Salud, or SSA); and private obstetrical care. Our model accounted for the endogeneity of the household’s binary decision to enroll in the SP program. Results Women in households that participated in the SP program had a much stronger preference for having a baby in a SP-sponsored unit rather than paying out of pocket for a private delivery. At the same time, participation in SP was associated with a stronger preference for delivering in the private sector rather than at a state-run SSA clinic. On balance, the Seguro Popular program reduced pregnant women’s attendance at an SSA clinic much more than it reduced the probability of delivering a baby in the private sector. The quantitative impact of the SP program varied with the woman’s education and health, as well as the assets and location (rural versus urban) of the household. Conclusions The SP program had a robust, significantly positive impact on access to obstetrical services. Our finding that women enrolled in SP switched from non-SP state-run facilities, rather than from out-of-pocket private services, is important for public policy and requires further exploration. PMID:18824268

  18. Bivariate Left-Censored Bayesian Model for Predicting Exposure: Preliminary Analysis of Worker Exposure during the Deepwater Horizon Oil Spill.

    Science.gov (United States)

    Groth, Caroline; Banerjee, Sudipto; Ramachandran, Gurumurthy; Stenzel, Mark R; Sandler, Dale P; Blair, Aaron; Engel, Lawrence S; Kwok, Richard K; Stewart, Patricia A

    2017-01-01

    In April 2010, the Deepwater Horizon oil rig caught fire and exploded, releasing almost 5 million barrels of oil into the Gulf of Mexico over the ensuing 3 months. Thousands of oil spill workers participated in the spill response and clean-up efforts. The GuLF STUDY being conducted by the National Institute of Environmental Health Sciences is an epidemiological study to investigate potential adverse health effects among these oil spill clean-up workers. Many volatile chemicals were released from the oil into the air, including total hydrocarbons (THC), which is a composite of the volatile components of oil including benzene, toluene, ethylbenzene, xylene, and hexane (BTEXH). Our goal is to estimate exposure levels to these toxic chemicals for groups of oil spill workers in the study (hereafter called exposure groups, EGs) with likely comparable exposure distributions. A large number of air measurements were collected, but many EGs are characterized by datasets with a large percentage of censored measurements (below the analytic methods' limits of detection) and/or a limited number of measurements. We use THC for which there was less censoring to develop predictive linear models for specific BTEXH air exposures with higher degrees of censoring. We present a novel Bayesian hierarchical linear model that allows us to predict, for different EGs simultaneously, exposure levels of a second chemical while accounting for censoring in both THC and the chemical of interest. We illustrate the methodology by estimating exposure levels for several EGs on the Development Driller III, a rig vessel charged with drilling one of the relief wells. The model provided credible estimates in this example for geometric means, arithmetic means, variances, correlations, and regression coefficients for each group. This approach should be considered when estimating exposures in situations when multiple chemicals are correlated and have varying degrees of censoring. © The Author 2017
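
    A single-chemical, non-Bayesian analogue of the idea above is a censored-likelihood ("Tobit-style") regression: observations below the limit of detection contribute the probability mass below the LOD rather than a density term. The predictor, LOD, and coefficients below are illustrative assumptions, not GuLF STUDY values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
log_thc = rng.normal(2.0, 1.0, n)                       # predictor (simulated)
y_true = 0.3 + 0.8 * log_thc + rng.normal(0.0, 0.5, n)  # log target chemical
lod = 0.8                                               # hypothetical limit of detection
censored = y_true < lod                                 # below-LOD indicator
y_obs = np.where(censored, lod, y_true)

def nll(theta):
    """Left-censored normal negative log-likelihood."""
    b0, b1, log_s = theta
    s = np.exp(log_s)                                   # keep sigma positive
    mu = b0 + b1 * log_thc
    ll = np.where(censored,
                  norm.logcdf((lod - mu) / s),          # P(Y < LOD | x) for non-detects
                  norm.logpdf(y_obs, mu, s))            # density for detects
    return -ll.sum()

b0_hat, b1_hat, log_s_hat = minimize(nll, [0.0, 0.0, 0.0], method="BFGS").x
```

    The slope recovered from the censored likelihood should be close to the data-generating 0.8, whereas a naive regression on `y_obs` would be biased by the imputed LOD values.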

  19. Calculus of bivariant function

    OpenAIRE

    PTÁČNÍK, Jan

    2011-01-01

    This thesis deals with the introduction of function of two variables and differential calculus of this function. This work should serve as a textbook for students of elementary school's teacher. Each chapter contains a summary of basic concepts and explanations of relationships, then solved model exercises of the topic and finally the exercises, which should solve the student himself. Thesis have transmit to students basic knowledges of differential calculus of functions of two variables, inc...

  20. Bivariate copula in fitting rainfall data

    Science.gov (United States)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

    The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, since copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations: Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are taken from rain gauge stations located in the southern part of Peninsular Malaysia for the period 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
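
    A minimal sketch of this kind of AIC-based model comparison, here for just two of the six families (Clayton and Frank) with hand-coded copula densities and simulated pseudo-observations standing in for the Malaysian rainfall data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n = 1000

# Simulate from a Clayton copula (theta = 2) by the conditional method.
theta0 = 2.0
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta0 / (1.0 + theta0)) - 1.0) * u ** (-theta0) + 1.0) ** (-1.0 / theta0)

def clayton_loglik(t):
    s = u ** (-t) + v ** (-t) - 1.0
    return (np.log(1.0 + t) - (1.0 + t) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / t) * np.log(s)).sum()

def frank_loglik(t):
    num = t * (1.0 - np.exp(-t)) * np.exp(-t * (u + v))
    den = ((1.0 - np.exp(-t)) - (1.0 - np.exp(-t * u)) * (1.0 - np.exp(-t * v))) ** 2
    return np.log(num / den).sum()

def fit_aic(loglik, bounds):
    res = minimize_scalar(lambda t: -loglik(t), bounds=bounds, method="bounded")
    return res.x, 2.0 * 1 + 2.0 * res.fun    # AIC = 2k - 2*loglik, one parameter

theta_c, aic_c = fit_aic(clayton_loglik, (0.01, 20.0))
theta_f, aic_f = fit_aic(frank_loglik, (0.01, 30.0))
```

    On this simulated sample the Clayton family should achieve the lower AIC, mirroring the selection step of the study.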

  1. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
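
    The core of the parametric approach described above, replacing a censored point by its conditional expected value, can be sketched for normal errors with a known variance (a simplification of the paper's setting; the simulated design below is purely illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 3000
sigma = 1.0                                      # error sd, assumed known here
x = rng.normal(size=n)
t = 1.0 + 2.0 * x + rng.normal(0.0, sigma, n)    # true failure times
c = rng.normal(2.5, 1.0, n)                      # independent censoring times
y = np.minimum(t, c)                             # observed time
delta = t <= c                                   # True = uncensored

def cond_mean_above(mu, sigma, c):
    """E[T | T > c] for T ~ N(mu, sigma^2): mu + sigma * phi(a) / (1 - Phi(a))."""
    a = (c - mu) / sigma
    return mu + sigma * norm.pdf(a) / norm.sf(a)

# Buckley-James style iteration: impute censored points, refit, repeat.
b1, b0 = np.polyfit(x, y, 1)
for _ in range(25):
    mu = b0 + b1 * x
    y_star = np.where(delta, y, cond_mean_above(mu, sigma, c))
    b1, b0 = np.polyfit(x, y_star, 1)
```

    With the variance known and normal errors, the iteration is essentially an EM scheme for censored normal regression, so the slope should converge near the data-generating value of 2.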

  2. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables that affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.
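
    The likelihood at the heart of a bivariate probit can be sketched with scipy's bivariate normal CDF. For brevity this sketch profiles only the cross-equation correlation, holding the (known, simulated) index coefficients fixed; that is a simplification for illustration, not the full BPSM estimator:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
n = 800
rho_true = 0.5
x = rng.normal(size=n)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=n)
mu1 = 0.5 + 0.8 * x                  # index of equation 1 (treated as known)
mu2 = -0.2 + 0.6 * x                 # index of equation 2 (treated as known)
y1 = mu1 + e[:, 0] > 0
y2 = mu2 + e[:, 1] > 0
q1 = 2.0 * y1 - 1.0                  # sign trick: P(y1, y2) = Phi2(q1*mu1, q2*mu2; q1*q2*rho)
q2 = 2.0 * y2 - 1.0

def nll(r):
    pts = np.column_stack([q1 * mu1, q2 * mu2])
    s = q1 * q2
    ll = np.empty(n)
    for sign in (1.0, -1.0):         # group observations by the sign attached to rho
        m = s == sign
        cov = [[1.0, sign * r], [sign * r, 1.0]]
        ll[m] = np.log(multivariate_normal.cdf(pts[m], mean=[0.0, 0.0], cov=cov))
    return -ll.sum()

rho_hat = minimize_scalar(nll, bounds=(-0.95, 0.95), method="bounded").x
```

    The full model would also estimate the index coefficients; the sign trick shown here is the standard device that reduces all four outcome cells to one bivariate normal orthant probability.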

  3. Reliability for some bivariate beta distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

    In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate beta. The calculations involve the use of special functions.

  4. Reliability for some bivariate gamma distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

    In the area of stress-strength models, there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate gamma. The calculations involve the use of special functions.
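
    When no closed form is at hand, R = Pr(X < Y) for dependent marginals can be approximated by Monte Carlo. The sketch below builds dependent bivariate gamma variables from a Gaussian copula; the shapes and copula correlation are illustrative assumptions, not the paper's dependence structure:

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(6)
n = 200_000
rho = 0.6                                   # Gaussian copula correlation (assumed)
a_x, a_y = 2.0, 3.0                         # gamma shapes for stress X and strength Y

# Gaussian copula: correlated normals -> uniforms -> gamma marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = norm.cdf(z)
x = gamma.ppf(u[:, 0], a_x)
y = gamma.ppf(u[:, 1], a_y)
R_dep = np.mean(x < y)                      # reliability under dependence

# Independence benchmark for comparison
z_ind = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=n)
u_ind = norm.cdf(z_ind)
R_ind = np.mean(gamma.ppf(u_ind[:, 0], a_x) < gamma.ppf(u_ind[:, 1], a_y))
```

    For independent Gamma(2) and Gamma(3) variables with unit scale, X/(X+Y) ~ Beta(2, 3), so the exact value is P(Beta(2,3) < 1/2) = 11/16 = 0.6875, a handy check on the simulation.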

  5. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy, i.e., a pregnancy not intended by at least one of the parents, has undesirable consequences for both the family and society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by stratified and cluster sampling; the relevant variables were measured, and logistic regression, discriminant analysis, and probit regression models were fitted in SPSS version 21 to predict unwanted pregnancy. To compare the models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity, pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
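
    The ROC comparison reported above can be reproduced in miniature: fit logit and probit to the same simulated data by maximum likelihood and compare AUCs via the rank (Mann-Whitney) formula. The data-generating coefficients are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
eta = -0.5 + 1.2 * X[:, 1] - 0.8 * X[:, 2]        # logistic truth (assumed)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

def logit_nll(b):
    xb = X @ b
    return -(y * xb - np.logaddexp(0.0, xb)).sum()

def probit_nll(b):
    xb = X @ b
    return -(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb)).sum()

def auc(score, y):
    """Mann-Whitney AUC computed from ranks (no ties with continuous scores)."""
    r = np.argsort(np.argsort(score)) + 1.0
    n1 = y.sum()
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2.0) / (n1 * (n - n1))

b_logit = minimize(logit_nll, np.zeros(3), method="BFGS").x
b_probit = minimize(probit_nll, np.zeros(3), method="BFGS").x
auc_logit, auc_probit = auc(X @ b_logit, y), auc(X @ b_probit, y)
```

    Because the two links produce nearly proportional fitted indexes, the AUCs come out almost identical, which matches the study's 0.735 versus 0.733.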

  6. Analysing risk factors of co-occurrence of schistosomiasis haematobium and hookworm using bivariate regression models: Case study of Chikwawa, Malawi

    Directory of Open Access Journals (Sweden)

    Bruce B.W. Phiri

    2016-06-01

    Schistosomiasis and soil-transmitted helminth (STH) infections constitute a major public health problem in many parts of sub-Saharan Africa. In areas where the prevalence of geo-helminths and schistosomes is high, co-infection with multiple parasite species is common, resulting in a disproportionately elevated burden compared with single infections. Determining risk factors for co-infection intensity is important for better design of targeted interventions. In this paper, we examined risk factors for hookworm and S. haematobium co-infection intensity in Chikwawa district, southern Malawi, in 2005, using bivariate count models. Results show that hookworm and S. haematobium infections were highly localised, with a small proportion of individuals, especially school-aged children, harbouring most parasites. The risk of co-intensity with both hookworm and S. haematobium was high for all ages, although this diminished with increasing age, and increased with fishing (hookworm: coef. = 12.29; 95% CI = 11.50, 13.09; S. haematobium: coef. = 0.040; 95% CI = 0.0037, 3.832). Both infections were abundant in those with primary education (hookworm: coef. = 0.072; 95% CI = 0.056, 0.401; S. haematobium: coef. = 0.286; 95% CI = 0.034, 0.538). However, a much lower risk was observed for those who were farmers (hookworm: coef. = −0.349, 95% CI = −0.547, −0.150; S. haematobium: coef. = −0.239, 95% CI = −0.406, −0.072). In conclusion, our findings suggest that efforts to control helminth infections should be integrated and health promotion campaigns should be aimed at school-going children and adults who are in constant contact with water.

  7. Exploring Driver Injury Severity at Intersection: An Ordered Probit Analysis

    Directory of Open Access Journals (Sweden)

    Yaping Zhang

    2015-02-01

    It is well known that intersections are among the most hazardous locations; however, little is known about driver injury severity in intersection crashes. Hence, the main goal of this study was to further examine the different factors contributing to driver injury severity in fatal crashes at intersections. The data used for the present analysis come from the US DOT Fatality Analysis Reporting System (FARS) crash database for the year 2011. An ordered probit model was employed to fit the fatal crash data and analyze the factors impacting each injury severity level. The analysis results show that driver injury severity is significantly affected by many factors, including driver age, gender, and ethnicity; vehicle type and age (years of use); crash type; drunk driving; speeding; stop-sign violation; cognitively distracted driving; and seat belt usage. These findings form a solid basis for adopting corresponding measures to effectively reduce injury severity in intersection crashes. More insight into the effects of risk factors on driver injury severity could be acquired using more advanced statistical models.
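
    An ordered probit of the kind used here models a latent severity index cut at estimated thresholds. A minimal sketch with one covariate and three severity levels, on simulated data (all numeric values below are assumptions, not FARS estimates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
beta_true, c1_true, c2_true = 1.0, -0.5, 0.8     # assumed truth
latent = beta_true * x + rng.normal(size=n)      # latent severity index
y = np.digitize(latent, [c1_true, c2_true])      # observed severity: 0, 1, 2

def nll(theta):
    beta, c1, log_gap = theta
    c2 = c1 + np.exp(log_gap)                    # enforce c1 < c2
    xb = beta * x
    # category probabilities: Phi(c_upper - xb) - Phi(c_lower - xb)
    p0 = norm.cdf(c1 - xb)
    p1 = norm.cdf(c2 - xb) - p0
    p2 = 1.0 - p0 - p1
    p = np.choose(y, [p0, p1, p2])
    return -np.log(np.clip(p, 1e-12, None)).sum()

beta_hat, c1_hat, log_gap_hat = minimize(nll, [0.0, -1.0, 0.0], method="BFGS").x
c2_hat = c1_hat + np.exp(log_gap_hat)
```

    Parameterizing the second cutpoint through a positive gap keeps the thresholds ordered during optimization, a common trick for ordered-response likelihoods.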

  8. Comparison between two bivariate Poisson distributions through the ...

    African Journals Online (AJOL)

    These two models are specified by their probability mass functions. ... To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution that allows the correlation to be negative, zero, or positive.

  9. A metric for cross-sample comparisons using logit and probit

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    relative to an arbitrary scale, which makes the coefficients difficult both to interpret and to compare across groups or samples. Do differences in coefficients reflect true differences or differences in scales? This cross-sample comparison problem raises concerns for comparative research. However, we ... introduce a metric that can be compared across groups or samples, making it suitable for situations met in real applications in comparative research. Our derivations also extend to the probit and to ordered and multinomial models. The new metric is implemented in the Stata command nlcorr.

  10. Bivariate value-at-risk

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2007-10-01

    In this paper we extend the concept of value-at-risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.
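
    For the normally distributed special case the abstract mentions, a portfolio VaR has a closed form that a bivariate Monte Carlo simulation should reproduce. All return parameters below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 200_000
mu = np.array([0.0010, 0.0005])                 # mean daily returns (assumed)
vol = np.array([0.02, 0.03])                    # daily volatilities (assumed)
rho = 0.4
cov = np.outer(vol, vol) * np.array([[1.0, rho], [rho, 1.0]])
w = np.array([0.5, 0.5])                        # portfolio weights

returns = rng.multivariate_normal(mu, cov, size=n)
port = returns @ w
var95_mc = -np.quantile(port, 0.05)             # 95% value-at-risk, Monte Carlo

# closed form for bivariate normal returns
mu_p = w @ mu
sigma_p = np.sqrt(w @ cov @ w)
var95_exact = -(mu_p + sigma_p * norm.ppf(0.05))
```

    The simulation generalizes immediately to non-normal bivariate return models, which is where a VaR extension of this kind becomes genuinely useful.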

  11. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela

    2016-05-11

    We introduce a density regression model for the spectral density of a bivariate extreme value distribution that allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean-constrained densities on the unit interval. Numerical experiments illustrate the resilience of the methods in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg

  12. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    Science.gov (United States)

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is an important cause of domino effects in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in the QRA of this phenomenon. Some simple models have been proposed based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and the damage degree of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements of the present models were demonstrated through comparison with other models in the literature, taking into account parameters such as consistency between models and data and the depth of quantitative detail in the QRA.
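
    A probit damage model of this type maps a dose (here, peak overpressure) to a damage probability through a probit value Y = a + b ln(dose), with P = Phi(Y − 5) in the conventional probit scale. The coefficients below are placeholders for illustration, not the values fitted in the paper:

```python
import numpy as np
from scipy.stats import norm

# hypothetical probit coefficients for one equipment category (placeholders)
a, b = -5.0, 2.0

def damage_probability(overpressure_kpa):
    """P(damage) from a probit dose-response: Y = a + b*ln(p), P = Phi(Y - 5)."""
    y = a + b * np.log(overpressure_kpa)
    return norm.cdf(y - 5.0)

def threshold_overpressure(p_target):
    """Invert the probit to the overpressure giving a target damage probability."""
    y = norm.ppf(p_target) + 5.0
    return np.exp((y - a) / b)

p50 = threshold_overpressure(0.5)   # overpressure at 50% damage probability
p01 = threshold_overpressure(0.01)  # a candidate "threshold value" for QRA
```

    The inverse function is exactly the kind of threshold-value calculation the abstract refers to: pick a tolerable damage probability and read off the corresponding overpressure.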

  13. Probit vs. semi-nonparametric estimation: examining the role of disability on institutional entry for older adults.

    Science.gov (United States)

    Sharma, Andy

    2017-06-01

    The purpose of this study was to showcase an advanced methodological approach to modeling disability and institutional entry. Both are important areas to investigate given the ongoing aging of the United States population. By 2020, approximately 15% of the population will be 65 years and older. Many of these older adults will experience disability and require formal care. A probit analysis was employed to determine which disabilities were associated with admission into an institution (i.e. long-term care). Since this framework imposes strong distributional assumptions, misspecification leads to inconsistent estimators. To overcome this shortcoming, the analysis extended the probit framework with semi-nonparametric maximum likelihood estimation using Hermite polynomial expansions. Specification tests show that semi-nonparametric estimation is preferred over probit. In terms of the estimates, semi-nonparametric ratios equal 42 for cognitive difficulty, 64 for independent living, and 111 for self-care disability, while probit yields much smaller estimates of 19, 30, and 44, respectively. Public health professionals can use these results to better understand why certain interventions have not shown promise. Equally important, healthcare workers can use this research to evaluate which types of treatment plans may delay institutionalization and improve the quality of life for older adults. Implications for rehabilitation: With ongoing global aging, understanding the association between disability and institutional entry is important in devising successful rehabilitation interventions. Semi-nonparametric estimation is preferred to probit and shows that ambulatory and cognitive impairments present a high risk for institutional entry (long-term care). Informal caregiving and home-based care require further examination as forms of rehabilitation/therapy for certain types of disabilities.

  14. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has limitations when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve the separation problem is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in binary probit regression under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimator.
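
    The separation problem and Firth-type penalization can be demonstrated on a toy completely separated dataset. For simplicity the sketch uses the logistic link with the Jeffreys-prior penalty 0.5*log det(X'WX), Firth's original setting; the probit analogue follows the same pattern:

```python
import numpy as np
from scipy.optimize import minimize

# completely separated toy data: x < 0 -> y = 0, x > 0 -> y = 1
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(x), x])

def loglik(beta):
    xb = X @ beta
    return (y * xb - np.logaddexp(0.0, xb)).sum()

def firth_penalty(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    info = X.T @ (X * W[:, None])              # Fisher information X'WX
    return 0.5 * np.log(np.linalg.det(info))   # Jeffreys-prior penalty

mle = minimize(lambda b: -loglik(b), np.zeros(2), method="Nelder-Mead",
               options={"maxiter": 2000, "fatol": 1e-10, "xatol": 1e-10})
firth = minimize(lambda b: -(loglik(b) + firth_penalty(b)), np.zeros(2),
                 method="Nelder-Mead")

slope_mle = mle.x[1]       # grows without bound under separation
slope_firth = firth.x[1]   # stays finite
```

    Under separation the unpenalized likelihood keeps improving as the slope grows, so the "MLE" simply runs off toward infinity until the optimizer's tolerance stops it; the penalty term pulls the Firth estimate back to a finite value.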

  15. How Much Math Do Students Need to Succeed in Business and Economics Statistics? An Ordered Probit Analysis

    OpenAIRE

    Jeffrey J. Green; Courtenay C. Stone; Abera Zegeye; Thomas A. Charles

    2008-01-01

    Because statistical analysis requires both familiarity with and the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, students find it extremely difficult to learn business statistics. In this study, we use an ordered probit model to examine the effect of alternative prerequisite math course sequences on the grade performance of 1,684 busines...

  16. Bivariate extreme value with application to PM10 concentration analysis

    Science.gov (United States)

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin

    2015-05-01

    This study focuses on bivariate extremes of renormalized componentwise maxima with the generalized extreme value distribution as the marginal function. The limiting joint distributions of several parametric models are presented. Maximum likelihood estimation is employed for parameter estimation, and the best model is selected based on the Akaike Information Criterion. The weekly and monthly componentwise maxima series are extracted from the original observations of daily maximum PM10 data for two air quality monitoring stations located in Pasir Gudang and Johor Bahru. Ten years of data, from 2001 to 2010, are considered for both stations. The asymmetric negative logistic model is found to be the best-fitting bivariate extreme model for both the weekly and monthly componentwise maxima series. However, the dependence parameters show that the variables in the weekly maxima series are more dependent on each other than those in the monthly maxima series.
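
    The marginal step of a componentwise-maxima analysis, fitting a GEV to block maxima, can be sketched with `scipy.stats.genextreme` (whose shape parameter `c` equals −ξ in the usual GEV notation). The daily series below is simulated, not the PM10 data, and the bivariate dependence fitting is beyond this sketch:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(10)
years, days_per_week = 10, 7
n_weeks = years * 52

# simulated "daily" series; exponential tails lie in the Gumbel domain of attraction
daily = rng.exponential(scale=1.0, size=(n_weeks, days_per_week))
weekly_max = daily.max(axis=1)                  # weekly block maxima

c_hat, loc_hat, scale_hat = genextreme.fit(weekly_max)
# for exponential parents the limiting GEV is Gumbel:
# shape c near 0, location near log(block size) = log(7) ~ 1.95, scale near 1
```

    The same fit applied to each station's maxima gives the GEV marginals onto which a parametric dependence function (logistic, asymmetric logistic, etc.) is then imposed.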

  17. Stereology of extremes; bivariate models and computation

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Hlubinka, D.

    2003-01-01

    Roč. 5, č. 3 (2003), s. 289-308 ISSN 1387-5841 R&D Projects: GA AV ČR IAA1075201; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z1075907 Keywords : sample extremes * domain of attraction * normalizing constants Subject RIV: BA - General Mathematics

  18. The Market for Ph.D. Holders in Greece: Probit and Multinomial Logit Analysis of their Employment Status

    OpenAIRE

    Joan Daouli; Eirini Konstantina Nikolatou

    2015-01-01

    The objective of this paper is to investigate the factors influencing the probability that a Ph.D. holder in Greece will work in the academic sector, as well as the probability of his or her choosing employment in various sectors of industry and occupational categories. Probit/multinomial logit models are employed using the 2001 Census data. The empirical results indicate that being young, married, having a Ph.D. in Natural Sciences and/or in Engineering, granted by a Greek university, increa...

  19. Computational approach to Thornley's problem by bivariate operational calculus

    Science.gov (United States)

    Bazhlekova, E.; Dimovski, I.

    2012-10-01

    Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.

  20. Bivariate generalized Pareto distribution for extreme atmospheric particulate matter

    Science.gov (United States)

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin

    2015-02-01

    High particulate matter (PM10) levels are a prominent issue causing various impacts on human health and seriously affecting the economy. The asymptotic theory of extreme values is applied to analyze the relation between extreme PM10 data from two nearby air quality monitoring stations. The series of daily maximum PM10 for the Johor Bahru and Pasir Gudang stations are considered for the years 2001 to 2010. The 85% and 95% marginal quantiles are applied to determine the threshold values and hence construct the series of exceedances over the chosen threshold. The logistic, asymmetric logistic, negative logistic and asymmetric negative logistic models are considered as the dependence function for the joint distribution of a bivariate observation. Maximum likelihood estimation is employed for parameter estimation. The best-fitting model is chosen based on the Akaike Information Criterion and quantile plots. It is found that the asymmetric logistic model gives the best fit for the bivariate extreme PM10 data and shows weak dependence between the two stations.

  1. Influencing Factors of Currency Risk of Deposit Banks in Turkey by Using Probit Method

    Directory of Open Access Journals (Sweden)

    Serhat Yüksel

    2016-12-01

    Full Text Available In this paper, we aimed to analyze the factors that affect the currency risk of banks. Within this scope, annual data of 23 deposit banks for the period between 2005 and 2015 were evaluated, and a panel probit model was used to achieve this objective. This model was used for the first time on the subject of currency risk in this study. According to the results of the analysis, it was determined that 3 independent variables affect the currency risk of deposit banks in Turkey. Firstly, it was identified that there is a positive relationship between total assets and currency risk: as the size of a bank increases, it tends to take on more currency risk. In addition, it was found that there is a direct relationship between economic growth and the currency risk of the banks. This result suggests that when market stability increases, banks consider the market safer and increase their currency risk. Moreover, it was concluded that there is a negative relationship between the interest rate and the currency risk of the banks: when the interest rate decreases, uncertainty in the market is lowered, so banks take on higher currency risk in such markets.
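    Interpretations like these rest on the probit link: P(y = 1 | x) = Φ(x'β), with the marginal effect of regressor k equal to φ(x'β)·βk. A minimal sketch with invented coefficients (the paper's estimates are not reproduced here):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Hypothetical coefficients: intercept, log total assets, growth, interest rate
beta = [-1.2, 0.35, 0.20, -0.15]
x = [1.0, 4.5, 2.0, 1.8]           # one illustrative bank-year observation

xb = sum(b * v for b, v in zip(beta, x))
p_risk = norm_cdf(xb)               # P(bank takes on currency risk | x)
# Marginal effect of the interest rate on that probability: phi(x'b) * beta_k
me_interest = norm_pdf(xb) * beta[3]
print(f"P = {p_risk:.3f}, dP/d(interest) = {me_interest:.4f}")
```

With a negative interest-rate coefficient, the marginal effect is negative at any x, matching the sign pattern reported above.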

  2. A Comparison of Alternative Specifications of the College Attendance Equation with an Extension to two-stage Selectivity-Correction Models.

    Science.gov (United States)

    Hilmer, Michael J.

    2001-01-01

    Estimates a college-attendance equation for a common set of students (from the High School and Beyond Survey) using three popular econometric specifications: the multinomial logit, the ordered probit, and the bivariate probit. Estimated marginal effects do not differ significantly across the three specifications. Choice of specification may not…

  3. Bivariate Rayleigh Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2007-01-01

    Full Text Available Rayleigh (1880) observed that sea waves follow no law because of the complexities of the sea, but it has been seen that the probability distributions of wave heights, wave lengths, wave-induced pitch, and the heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the "significant waves" (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. At present, the shipbuilding industry knows less than any other construction industry about the service conditions under which it must operate. Only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem caused by the extensive variability of the sea and the corresponding response of the ships, although it appears feasible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources. This distribution is also connected with one or two dimensions and is sometimes referred to as the "random walk" frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and random with equal variances. We construct a bivariate Rayleigh distribution with marginal Rayleigh distribution functions and discuss its fundamental properties.
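    The closing remark can be checked by simulation: if X and Y are independent N(0, σ²) variates, then R = √(X² + Y²) follows a Rayleigh distribution with scale σ, mean σ√(π/2) and CDF 1 − exp(−x²/(2σ²)). A small Monte Carlo sketch:

```python
import math
import random

random.seed(1)
sigma = 2.0
n = 200_000
# R = sqrt(X^2 + Y^2) with X, Y independent N(0, sigma^2) is Rayleigh(sigma)
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

mean_mc = sum(r) / n
mean_theory = sigma * math.sqrt(math.pi / 2)      # Rayleigh mean
# Empirical CDF at x = sigma vs. closed form 1 - exp(-x^2 / (2 sigma^2))
cdf_mc = sum(x <= sigma for x in r) / n
cdf_theory = 1 - math.exp(-0.5)
print(mean_mc, mean_theory, cdf_mc, cdf_theory)
```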

  4. Comparison of Weibull and Probit Analysis in Toxicity Testing of ...

    African Journals Online (AJOL)

    HP

    Keywords: Hunteria umbellata, Weibull model, Acute toxicity, Median lethal dose (LD50). Received: 7 November ... (PBPK) models [14,15], and (v) biologically-based models: the Moolgavkar-Venzon-Knudson (MVK) model [16] and the Ellwein and Cohen model [17]. ... Nigeria, Ibadan, where a sample with number FHI107618 ...

  5. A Bivariate return period for levee failure monitoring

    Science.gov (United States)

    Isola, M.; Caporali, E.

    2017-12-01

    Levee breaches are strongly linked with the interaction processes among water, soil and structure, so many factors affect breach development. One of the main ones is the hydraulic load, characterized by intensity and duration, i.e. by the flood event hydrograph. Levee design is generally based on the magnitude of the hydraulic load, without considering fatigue failure due to load duration. Moreover, there are many cases in which levee breaches are caused by floods of magnitude lower than the design flood. In order to improve flood risk management strategies, we build here a procedure based on a multivariate statistical analysis of flood peak and volume, together with an analysis of past levee failure events. In particular, in order to define the probability of occurrence of the hydraulic load on a levee, a bivariate copula model is used to obtain the bivariate joint distribution of flood peak and volume. The flood peak is the expression of the load magnitude, while the volume is the expression of the stress over time. We consider the annual flood peak and the relative volume. The volume is given by the hydrograph area between the beginning and the end of the event. The beginning of the event is identified as an abrupt rise of the discharge by more than 20%. The end is identified as the point from which the receding limb is characterized by baseflow, using a nonlinear reservoir algorithm as the baseflow separation technique. With the aim of defining warning thresholds, we then consider the past levee failure events and the relative bivariate return period (BTr), compared with the estimate from a traditional univariate model. The discharge data of 30 hydrometric stations of the Arno River in Tuscany, Italy, for the period 1995-2016 are analysed. A database of levee failure events, recording for each event the location as well as the failure mode, is also created. The events were registered in the period 2000-2014 by EEA
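    As an illustration of how a bivariate "AND" return period of peak and volume compares with the univariate one, here is a sketch assuming a Gumbel copula with a hypothetical dependence parameter (the abstract does not specify the copula family or its value):

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (extreme-value) copula, theta >= 1; theta = 1 is independence."""
    return math.exp(-(((-math.log(u)) ** theta +
                       (-math.log(v)) ** theta) ** (1.0 / theta)))

def bivariate_return_period_and(u, v, theta, mu=1.0):
    """'AND' joint return period: both peak and volume exceed their quantiles.
    mu is the mean interarrival time of events (1 year for annual maxima)."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

u = v = 0.98               # marginal non-exceedance of peak and of volume
theta = 2.5                # hypothetical dependence parameter
t_joint = bivariate_return_period_and(u, v, theta)
t_univ = 1.0 / (1.0 - u)   # univariate return period: 50 years
print(f"univariate: {t_univ:.0f} yr, bivariate AND: {t_joint:.0f} yr")
```

The "AND" return period is never shorter than the univariate one, which is why past failures at sub-design floods show up more clearly in the bivariate analysis.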

  6. Appropriateness of Probit-9 in development of quarantine treatments for timber and timber commodities

    Science.gov (United States)

    Marcus Schortemeyer; Ken Thomas; Robert A. Haack; Adnan Uzunovic; Kelli Hoover; Jack A. Simpson; Cheryl A. Grgurinovic

    2011-01-01

    Following the increasing international phasing out of methyl bromide for quarantine purposes, the development of alternative treatments for timber pests becomes imperative. The international accreditation of new quarantine treatments requires verification standards that give confidence in the effectiveness of a treatment. Probit-9 mortality is a standard for treatment...

  7. Seeking alternatives to probit 9 when developing treatments for wood packaging materials under ISPM No. 15

    Science.gov (United States)

    R.A. Haack; A. Uzunovic; K. Hoover; J.A. Cook

    2011-01-01

    ISPM No. 15 presents guidelines for treating wood packaging material used in international trade. There are currently two approved phytosanitary treatments: heat treatment and methyl bromide fumigation. New treatments are under development, and are needed given that methyl bromide is being phased out. Probit 9 efficacy (100% mortality of at least 93 613 test organisms...
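    On the classical probit scale (probit = 5 + z-score), Probit 9 corresponds to a mortality of Φ(4) ≈ 99.9968%. A short sketch of that conversion, plus the order-of-magnitude test size needed to confirm such efficacy with zero survivors at 95% confidence (an illustrative calculation, which depending on rounding conventions differs slightly from the 93 613 figure quoted above):

```python
import math

def probit_to_mortality(probit):
    """Convert a probit value (mean 5, unit s.d.) to a mortality proportion."""
    z = probit - 5.0
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p9 = probit_to_mortality(9.0)          # Probit 9 ~ 0.999968 mortality
# Test size n such that, if true mortality were p9, observing zero
# survivors among n organisms has probability at most 5%.
n = math.log(0.05) / math.log(p9)
print(f"Probit 9 mortality: {p9:.6f}; n on the order of {n:.0f}")
```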

  8. Default probabilities, CDS premiums and downgrades : A probit-MIDAS analysis

    NARCIS (Netherlands)

    Freitag, L.

    2014-01-01

    This paper examines the relationship between sovereign credit default swaps (CDS) and sovereign rating changes of European countries. To this aim, a new estimator is introduced which merges mixed data sampling (MIDAS) with probit regression. Simulations show that the estimator has good properties in

  9. Two new bivariate zero-inflated generalized Poisson distributions with a flexible correlation structure

    Directory of Open Access Journals (Sweden)

    Chi Zhang

    2015-05-01

    Full Text Available To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.
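    The bivariate constructions are built from zero-inflated generalized Poisson margins. A minimal sketch of a univariate ZIGP pmf with hypothetical parameter values (the paper's bivariate Type I/II forms add the dependency factor λ on top of this):

```python
import math

def gp_pmf(x, theta, lam):
    """Generalized Poisson pmf (Consul), theta > 0, 0 <= lam < 1,
    computed in log space to avoid overflow for large counts."""
    logp = (math.log(theta) + (x - 1) * math.log(theta + lam * x)
            - (theta + lam * x) - math.lgamma(x + 1))
    return math.exp(logp)

def zigp_pmf(x, phi, theta, lam):
    """Zero-inflated GP: extra mass phi at zero, (1 - phi) times GP elsewhere."""
    p = (1 - phi) * gp_pmf(x, theta, lam)
    return phi + p if x == 0 else p

phi, theta, lam = 0.3, 1.5, 0.2     # illustrative values, not fitted ones
support = range(200)
total = sum(zigp_pmf(x, phi, theta, lam) for x in support)
mean = sum(x * zigp_pmf(x, phi, theta, lam) for x in support)
print(total, mean)                  # sums to ~1; mean = (1-phi)*theta/(1-lam)
```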

  10. SNPMClust: Bivariate Gaussian Genotype Clustering and Calling for Illumina Microarrays

    Directory of Open Access Journals (Sweden)

    Stephen W. Erickson

    2016-07-01

    Full Text Available SNPMClust is an R package for genotype clustering and calling with Illumina microarrays. It was originally developed for studies using the GoldenGate custom genotyping platform but can be used with other Illumina platforms, including the Infinium BeadChip. The algorithm first rescales the fluorescent signal intensity data and adds empirically derived pseudo-data to minor-allele genotype clusters, then uses the package mclust for bivariate Gaussian model fitting. We compared the accuracy and sensitivity of SNPMClust to that of GenCall, Illumina's proprietary algorithm, on a data set of 94 whole-genome amplified buccal (cheek swab) DNA samples. These samples were genotyped on a custom panel which included 1064 SNPs for which the true genotype was known with high confidence. SNPMClust produced uniformly lower false call rates over a wide range of overall call rates.

  11. The relative performance of bivariate causality tests in small samples

    NARCIS (Netherlands)

    Bult, J..R.; Leeflang, P.S.H.; Wittink, D.R.

    1997-01-01

    Causality tests have been applied to establish directional effects and to reduce the set of potential predictors. For the latter type of application only bivariate tests can be used. In this study we compare bivariate causality tests. Although the problem addressed is general and could benefit

  12. Stress-strength reliability for general bivariate distributions

    Directory of Open Access Journals (Sweden)

    Alaa H. Abdel-Hamid

    2016-10-01

    Full Text Available An expression for the stress-strength reliability R = P(X1 < X2) is obtained when X1 and X2 follow a general bivariate compound distribution. Such distributions include the bivariate compound Weibull, bivariate compound Gompertz and bivariate compound Pareto, among others. In the parametric case, the maximum likelihood estimates of the parameters and the reliability function R are obtained. In the non-parametric case, point and interval estimates of R are developed using Govindarajulu's asymptotic distribution-free method when X1 and X2 are dependent. An example is given when the population distribution is bivariate compound Weibull. Simulation is performed, based on different sample sizes, to study the performance of the estimates.
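    A Monte Carlo check of R = P(X1 < X2) is straightforward. The sketch below assumes independent Weibull margins with a common shape purely for illustration (the paper's setting covers dependent bivariate compound distributions), because that case has a closed form to compare against:

```python
import math
import random

random.seed(7)

def weibull(shape, scale):
    """Draw one Weibull(shape, scale) variate by inversion."""
    u = random.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

n = 100_000
x1 = [weibull(2.0, 1.0) for _ in range(n)]   # stress
x2 = [weibull(2.0, 1.5) for _ in range(n)]   # strength
r_mc = sum(a < b for a, b in zip(x1, x2)) / n

# Closed form for independent Weibulls with common shape k:
# R = scale2^k / (scale1^k + scale2^k)
r_exact = 1.5 ** 2 / (1.0 ** 2 + 1.5 ** 2)
print(r_mc, r_exact)
```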

  13. Identifying the Factors Influence Turkish Deposit Banks to Join Corporate Social Responsibility Activities by Using Panel Probit Method

    Directory of Open Access Journals (Sweden)

    Serhat Yuksel

    2017-02-01

    Full Text Available This study aims to determine the factors influencing banks' participation in corporate social responsibility (CSR) activities. Within this scope, annual data of 23 deposit banks in Turkey for the period between 2005 and 2015 were taken into consideration, and a panel probit model was used in the analysis to achieve this objective. According to the results of the analysis, it was determined that there is a negative relationship between CSR activities and the non-performing loans ratio. This shows that banks prefer not to undertake social responsibility activities in times of higher financial losses. It was also identified that there is a positive relationship between return on assets and the corporate social responsibility activities of the banks. In other words, Turkish deposit banks with higher profitability join more CSR activities than others.

  14. Ordered Probit Analysis of Consumers’ Preferences for Milk and Meat Quality Attributes in the Emerging Cities of Southern India

    Directory of Open Access Journals (Sweden)

    S. PRIYADHARSINI

    2017-09-01

    Full Text Available In order to assess consumer preferences for milk and meat quality attributes, a study was carried out in two second-tier cities of Tamil Nadu. Personal interviews were conducted to collect data from 160 respondents chosen through a multistage sampling procedure in each of the two cities selected for this study. The ordered probit model fitted for the attributes of milk showed that family size had a significant positive association with preferences for the texture, low fat and low price of milk; educated consumers paid greater attention to the taste, safety, flavour, packaging and low-fat attributes of milk; and low-income consumers placed less importance on most of the attributes of milk. The ordered probit model for meat revealed that as family size increased, consumers were likely to give more importance to ageing and tenderness and less importance to the leanness of meat. Male consumers paid greater attention to colour, while females were not concerned with tenderness, cooking quality and price. As the education level increased, consumers became more quality- and price-conscious. Households with children placed more importance on the tenderness and taste attributes of meat, whereas households with aged people opted for the colour, taste, tenderness, cooking quality, leanness and price attributes. Low-income consumers placed less importance on quality attributes, and respondents performing more physical activity paid less attention to leanness and more importance to the price of the meat. This suggests the need to enhance the production of quality livestock products, together with developing a well-organized distribution system.
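    Predicted category probabilities in an ordered probit follow P(y = j | x) = Φ(c_j − x'β) − Φ(c_{j−1} − x'β), with c_0 = −∞ and the top cutpoint +∞. A sketch with invented cutpoints and index values, not the study's estimates:

```python
import math

def norm_cdf(z):
    """Standard normal CDF (math.erf handles +/- infinity)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cuts):
    """P(y = j | x) = Phi(c_j - x'b) - Phi(c_{j-1} - x'b)."""
    bounds = [-math.inf] + list(cuts) + [math.inf]
    return [norm_cdf(bounds[j + 1] - xb) - norm_cdf(bounds[j] - xb)
            for j in range(len(bounds) - 1)]

# Hypothetical cutpoints for a 4-level importance rating
cuts = [-0.8, 0.3, 1.2]
for xb in (0.0, 0.9):       # e.g. a small vs. a large covariate index
    probs = ordered_probit_probs(xb, cuts)
    print([round(p, 3) for p in probs])
```

Raising the linear index shifts probability mass toward the higher rating categories, which is the sense in which "more importance" is read off the fitted model.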

  15. STUDI PERBANDINGAN ANTARA ALGORITMA BIVARIATE MARGINAL DISTRIBUTION DENGAN ALGORITMA GENETIKA

    Directory of Open Access Journals (Sweden)

    Chastine Fatichah

    2006-01-01

    Full Text Available The Bivariate Marginal Distribution Algorithm (BMDA) is an extension of the Estimation of Distribution Algorithm. This heuristic algorithm introduces a new approach to recombination for generating new individuals that does not use the crossover and mutation operators of a genetic algorithm. Instead, BMDA uses the connectivity between pairs of variables (genes) to generate new individuals, and this connectivity is discovered during the optimization process. In this research, the performance of a genetic algorithm with one-point crossover is compared with the performance of BMDA on the OneMax problem, the De Jong F2 function, and the Traveling Salesman Problem. The experimental results show that the performance of both algorithms depends on their respective parameters and on the population size used. For OneMax with a small problem size, the genetic algorithm performs better, needing fewer iterations and less time to reach the optimum. However, BMDA achieves better optimization results for OneMax with a large problem size. For the De Jong F2 function, the genetic algorithm outperforms BMDA in the number of iterations and in time. For the Traveling Salesman Problem, BMDA obtains better optimization results than the genetic algorithm.
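    The genetic-algorithm side of the comparison can be sketched on OneMax; the settings below (population, generations, mutation rate) are arbitrary, not those of the study:

```python
import random

random.seed(0)
N, POP, GENS = 40, 60, 80           # problem size and GA settings (arbitrary)

def fitness(ind):                   # OneMax: number of 1 bits
    return sum(ind)

def one_point_crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop = [
        [b ^ (random.random() < 1.0 / N)     # bit-flip mutation, rate 1/N
         for b in one_point_crossover(tournament(pop), tournament(pop))]
        for _ in range(POP)
    ]

best = max(pop, key=fitness)
print(fitness(best), "of", N)
```

BMDA would replace the crossover/mutation step with sampling from a learned bivariate marginal model over gene pairs.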

  16. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
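    The flavour of kernel graduation can be conveyed with a simplified beta-shaped discrete kernel (a stand-in for illustration, not the authors' exact discrete beta kernel or its adaptive-bandwidth variant):

```python
import math
import random

def beta_log_pdf(t, a, b):
    """Log density of Beta(a, b) at t in (0, 1)."""
    return ((a - 1) * math.log(t) + (b - 1) * math.log(1 - t)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def discrete_beta_weights(x, ages, h):
    """Beta-shaped weights centred near age x on the discrete age grid;
    h controls the spread. A simplified stand-in, not the exact estimator."""
    m = max(ages)
    a = x / (m * h) + 1.0
    b = (m - x) / (m * h) + 1.0
    w = [math.exp(beta_log_pdf((y + 0.5) / (m + 1), a, b)) for y in ages]
    s = sum(w)
    return [wi / s for wi in w]

random.seed(3)
ages = list(range(101))
# Crude synthetic 'raw' death rates: exponential ageing plus noise
raw = [0.0005 * math.exp(0.09 * y) * random.uniform(0.7, 1.3) for y in ages]
smooth = [sum(w * q for w, q in zip(discrete_beta_weights(x, ages, 0.02), raw))
          for x in ages]
print(round(smooth[60], 5))
```

The beta shape adapts near the boundaries of the age range, which is one of the advantages claimed for discrete beta kernels over symmetric ones.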

  17. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.

  18. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting; Yang, Jingping; Huang, Jianhua Z.

    2011-01-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
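    A BF copula itself is easy to evaluate, since it is just a convex combination of the two Fréchet bounds and the independence copula:

```python
def bf_copula(u, v, p_plus, p0, p_minus):
    """Bivariate Frechet copula: a mixture of the comonotone (min),
    independent (uv) and countermonotone (max(u+v-1, 0)) pieces.
    The three weights must be nonnegative and sum to one."""
    assert abs(p_plus + p0 + p_minus - 1.0) < 1e-12
    return (p_plus * min(u, v) + p0 * u * v
            + p_minus * max(u + v - 1.0, 0.0))

# Illustrative weights; Spearman's rho of a BF copula is p_plus - p_minus
val = bf_copula(0.3, 0.7, 0.5, 0.3, 0.2)
print(val)
```

The patched scheme described above applies such mixtures locally on sub-rectangles of the unit square rather than globally.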

  19. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    Science.gov (United States)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that convenient bivariate distributions are often unable to model rainfall-runoff variables, because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, which includes two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data are collected from the small agricultural experimental watersheds located in the semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit statistical tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities which further

  20. Bivariational calculations for radiation transfer in an inhomogeneous participating media

    International Nuclear Information System (INIS)

    El Wakil, S.A.; Machali, H.M.; Haggag, M.H.; Attia, M.T.

    1986-07-01

    Equations for radiation transfer are obtained for dispersive media with a space-dependent albedo. A bivariational bound principle is used to calculate the reflection and transmission coefficients for such media. Numerical results are given and compared. (author)

  1. A bivariate optimal replacement policy for a multistate repairable system

    International Nuclear Information System (INIS)

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.

    2007-01-01

    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results

  2. A comparison of bivariate and univariate QTL mapping in livestock populations

    Directory of Open Access Journals (Sweden)

    Sorensen Daniel

    2003-11-01

    Full Text Available Abstract This study presents a multivariate, variance component-based QTL mapping model implemented via restricted maximum likelihood (REML. The method was applied to investigate bivariate and univariate QTL mapping analyses, using simulated data. Specifically, we report results on the statistical power to detect a QTL and on the precision of parameter estimates using univariate and bivariate approaches. The model and methodology were also applied to study the effectiveness of partitioning the overall genetic correlation between two traits into a component due to many genes of small effect, and one due to the QTL. It is shown that when the QTL has a pleiotropic effect on two traits, a bivariate analysis leads to a higher statistical power of detecting the QTL and to a more precise estimate of the QTL's map position, in particular in the case when the QTL has a small effect on the trait. The increase in power is most marked in cases where the contributions of the QTL and of the polygenic components to the genetic correlation have opposite signs. The bivariate REML analysis can successfully partition the two components contributing to the genetic correlation between traits.

  3. Migration plans of the rural populations of the Third World countries: a probit analysis of micro-level data from Asia, Africa, and Latin America.

    Science.gov (United States)

    Mcdevitt, T M; Hawley, A H; Udry, J R; Gadalla, S; Leoprapai, B; Cardona, R

    1986-07-01

    This study 1) examines the extent to which a given set of microlevel factors has predictive value in different socioeconomic settings and 2) demonstrates the utility of a probit estimation technique in examining the plans of rural populations to migrate. Data were collected in 1977-1979 in Thailand, Egypt, and Colombia, 3 countries which differ in culture, extent of urbanization, and the proportion of the labor force engaged in nonextractive industries. The researchers used identical questionnaires and obtained interviews in 4 rural villages within the "migration shed" of each country's capital city. There were 1088 rural-resident men and women interviewed in Thailand, 1088 in Colombia, and 1376 in Egypt. The researchers gathered information about year-to-year changes in residence, marital status, fertility, housing, employment status, occupation, and industry. While in all 3 countries return moves are relatively frequent, especially among males, the proportion of migrants who have moved 3 or more times does not rise above 10%. The model used portrays the formation of an individual's migration intentions as the outcome of a decision process involving the subjective weighing of perceived differentials in well-being associated with the current residence and 1 or more potential destinations, taking into account the direct relocation costs and the ability to finance a move. The researchers used dichotomous probit and ordinal probit techniques and 4 variations on the dependent variable to generate some of the results. The only expectancy variable significant in all countries is age. Education is also positively and significantly associated with intentions to move for both sexes in Colombia and Egypt. Marital status is a deterrent to migration plans for males in Colombia and for both sexes in Egypt. Previous migration experience fails to show any significant relationship to the propensity to move.
Conclusions drawn from the data include: 1) the effects of age and economic status appear to increase

  4. GIS-Based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    24

    This study shows the potency of two GIS-based data-driven bivariate techniques, namely ... In view of these weaknesses, there is a strong requirement for reassessment of ... West Bengal (India) using remote sensing, geographical information system and multi-

  5. Assessing the copula selection for bivariate frequency analysis ...

    Indian Academy of Sciences (India)

    58

    Copulas are applied to overcome the restriction of traditional bivariate frequency ... frequency analysis methods cannot describe the random variable properties that ... In order to overcome the limitation of multivariate distributions, a copula is a ..... The Mann-Kendall (M-K) test is a non-parametric statistical test which is used ...

  6. A New Measure Of Bivariate Asymmetry And Its Evaluation

    International Nuclear Information System (INIS)

    Ferreira, Flavio Henn; Kolev, Nikolai Valtchev

    2008-01-01

    In this paper we propose a new measure of bivariate asymmetry, based on conditional correlation coefficients. A decomposition of the Pearson correlation coefficient in terms of its conditional versions is studied and an example of application of the proposed measure is given.
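    A crude way to see what conditional correlation coefficients capture: split a bivariate sample at a threshold of one variable and compare the Pearson correlations of the two halves (an illustration of the idea, not the authors' proposed measure):

```python
import math
import random

random.seed(11)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

# Bivariate sample with asymmetric dependence: tighter coupling for large x
xs, ys = [], []
for _ in range(20_000):
    x = random.gauss(0, 1)
    noise = 0.3 if x > 0 else 1.5
    xs.append(x)
    ys.append(x + random.gauss(0, noise))

upper = [(x, y) for x, y in zip(xs, ys) if x > 0]
lower = [(x, y) for x, y in zip(xs, ys) if x <= 0]
r_up = pearson(*zip(*upper))
r_lo = pearson(*zip(*lower))
asym = r_up - r_lo              # crude asymmetry indicator
print(round(r_up, 3), round(r_lo, 3), round(asym, 3))
```

A symmetric dependence structure would give roughly equal conditional correlations, so a large gap signals the kind of asymmetry the proposed measure quantifies.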

  7. Building Bivariate Tables: The compareGroups Package for R

    Directory of Open Access Journals (Sweden)

    Isaac Subirana

    2014-05-01

    Full Text Available The R package compareGroups provides functions meant to facilitate the construction of bivariate tables (descriptives of several variables for comparison between groups) and generates reports in several formats (LaTeX, HTML or plain-text CSV). Moreover, bivariate tables can be viewed directly on the R console in a nice format. A graphical user interface (GUI) has been implemented to build the bivariate tables more easily for those users who are not familiar with the R software. Some new functions and methods have been incorporated in the newest version of the compareGroups package (version 1.x) to deal with time-to-event variables, stratifying tables, merging several tables, and revising the statistical methods used. The GUI interface also has been improved, making it much easier and more intuitive to set the inputs for building the bivariate tables. The first version (version 0.x) and this version were presented at the 2010 useR! conference (Sanz, Subirana, and Vila 2010) and the 2011 useR! conference (Sanz, Subirana, and Vila 2011), respectively. Package compareGroups is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=compareGroups.

  8. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines have an impact on the field of computer graphics, in particular on reverse engineering.

  9. An Instrumental Variable Probit (IVP Analysis on Depressed Mood in Korea: The Impact of Gender Differences and Other Socio-Economic Factors

    Directory of Open Access Journals (Sweden)

    Lara Gitto

    2015-08-01

    -economic factors (such as education, residence in metropolitan areas, and so on). As the results of the Wald test carried out after the estimations did not allow rejection of the null hypothesis of endogeneity, a probit model was run as well. Results Overall, women tend to develop depression more frequently than men. There is an inverse effect of education on depressed mood (a -24.6% probability of reporting a depressed mood due to high-school education, as emerges from the probit model's marginal effects), while marital status and the number of family members may act as protective factors (a -1.0% probability of reporting a depressed mood for each family member). Depression is significantly associated with socio-economic conditions, such as work and income. Living in metropolitan areas is inversely correlated with depression (a -4.1% probability of reporting a depressed mood, estimated through the probit model): this could be explained by considering that, in rural areas, people rarely have immediate access to high-quality health services. Conclusion This study outlines the factors that are most likely to impact depression, and applies an IVP model to take into account the potential endogeneity of some of the predictors of depressed mood, such as female participation in the workforce and health status. A probit model has been estimated as well. Depression is associated with a wide range of socio-economic factors, although the strength and direction of the association can differ by gender. Prevention approaches to counter depressive symptoms might take into consideration the evidence offered by the present study.
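The reported marginal effects can be illustrated with a minimal probit fit. The sketch below uses simulated stand-in data (the coefficient values are hypothetical, not the Korean survey estimates) and computes an average marginal effect as the mean of phi(x'b) times the coefficient, which is how probit marginal effects like the -24.6% figure arise.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated stand-in data: one binary regressor standing in for
# "high-school education"; beta_true is an illustrative assumption.
rng = np.random.default_rng(1)
n = 5_000
educ = rng.integers(0, 2, size=n)
beta_true = np.array([0.2, -0.7])          # intercept, education effect
X = np.column_stack([np.ones(n), educ])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    # Probit log-likelihood: y ~ Bernoulli(Phi(x'b)); clip for stability.
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x

# Average marginal effect of a probit regressor: mean of phi(x'b) * b_k.
ame_educ = np.mean(norm.pdf(X @ beta_hat)) * beta_hat[1]
print(f"estimated education coefficient: {beta_hat[1]:.3f}, AME: {ame_educ:.3f}")
```

The AME here is the change in the probability of the outcome attributable to the regressor, averaged over the sample, mirroring the interpretation of the percentages quoted in the abstract.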

  10. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors.

    Science.gov (United States)

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-04-16

    , residence in metropolitan areas, and so on). As the results of the Wald test carried out after the estimations did not allow rejection of the null hypothesis of endogeneity, a probit model was run as well. Overall, women tend to develop depression more frequently than men. There is an inverse effect of education on depressed mood (a -24.6% probability of reporting a depressed mood due to high-school education, as emerges from the probit model's marginal effects), while marital status and the number of family members may act as protective factors (a -1.0% probability of reporting a depressed mood for each family member). Depression is significantly associated with socio-economic conditions, such as work and income. Living in metropolitan areas is inversely correlated with depression (a -4.1% probability of reporting a depressed mood, estimated through the probit model): this could be explained by considering that, in rural areas, people rarely have immediate access to high-quality health services. This study outlines the factors that are most likely to impact depression, and applies an IVP model to take into account the potential endogeneity of some of the predictors of depressed mood, such as female participation in the workforce and health status. A probit model has been estimated as well. Depression is associated with a wide range of socio-economic factors, although the strength and direction of the association can differ by gender. Prevention approaches to counter depressive symptoms might take into consideration the evidence offered by the present study. © 2015 by Kerman University of Medical Sciences.

  11. Comparison of Six Methods for the Detection of Causality in a Bivariate Time Series

    Czech Academy of Sciences Publication Activity Database

    Krakovská, A.; Jakubík, J.; Chvosteková, M.; Coufal, David; Jajcay, Nikola; Paluš, Milan

    2018-01-01

    Vol. 97, No. 4 (2018), Article No. 042207. ISSN 2470-0045 R&D Projects: GA MZd(CZ) NV15-33250A Institutional support: RVO:67985807 Keywords: comparative study * causality detection * bivariate models * Granger causality * transfer entropy * convergent cross mappings Impact factor: 2.366, year: 2016 https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.042207

  12. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    Science.gov (United States)

    Zscheischler, Jakob; Orth, Rene; Seneviratne, Sonia I.

    2017-07-01

    Crops are vital for human society. Crop yields vary with climate and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have investigated mostly linear relationships between temperature and precipitation and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on crop yields. Impacts are, however, often non-linearly related to multivariate climate conditions. Here we derive bivariate return periods of climate conditions as indicators for climate variability along different temperature-precipitation gradients. We show that in Europe, linear models based on bivariate return periods of specific climate conditions explain on average significantly more crop yield variability (42 %) than models relying directly on temperature and precipitation as predictors (36 %). Our results demonstrate that most often crop yields increase along a gradient from hot and dry to cold and wet conditions, with lower yields associated with hot and dry periods. The majority of crops are most sensitive to climate conditions in summer and to maximum temperatures. The use of bivariate return periods allows the integration of non-linear impacts into climate-crop yield analysis. This offers new avenues to study the link between climate and crop yield variability and suggests that they are possibly more strongly related than what is inferred from conventional linear models.
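The notion of a bivariate ("hot and dry") return period can be sketched empirically. The snippet below uses synthetic, negatively coupled temperature-precipitation data (all numbers are illustrative assumptions, not the study's) and estimates the joint exceedance return period, which necessarily exceeds either marginal return period.

```python
import numpy as np

# Synthetic annual summer temperature (degC) and precipitation (mm);
# hot summers tend to be dry (negative coupling), as in many regions.
rng = np.random.default_rng(5)
n_years = 10_000
temp = rng.normal(22.0, 2.0, size=n_years)
precip = 250.0 - 20.0 * (temp - 22.0) + rng.normal(0.0, 30.0, size=n_years)

t_thr = np.quantile(temp, 0.9)       # "hot": top 10 % of temperatures
p_thr = np.quantile(precip, 0.1)     # "dry": bottom 10 % of precipitation

# Empirical joint ("AND") exceedance probability and its return period.
p_joint = np.mean((temp > t_thr) & (precip < p_thr))
T_joint = 1.0 / p_joint              # years
T_marginal = 1.0 / 0.1               # 10-year marginal return period
print(f"marginal return period: {T_marginal:.0f} a, joint: {T_joint:.1f} a")
```

Because the joint event is a subset of each marginal event, the joint return period is always at least as long; under strong dependence it stays close to the marginal one, under independence it would be far longer.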

  13. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    Science.gov (United States)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic condition. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range on the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of the parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.
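Why conditioning on a covariate through a joint (copula-based) distribution narrows the predictive spread can be sketched with a Gaussian copula. The dependence value and variable names below are assumptions for illustration, not the study's fitted model: conditioning the "ω" margin on a high value of the "NDVI" margin shrinks its interquartile range relative to the unconditional (univariate) margin.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
rho = 0.8                                   # assumed omega-NDVI dependence
n = 100_000

# Gaussian copula on normal scores, then map to copula (uniform) scale.
z_ndvi = rng.standard_normal(n)
z_omega = rho * z_ndvi + np.sqrt(1 - rho**2) * rng.standard_normal(n)
u_omega = norm.cdf(z_omega)                 # copula-scale "omega"
u_ndvi = norm.cdf(z_ndvi)

def iqr(a):
    q1, q3 = np.quantile(a, [0.25, 0.75])
    return q3 - q1

iqr_marginal = iqr(u_omega)                 # univariate spread
high_ndvi = u_ndvi > 0.9                    # condition on high NDVI
iqr_conditional = iqr(u_omega[high_ndvi])   # bivariate (conditional) spread
print(f"marginal IQR: {iqr_marginal:.3f}, conditional IQR: {iqr_conditional:.3f}")
```

The stronger the dependence, the larger the reduction, which parallels the abstract's finding that the bivariate model helps most where NDVI is informative.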

  14. Is the probit 9 security level appropriate for disinfestation using gamma-radiation

    International Nuclear Information System (INIS)

    Ohta, A.T.; Kaneshiro, K.Y.; Kurihara, J.S.; Kanegawa, K.M.; Nagamine, L.R.

    1985-01-01

    The probit 9 concept requires that a given treatment result in 99.9968 percent mortality in an estimated population of 100,000 individuals. The USDA-Hawaiian Fruit Fly Investigations Laboratory has determined that 0.26 kGy is the minimum absorbed dose of gamma-radiation required to prevent adult emergence of the three species of fruit flies in Hawaii: the Mediterranean fruit fly, Ceratitis capitata; the Oriental fruit fly, Dacus dorsalis; and the melon fly, Dacus cucurbitae. However, at dosages higher than 0.26 kGy, the authors observed relatively high rates of egg hatch (10-30 percent). In addition, when eggs are treated at 0.26 kGy, those larvae that do hatch may develop into third-instar larvae, and their feeding may decrease the marketability of the fruits. Furthermore, there is some uncertainty as to whether or not importing countries would accept fruits with any living larvae in the shipment. For these reasons, the authors tried to determine the minimum absorbed dosages required to obtain mortality in mature eggs and larvae of the medfly. Results of the research showed that although high egg and larval mortality was observed at dosages of 0.50 to 0.60 kGy in nearly all of the fruit types and varieties studied, 100 percent mortality of mature eggs and larvae was not attained at these dosages. Nevertheless, the authors think that an increase in the minimum absorbed dose above that determined using the probit 9 concept (i.e., 0.26 kGy) should be considered, because they were able to ascertain that, at dosages from 0.40 to 0.60 kGy, not only is egg hatch greatly reduced but the larvae hatching from these eggs developed only to the late first or early second larval instar stages.
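The 99.9968 percent figure follows directly from the normal distribution underlying the classical probit scale (probit = Φ⁻¹(p) + 5), so probit 9 corresponds to Φ(9 − 5):

```python
from scipy.stats import norm

# Probit 9 on the classical probit scale: probit = Phi^{-1}(p) + 5,
# so probit 9 corresponds to a mortality of Phi(9 - 5) = Phi(4).
mortality = norm.cdf(9 - 5)
survivors_per_100k = (1 - mortality) * 100_000
print(f"mortality = {mortality:.6%}")              # ~99.9968 %
print(f"expected survivors per 100,000: {survivors_per_100k:.1f}")
```

That is, probit 9 allows roughly three expected survivors per 100,000 treated individuals, which is the security level the abstract questions.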

  15. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    Full Text Available The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center-of-pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied to the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators in identifying differences in standing posture between groups.
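The rotation-frequency indicator described above can be sketched for a single synthetic IMF via the analytic signal. The signal below is a stand-in for an extracted IMF, not real stabilogram data, and the "area" proxy is an illustrative choice (mean squared envelope times π), not necessarily the paper's exact definition.

```python
import numpy as np
from scipy.signal import hilbert

# Stand-in IMF: a single 0.5 Hz oscillation, 60 s at 100 Hz.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
imf = np.sin(2 * np.pi * 0.5 * t)

analytic = hilbert(imf)                  # IMF + i * (Hilbert transform of IMF)
phase = np.unwrap(np.angle(analytic))

# Average rotation frequency: mean phase advance per unit time / (2*pi),
# trimming 1 s at each end to avoid boundary effects of the transform.
inst_freq = np.diff(phase) * fs / (2 * np.pi)
mean_rotation_hz = inst_freq[100:-100].mean()

# "Area of the circle" traced in the complex plane: pi * mean squared envelope.
area_proxy = np.pi * np.mean(np.abs(analytic[100:-100]) ** 2)
print(f"mean rotation frequency: {mean_rotation_hz:.3f} Hz")
```

For this pure tone the recovered rotation frequency matches the signal frequency and the trace is a unit circle; for real IMFs both indicators vary with postural sway.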

  16. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  17. Chain Plot: A Tool for Exploiting Bivariate Temporal Structures

    OpenAIRE

    Taylor, CC; Zempeni, A

    2004-01-01

    In this paper we present a graphical tool useful for visualizing the cyclic behaviour of bivariate time series. We investigate its properties and link it to the asymmetry of the two variables concerned. We also suggest adding approximate confidence bounds to the points on the plot and investigate the effect of lagging to the chain plot. We conclude our paper by some standard Fourier analysis, relating and comparing this to the chain plot.

  18. Spectrum-based estimators of the bivariate Hurst exponent

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2014-01-01

    Vol. 90, No. 6 (2014), Article 062802. ISSN 1539-3755 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: bivariate Hurst exponent * power-law cross-correlations * estimation Subject RIV: AH - Economics Impact factor: 2.288, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kristoufek-0436818.pdf

  19. Robust bivariate error detection in skewed data with application to historical radiosonde winds

    KAUST Repository

    Sun, Ying

    2017-01-18

    The global historical radiosonde archives date back to the 1920s and contain the only directly observed measurements of temperature, wind, and moisture in the upper atmosphere, but they contain many random errors. Most of the focus on cleaning these large datasets has been on temperatures, but winds are important inputs to climate models and in studies of wind climatology. The bivariate distribution of the wind vector does not have elliptical contours but is skewed and heavy-tailed, so we develop two methods for outlier detection based on the bivariate skew-t (BST) distribution, using either distance-based or contour-based approaches to flag observations as potential outliers. We develop a framework to robustly estimate the parameters of the BST and then show how the tuning parameter to get these estimates is chosen. In simulation, we compare our methods with one based on a bivariate normal distribution and a nonparametric approach based on the bagplot. We then apply all four methods to the winds observed for over 35,000 radiosonde launches at a single station and demonstrate differences in the number of observations flagged across eight pressure levels and through time. In this pilot study, the method based on the BST contours performs very well.
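The bivariate-normal baseline that the abstract compares against can be sketched as Mahalanobis-distance flagging against a chi-square(2) contour; the skew-t methods themselves are not reproduced here, and the data below are synthetic with one planted gross error.

```python
import numpy as np
from scipy.stats import chi2

# Synthetic "wind vector" data with one planted gross error at the end.
rng = np.random.default_rng(2)
n = 2_000
cov = np.array([[4.0, 1.5], [1.5, 1.0]])
winds = rng.multivariate_normal([0.0, 0.0], cov, size=n)
winds = np.vstack([winds, [40.0, -30.0]])

# Distance-based flagging under a bivariate normal model:
# squared Mahalanobis distance vs. a chi-square(2) quantile.
mu = winds.mean(axis=0)
S_inv = np.linalg.inv(np.cov(winds, rowvar=False))
centered = winds - mu
d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)

cutoff = chi2.ppf(0.999, df=2)           # 99.9% probability contour
flagged = np.where(d2 > cutoff)[0]
print(f"flagged {flagged.size} of {winds.shape[0]} observations")
```

For skewed, heavy-tailed wind data this normal-theory contour either over-flags one tail or misses the other, which is the motivation for the skew-t contours developed in the paper.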

  20. Robust bivariate error detection in skewed data with application to historical radiosonde winds

    KAUST Repository

    Sun, Ying; Hering, Amanda S.; Browning, Joshua M.

    2017-01-01

    The global historical radiosonde archives date back to the 1920s and contain the only directly observed measurements of temperature, wind, and moisture in the upper atmosphere, but they contain many random errors. Most of the focus on cleaning these large datasets has been on temperatures, but winds are important inputs to climate models and in studies of wind climatology. The bivariate distribution of the wind vector does not have elliptical contours but is skewed and heavy-tailed, so we develop two methods for outlier detection based on the bivariate skew-t (BST) distribution, using either distance-based or contour-based approaches to flag observations as potential outliers. We develop a framework to robustly estimate the parameters of the BST and then show how the tuning parameter to get these estimates is chosen. In simulation, we compare our methods with one based on a bivariate normal distribution and a nonparametric approach based on the bagplot. We then apply all four methods to the winds observed for over 35,000 radiosonde launches at a single station and demonstrate differences in the number of observations flagged across eight pressure levels and through time. In this pilot study, the method based on the BST contours performs very well.

  1. A bivariate process model for maintenance and inspection planning

    NARCIS (Netherlands)

    Newby, M.J.; Barker, C.T.

    2006-01-01

    The paper describes decision making about monitoring and maintenance of systems described by a general stochastic process. The system is monitored and preventive and corrective maintenance actions are carried out in response to the observed system state. The decision process is simplified by using

  2. Bivariate pointing movements on large touch screens: investigating the validity of a refined Fitts' Law.

    Science.gov (United States)

    Bützler, Jennifer; Vetter, Sebastian; Jochems, Nicole; Schlick, Christopher M

    2012-01-01

    On the basis of three empirical studies Fitts' Law was refined for bivariate pointing tasks on large touch screens. In the first study different target width parameters were investigated. The second study considered the effect of the motion angle. Based on the results of the two studies a refined model for movement time in human-computer interaction was formulated. A third study, which is described here in detail, concerns the validation of the refined model. For the validation study 20 subjects had to execute a bivariate pointing task on a large touch screen. In the experimental task 250 rectangular target objects were displayed at a randomly chosen position on the screen covering a broad range of ID values (ID = [1.01; 4.88]). Compared to existing refinements of Fitts' Law, the new model shows the highest predictive validity. A promising field of application of the model is the ergonomic design and evaluation of project management software. By using the refined model, software designers can calculate a priori the appropriate angular position and the size of buttons, menus or icons.
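For orientation, the classical univariate Shannon form of Fitts' Law that such refinements extend can be sketched as follows. The constants a and b are hypothetical device-specific values, and the bivariate target-width and motion-angle terms of the refined model are deliberately omitted.

```python
import math

# Shannon formulation: MT = a + b * ID, with ID = log2(A / W + 1),
# where A is movement amplitude and W is target width.
a, b = 0.1, 0.3          # seconds, seconds per bit (illustrative only)

def movement_time(amplitude_mm: float, width_mm: float) -> float:
    index_of_difficulty = math.log2(amplitude_mm / width_mm + 1)
    return a + b * index_of_difficulty

# The validation study covered roughly ID = [1.01, 4.88] bits.
for A, W in [(100, 100), (400, 30)]:
    print(f"A={A} W={W}: MT = {movement_time(A, W):.3f} s")
```

Small, distant targets (high ID) predict long movement times, which is why designers can size and place buttons a priori once a and b are calibrated for a device.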

  3. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
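One standard exact construction matching this description conditions the second variate on the first; the abstract does not say whether this is the paper's exact algorithm, so treat the snippet as a generic sketch of the technique.

```python
import numpy as np

def bivariate_normal_pair(rng, mu1, mu2, s1, s2, rho):
    """One (x, y) pair with the given means, std devs and correlation rho."""
    z1, z2 = rng.standard_normal(2)
    x = mu1 + s1 * z1
    # Conditional construction: y | x is normal, which yields corr(x, y) = rho.
    y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

rng = np.random.default_rng(3)
rho = 0.6
pairs = np.array([bivariate_normal_pair(rng, 1.0, -2.0, 2.0, 0.5, rho)
                  for _ in range(50_000)])
r_emp = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
print(f"empirical correlation: {r_emp:.3f}")   # should be close to 0.6
```

As in the abstract, the method is exact in theory; any residual error in the empirical correlation comes from the underlying uniform generator and finite-precision function evaluation.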

  4. Bivariate Drought Analysis Using Streamflow Reconstruction with Tree Ring Indices in the Sacramento Basin, California, USA

    Directory of Open Access Journals (Sweden)

    Jaewon Kwak

    2016-03-01

    Full Text Available Long-term streamflow data are vital for analysis of hydrological droughts. Using an artificial neural network (ANN) model and nine tree-ring indices, this study reconstructed the annual streamflow of the Sacramento River for the period from 1560 to 1871. Using the reconstructed streamflow data, the copula method was used for bivariate drought analysis, deriving a hydrological drought return period plot for the Sacramento River basin. Results showed strong correlation among drought characteristics, and the drought with a 20-year return period (17.2 million acre-feet (MAF) per year) in the Sacramento River basin could be considered a critical level of drought for water shortages.

  5. On the construction of bivariate exponential distributions with an arbitrary correlation coefficient

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    In this paper we use a concept of multivariate phase-type distributions to define a class of bivariate exponential distributions. This class has the following three appealing properties. Firstly, we may construct a pair of exponentially distributed random variables with any feasible correlation coefficient (also negative). Secondly, the class satisfies that any linear combination (projection) of the marginal random variables is a phase-type distribution. The latter property is potentially important for the development of hypothesis testing in linear models. Thirdly, it is very easy to simulate...

  6. A bivariate space-time downscaler under space and time misalignment.

    Science.gov (United States)

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M

    2010-12-01

    Ozone and particulate matter PM(2.5) are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally feasible over large spatial regions and long periods of time. Due to the association between concentration levels of the two environmental contaminants, it is expected that information regarding one will help to improve prediction of the other. Misalignment is an obvious issue, since the monitoring networks for the two contaminants only partly intersect and because the collection rate for PM(2.5) is typically less frequent than that for ozone. Extending previous work in Berrocal et al. (2009), we introduce a bivariate downscaler that provides a flexible class of bivariate space-time assimilation models. We discuss computational issues for model fitting and analyze a dataset for ozone and PM(2.5) for the ozone season during year 2002. We show a modest improvement in predictive performance, not surprising in a setting where we can anticipate only a small gain.

  7. Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.

    Science.gov (United States)

    Vetter, Thomas R; Mascha, Edward J

    2018-01-01

    Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including the so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus for comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent-samples t test is used to test the null hypothesis that the 2 population means are equal, thereby accepting the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. The
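The three tests discussed above have direct SciPy counterparts. A minimal sketch with simulated data (the group sizes, effect sizes, and table counts are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Unpaired t test: two independent groups with genuinely different means.
group_a = rng.normal(loc=0.0, scale=1.0, size=60)
group_b = rng.normal(loc=1.0, scale=1.0, size=60)
t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)

# Paired t test: before/after measurements on the same subjects.
before = rng.normal(loc=10.0, scale=2.0, size=40)
after = before - rng.normal(loc=1.0, scale=0.5, size=40)
t_paired, p_paired = stats.ttest_rel(before, after)

# Pearson chi-square test of independence on a 2x2 contingency table.
table = np.array([[30, 10], [15, 25]])
chi2_stat, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"unpaired p={p_unpaired:.4g}, paired p={p_paired:.4g}, chi2 p={p_chi2:.4g}")
```

Note that applying `ttest_ind` to the paired data, or running several `ttest_ind` calls across three or more groups, would reproduce exactly the mistakes the tutorial warns against.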

  8. Selection effects in the bivariate brightness distribution for spiral galaxies

    International Nuclear Information System (INIS)

    Phillipps, S.; Disney, M.

    1986-01-01

    The joint distribution of total luminosity and characteristic surface brightness (the bivariate brightness distribution) is investigated for a complete sample of spiral galaxies in the Virgo cluster. The influence of selection and physical limits of various kinds on the apparent distribution are detailed. While the distribution of surface brightness for bright galaxies may be genuinely fairly narrow, faint galaxies exist right across the (quite small) range of accessible surface brightnesses so no statement can be made about the true extent of the distribution. The lack of high surface brightness bright galaxies in the Virgo sample relative to an overall RC2 sample (mostly field galaxies) supports the contention that the star-formation rate is reduced in the inner region of the cluster for environmental reasons. (author)

  9. A public perspective on the adoption of microgeneration technologies in New Zealand: A multivariate probit approach

    International Nuclear Information System (INIS)

    Baskaran, Ramesh; Managi, Shunsuke; Bendig, Mirko

    2013-01-01

    The growing demand for electricity in New Zealand has led to the construction of new hydro-dams or power stations that have had environmental, social, and cultural effects. These effects may drive increases in electricity prices, as such prices reflect the cost of running existing power stations as well as building new ones. This study uses Canterbury and Central Otago as case studies because both regions face similar issues in building new hydro-dams and ever-increasing electricity prices that will eventually prompt households to buy power at higher prices. One way for households to respond to these price changes is to generate their own electricity through microgeneration technologies (MGT). The objective of this study is to investigate public perception and preferences regarding MGT and to analyze the factors that influence people's decision to adopt such new technologies in New Zealand. The study uses a multivariate probit approach to examine households' willingness to adopt any one MGT system or a combination of MGT systems. Our findings provide valuable information for policy makers and marketers who wish to promote effective microgeneration technologies. - Highlights: ► We examine New Zealand households' awareness of microgeneration technologies (MGT) and empirically explore the factors that determine people's willingness to adopt MGT. ► Households are interested in and willing to adopt MGT systems. ► Noticeable heterogeneity exists between groups of households in adopting MGT. ► No significant regional difference exists in promoting solar hot water policies. ► Public- and private-sector incentives are important in promoting MGT.

  10. Endogeneity and Heterogeneity in a Probit Demand Model: Estimation Using Aggregate Data

    OpenAIRE

    Pradeep K. Chintagunta

    2001-01-01

    Two issues that have become increasingly important while estimating the parameters of aggregate demand functions to study firm behavior are the endogeneity of marketing activities (typically, price) and the heterogeneity across consumers in the market under consideration. Ignoring these issues in the estimation of the demand function parameters can lead to biased and inconsistent estimates of the effects of marketing activities. Endogeneity and heterogeneity have achieved prominence in large measure because of the increa...

  11. Reasons for not buying a car : a probit-selection multinomial logit choice model

    NARCIS (Netherlands)

    Gao, Y.; Rasouli, S.; Timmermans, H.J.P.

    2014-01-01

    Generating and maintaining gradients of cell density and extracellular matrix (ECM) components is a prerequisite for the development of functionality of healthy tissue. Therefore, gaining insights into the drivers of spatial organization of cells and the role of ECM during tissue morphogenesis is

  12. Bayesian Analysis of Multilevel Probit Models for Data with Friendship Dependencies

    Science.gov (United States)

    Koskinen, Johan; Stenberg, Sten-Ake

    2012-01-01

    When studying educational aspirations of adolescents, it is unrealistic to assume that the aspirations of pupils are independent of those of their friends. Considerable attention has also been given to the study of peer influence in the educational and behavioral literature. Typically, in empirical studies, the friendship networks have either been…

  13. Regression analysis for bivariate gap time with missing first gap time data.

    Science.gov (United States)

    Huang, Chia-Hui; Chen, Yi-Hau

    2017-01-01

    We consider ordered bivariate gap times where data on the first gap time are unobservable. This study is motivated by the HIV infection and AIDS study, where the initial HIV contracting time is unavailable, but the diagnosis times for HIV and AIDS are available. We are interested in studying the risk factors for the gap time between initial HIV contraction and HIV diagnosis, and the gap time between HIV and AIDS diagnoses. Besides, the association between the two gap times is also of interest. Accordingly, in the data analysis we are faced with a two-fold complexity: data on the first gap time are completely missing, and the second gap time is subject to induced informative censoring due to dependence between the two gap times. We propose a modeling framework for regression analysis of bivariate gap times under this complexity of the data. The estimating equations for the covariate effects on, as well as the association between, the two gap times are derived through maximum likelihood and suitable counting processes. Large-sample properties of the resulting estimators are developed by martingale theory. Simulations are performed to examine the performance of the proposed analysis procedure. An application to data from the HIV and AIDS study mentioned above is reported for illustration.

  14. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and that from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  15. A method of moments to estimate bivariate survival functions: the copula approach

    Directory of Open Access Journals (Sweden)

    Silvia Angela Osmetti

    2013-05-01

    Full Text Available In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions for which it is difficult to find an explicit expression for the mixed moments. Moreover, it is preferred to the maximum likelihood approach for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
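The Marshall-Olkin construction and the spirit of the two-step moment idea can be sketched by simulation; the rates below are arbitrary illustrative values, and the second step shown (using the probability of a tie to recover the common-shock rate) is one simple moment-style identification, not necessarily the paper's exact estimator.

```python
import numpy as np

# Marshall-Olkin bivariate exponential: X = min(E1, E12), Y = min(E2, E12),
# with independent exponentials E1 ~ Exp(lam1), E2 ~ Exp(lam2), E12 ~ Exp(lam12).
rng = np.random.default_rng(7)
lam1, lam2, lam12 = 0.5, 1.0, 0.7
n = 200_000

e1 = rng.exponential(1 / lam1, n)
e2 = rng.exponential(1 / lam2, n)
e12 = rng.exponential(1 / lam12, n)
x, y = np.minimum(e1, e12), np.minimum(e2, e12)

# Step 1 (marginals): X ~ Exp(lam1 + lam12), so its rate is recovered
# from the sample mean; likewise for Y.
rate_x_hat = 1.0 / x.mean()
rate_y_hat = 1.0 / y.mean()

# Step 2 (dependence): the singular component gives ties,
# P(X == Y) = lam12 / (lam1 + lam2 + lam12), identifying the shock rate.
p_tie_hat = np.mean(x == y)
print(f"rate_x ~ {rate_x_hat:.3f} (true {lam1 + lam12}), "
      f"P(X=Y) ~ {p_tie_hat:.3f} (true {lam12 / (lam1 + lam2 + lam12):.3f})")
```

The positive mass on the diagonal {X = Y} is exactly the "not absolutely continuous" feature the abstract mentions, and it is what makes explicit likelihoods awkward while leaving moment-style estimators simple.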

  16. Modeling Unobserved Consideration Sets for Household Panel Data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans)

    2000-01-01

    We propose a new method to model consumers' consideration and choice processes. We develop a parsimonious probit-type model for consideration and a multinomial probit model for choice, given consideration. Unlike earlier models of consideration, ours is not prone to the curse of dimensionality.

  17. Bivariate Genomic Footprinting Detects Changes in Transcription Factor Activity

    Directory of Open Access Journals (Sweden)

    Songjoon Baek

    2017-05-01

    Full Text Available In response to activating signals, transcription factors (TFs) bind DNA and regulate gene expression. TF binding can be measured by protection of the bound sequence from DNase digestion (i.e., a footprint). Here, we report that 80% of TF binding motifs do not show a measurable footprint, partly because of a variable cleavage pattern within the motif sequence. To more faithfully portray the effect of TFs on chromatin, we developed an algorithm that captures two TF-dependent effects on chromatin accessibility: footprinting and motif-flanking accessibility. The algorithm, termed bivariate genomic footprinting (BaGFoot), efficiently detects TF activity. BaGFoot is robust to different accessibility assays (DNase-seq, ATAC-seq), all examined peak-calling programs, and a variety of cut bias correction approaches. BaGFoot reliably predicts TF binding and provides valuable information regarding the TFs affecting chromatin accessibility in various biological systems and following various biological events, including in cases where an absolute footprint cannot be determined.

  18. Preparation and bivariate analysis of suspensions of human chromosomes

    Energy Technology Data Exchange (ETDEWEB)

    van den Engh, G.J.; Trask, B.J.; Gray, J.W.; Langlois, R.G.; Yu, L.C.

    1985-01-01

    Chromosomes were isolated from a variety of human cell types using a HEPES-buffered hypotonic solution (pH 8.0) containing KCl, MgSO₄, dithioerythritol, and RNase. The chromosomes isolated by this procedure could be stained with a variety of fluorescent stains including propidium iodide, chromomycin A3, and Hoechst 33258. Addition of sodium citrate to the stained chromosomes was found to improve the total fluorescence resolution. High-quality bivariate Hoechst vs. chromomycin fluorescence distributions were obtained for chromosomes isolated from a human fibroblast cell strain, a human colon carcinoma cell line, and human peripheral blood lymphocyte cultures. Good flow karyotypes were also obtained from primary amniotic cell cultures. The Hoechst vs. chromomycin flow karyotypes of a given cell line, made at different times and at dye concentrations varying over fourfold ranges, show little variation in the relative peak positions of the chromosomes. The size of the DNA in chromosomes isolated using this procedure ranges from 20 to 50 kilobases. The described isolation procedure is simple, it yields high-quality flow karyotypes, and it can be used to prepare chromosomes from clinical samples. 22 references, 7 figures, 1 table.

  19. Epileptic seizure prediction based on a bivariate spectral power methodology.

    Science.gov (United States)

    Bandarabadi, Mojtaba; Teixeira, Cesar A; Direito, Bruno; Dourado, Antonio

    2012-01-01

    The spectral power of 5 frequently considered frequency bands (Alpha, Beta, Gamma, Theta and Delta) for 6 EEG channels is computed, and all possible pairwise combinations among this 30-feature set are then used to create a 435-dimensional feature space. Two new feature selection methods are introduced to choose the best candidate features among those and to reduce the dimensionality of this feature space. The selected features are then fed to Support Vector Machines (SVMs) that classify the cerebral state into preictal and non-preictal classes. The outputs of the SVM are regularized using a method that accounts for the classification dynamics of the preictal class, also known as the "Firing Power" method. The results obtained using our feature selection approaches are compared with those obtained using the minimum Redundancy Maximum Relevance (mRMR) feature selection method. The results in a group of 12 patients of the EPILEPSIAE database, containing 46 seizures and 787 hours of multichannel recording as out-of-sample data, indicate the efficiency of the bivariate approach as well as of the two new feature selection methods. The best results presented a sensitivity of 76.09% (35 of 46 seizures predicted) and a false prediction rate of 0.15 h⁻¹.
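
    The construction of the 435-dimensional feature space (5 bands × 6 channels = 30 spectral powers, then all C(30, 2) = 435 pairs) can be sketched as follows. The sampling rate, surrogate EEG, and the choice of the power ratio as the pairwise feature are illustrative assumptions; the abstract does not specify how the pairs are combined.

```python
import numpy as np
from itertools import combinations
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 256                                  # assumed sampling rate (Hz)
eeg = rng.standard_normal((6, 30 * fs))   # 6 channels, 30 s of surrogate EEG

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

# 6 channels x 5 bands = 30 univariate spectral-power features
powers = []
for ch in eeg:
    f, pxx = welch(ch, fs=fs, nperseg=2 * fs)
    for lo, hi in bands.values():
        powers.append(pxx[(f >= lo) & (f < hi)].sum())
powers = np.array(powers)

# All C(30, 2) = 435 pairwise combinations; the power ratio used here as
# the bivariate feature is an illustrative choice, not necessarily the paper's.
features = np.array([powers[i] / powers[j] for i, j in combinations(range(30), 2)])
print(features.shape)  # (435,)
```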

  20. Bivariate Cointegration Analysis of Energy-Economy Interactions in Iran

    Directory of Open Access Journals (Sweden)

    Ismail Oladimeji Soile

    2015-12-01

    Full Text Available Fixing the prices of energy products below their opportunity cost for welfare and redistribution purposes is common among governments of many oil-producing developing countries. This has often resulted in huge energy consumption in these countries, and the question that emerges is whether this increased energy consumption results in higher economic activity. Available statistics show that Iran's economic growth shrank for the first time in two decades from 2011, amidst the introduction of pricing reforms in 2010 and 2014, suggesting a relationship between energy use and economic growth. Accordingly, the study examined the causality and the likelihood of a long-term relationship between energy and economic growth in Iran. Unlike previous studies, which have focused on the effects and effectiveness of the reform, this paper investigates the rationale for the reform. The study applied a bivariate cointegration time-series econometric approach. The results reveal a one-way causality running from economic growth to energy, with no feedback, and evidence of a long-run connection. The implication is that an energy conservation policy is not inimical to economic growth. This evidence lends further support to the ongoing subsidy reforms in Iran as a measure to check excessive and inefficient use of energy.
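
    The bivariate cointegration logic can be sketched with the classic Engle-Granger two-step procedure on synthetic series (not the paper's Iranian data). The Dickey-Fuller-style statistic is computed directly; tabulated critical values are omitted, so this is a sketch of the mechanics rather than a full test.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
energy = np.cumsum(rng.standard_normal(n))      # I(1) surrogate "energy use"
gdp = 0.8 * energy + rng.standard_normal(n)     # cointegrated surrogate "GDP"

# Step 1: cointegrating regression gdp = a + b * energy + e
X = np.column_stack([np.ones(n), energy])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)
resid = gdp - X @ beta

# Step 2: Dickey-Fuller-style regression on the residuals:
# delta e_t = rho * e_{t-1} + u_t; rho significantly below zero suggests
# stationary residuals, i.e. cointegration (critical values omitted here).
de = np.diff(resid)
lag = resid[:-1]
rho = (lag @ de) / (lag @ lag)
se = np.sqrt(((de - rho * lag) ** 2).mean() / (lag @ lag))
t_stat = rho / se
print(beta, rho, t_stat)
```

    With strongly cointegrated series, the residual regression produces a large negative t-statistic; with independent random walks, it does not.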

  1. Which global stock indices trigger stronger contagion risk in the Vietnamese stock market? Evidence using a bivariate analysis

    OpenAIRE

    Wang Kuan-Min; Lai Hung-Cheng

    2013-01-01

    This paper extends recent investigations into risk contagion effects on stock markets to the Vietnamese stock market. Daily data spanning October 9, 2006 to May 3, 2012 are sourced to empirically validate the contagion effects between stock markets in Vietnam, and China, Japan, Singapore, and the US. To facilitate the validation of contagion effects with market-related coefficients, this paper constructs a bivariate EGARCH model of dynamic conditional correlation coefficients. Using the...

  2. Bivariate return periods of temperature and precipitation explain a large fraction of European crop yields

    Directory of Open Access Journals (Sweden)

    J. Zscheischler

    2017-07-01

    Full Text Available Crops are vital for human society. Crop yields vary with climate, and it is important to understand how climate and crop yields are linked to ensure future food security. Temperature and precipitation are among the key driving factors of crop yield variability. Previous studies have mostly investigated linear relationships between temperature, precipitation, and crop yield variability. Other research has highlighted the adverse impacts of climate extremes, such as drought and heat waves, on crop yields. Impacts are, however, often non-linearly related to multivariate climate conditions. Here we derive bivariate return periods of climate conditions as indicators for climate variability along different temperature–precipitation gradients. We show that in Europe, linear models based on bivariate return periods of specific climate conditions explain on average significantly more crop yield variability (42 %) than models relying directly on temperature and precipitation as predictors (36 %). Our results demonstrate that most often crop yields increase along a gradient from hot and dry to cold and wet conditions, with lower yields associated with hot and dry periods. The majority of crops are most sensitive to climate conditions in summer and to maximum temperatures. The use of bivariate return periods allows the integration of non-linear impacts into climate–crop yield analysis. This offers new avenues to study the link between climate and crop yield variability and suggests that they are possibly more strongly related than what is inferred from conventional linear models.

  3. Asymptotics of bivariate generating functions with algebraic singularities

    Science.gov (United States)

    Greenwood, Torin

    Flajolet and Odlyzko (1990) derived asymptotic formulae for the coefficients of a class of univariate generating functions with algebraic singularities. Gao and Richmond (1992) and Hwang (1996, 1998) extended these results to classes of multivariate generating functions, in both cases by reducing to the univariate case. Pemantle and Wilson (2013) outlined new multivariate analytic techniques and used them to analyze the coefficients of rational generating functions. After overviewing these methods, we use them to find asymptotic formulae for the coefficients of a broad class of bivariate generating functions with algebraic singularities. Beginning with the Cauchy integral formula, we explicitly deform the contour of integration so that it hugs a set of critical points. The asymptotic contribution to the integral comes from analyzing the integrand near these points, leading to explicit asymptotic formulae. Next, we use this formula to analyze an example from current research. In the following chapter, we apply multivariate analytic techniques to quantum walks. Bressler and Pemantle (2007) found a (d + 1)-dimensional rational generating function whose coefficients described the amplitude of a particle at a position in the integer lattice after n steps. Here, the minimal critical points form a curve on the (d + 1)-dimensional unit torus. We find asymptotic formulae for the amplitude of a particle in a given position, normalized by the number of steps n, as n approaches infinity. Each critical point contributes to the asymptotics for a specific normalized position. Using Groebner bases in Maple again, we compute the explicit locations of peak amplitudes. In a scaling window of size the square root of n near the peaks, each amplitude is asymptotic to an Airy function.
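
    The univariate starting point referenced above can be stated concisely; this is the standard Flajolet-Odlyzko transfer result, recalled here for orientation rather than taken from the thesis itself.

```latex
% Cauchy coefficient formula and the basic algebraic-singularity transfer
[z^n] f(z) = \frac{1}{2\pi i} \oint \frac{f(z)}{z^{n+1}} \, dz,
\qquad
f(z) = \Bigl(1 - \frac{z}{\rho}\Bigr)^{-\alpha}
\;\Longrightarrow\;
[z^n] f(z) \sim \frac{n^{\alpha - 1}}{\Gamma(\alpha)}\, \rho^{-n},
\quad \alpha \notin \{0, -1, -2, \dots\}.
```

    The bivariate analysis generalizes exactly this step: the contour in the Cauchy integral is deformed toward the critical points, and the local behavior of the integrand there supplies the asymptotics.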

  4. Historical and future drought in Bangladesh using copula-based bivariate regional frequency analysis

    Science.gov (United States)

    Mortuza, Md Rubayet; Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2018-02-01

    The study aims at regional and probabilistic evaluation of bivariate drought characteristics to assess both the past and future drought duration and severity in Bangladesh. The procedures involve applying (1) the standardized precipitation index to identify drought duration and severity, (2) regional frequency analysis to determine the appropriate marginal distributions for both duration and severity, (3) a copula model to estimate the joint probability distribution of drought duration and severity, and (4) precipitation projections from multiple climate models to assess future drought trends. Since drought duration and severity in Bangladesh are often strongly correlated and do not follow the same marginal distributions, the joint and conditional return periods of droughts are characterized using the copula-based joint distribution. The country is divided into three homogeneous regions using fuzzy clustering and multivariate discordancy and homogeneity measures. For given severity and duration values, the joint return periods for a drought to exceed both values are on average 45% larger, while those to exceed either value are on average 40% smaller, than the return periods from the univariate frequency analysis, which treats drought duration and severity independently. This suggests that, compared to the bivariate drought frequency analysis, the standard univariate frequency analysis under- or overestimates the frequency and severity of droughts depending on how their duration and severity are related. Overall, more frequent and severe droughts are observed in the west side of the country. Future drought trends based on four climate models and two scenarios showed the possibility of less frequent drought in the future (2020-2100) than in the past (1961-2010).
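
    The ordering of joint "AND", univariate, and joint "OR" return periods described above follows directly from the copula-based definitions. The sketch below uses a Gumbel-Hougaard copula with illustrative numbers (mean interarrival time, dependence parameter, and marginal CDF values are assumptions, not values from the study).

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

# Illustrative numbers, not values from the paper:
mu = 1.0           # mean interarrival time between drought events (years)
theta = 2.0        # copula dependence parameter
Fd, Fs = 0.9, 0.9  # marginal CDF values of duration and severity at (d, s)

C = gumbel_copula(Fd, Fs, theta)
T_or = mu / (1 - C)                   # return period of D > d OR  S > s
T_and = mu / (1 - Fd - Fs + C)        # return period of D > d AND S > s
T_uni = mu / (1 - Fd)                 # univariate return period for duration
print(T_or, T_uni, T_and)
```

    For any copula, T_or <= T_uni <= T_and, which is exactly why the univariate analysis under- or overestimates drought frequency depending on which joint event is of interest.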

  5. Immigrants' language skills: the immigrant experience in a longitudinal survey

    OpenAIRE

    Barry CHISWICK; Yew LEE; Paul W. MILLER

    2003-01-01

    This paper is concerned with the determinants of English language proficiency among immigrants. It presents a model based on economic incentives, exposure, and efficiency in language acquisition, which it tests using the Longitudinal Survey of Immigrants to Australia. Probit and bivariate probit analyses are employed. The hypotheses are supported by the data. The bivariate probit analysis across waves indicates a "regression to the mean" in the unobserved components of English language profic...

  6. A power study of bivariate LOD score analysis of a complex trait and fear/discomfort with strangers.

    Science.gov (United States)

    Ji, Fei; Lee, Dayoung; Mendell, Nancy Role

    2005-12-30

    Complex diseases are often reported along with disease-related traits (DRT). Sometimes investigators consider both disease and DRT phenotypes separately and sometimes they consider individuals as affected if they have either the disease or the DRT, or both. We propose instead to consider the joint distribution of the disease and the DRT and do a linkage analysis assuming a pleiotropic model. We evaluated our results through analysis of the simulated datasets provided by Genetic Analysis Workshop 14. We first conducted univariate linkage analysis of the simulated disease, Kofendrerd Personality Disorder and one of its simulated associated traits, phenotype b (fear/discomfort with strangers). Subsequently, we considered the bivariate phenotype, which combined the information on Kofendrerd Personality Disorder and fear/discomfort with strangers. We developed a program to perform bivariate linkage analysis using an extension to the Elston-Stewart peeling method of likelihood calculation. Using this program we considered the microsatellites within 30 cM of the gene pleiotropic for this simulated disease and DRT. Based on 100 simulations of 300 families we observed excellent power to detect linkage within 10 cM of the disease locus using the DRT and the bivariate trait.

  7. A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.

    Science.gov (United States)

    Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue

    2014-02-01

    Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions-Social Self-Regulation and Dynamism-provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model serving to link diverse theoretical models and associated research literatures. © 2013 Wiley Periodicals, Inc.

  8. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    Keywords: shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model. The model considers a system subject to two types of shocks: one type is failure shock, and the other type is damage shock.

  9. Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain

    OpenAIRE

    Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D.

    2013-01-01

    In this paper, MMSE estimator is employed for noise-free 3D OCT data recovery in 3D complex wavelet domain. Since the proposed distribution for noise-free data plays a key role in the performance of MMSE estimator, a priori distribution for the pdf of noise-free 3D complex wavelet coefficients is proposed which is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters which are able to capture th...

  10. An assessment on the use of bivariate, multivariate and soft computing techniques for collapse susceptibility in GIS environ

    Science.gov (United States)

    Yilmaz, Işik; Marschalko, Marian; Bednarik, Martin

    2013-04-01

    The paper presented herein compares and discusses the use of bivariate, multivariate and soft computing techniques for collapse susceptibility modelling. Conditional probability (CP), logistic regression (LR) and artificial neural network (ANN) models, representing the bivariate, multivariate and soft computing techniques respectively, were used in GIS-based collapse susceptibility mapping of an area in the Sivas basin (Turkey). Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the models and compared by means of their validations. Although the Area Under Curve (AUC) values obtained from all three models showed that the map obtained from the soft computing (ANN) model appears more accurate than the others, the accuracies of all three models can be considered broadly similar. The results also showed that conditional probability is a reliable method for the preparation of collapse susceptibility maps and is highly compatible with GIS operating features.
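
    The bivariate conditional-probability step can be sketched on a synthetic single-factor grid. The factor classes and probabilities below are invented for illustration; the study's actual factors (slope, TWI, NDVI, etc.) would each be treated this way and the class weights summed per map cell.

```python
import numpy as np

rng = np.random.default_rng(3)

# Surrogate grid: one categorical causative factor (e.g. slope class 0-3)
# and a binary collapse inventory; synthetic data, not from the Sivas basin.
n = 10_000
slope_class = rng.integers(0, 4, size=n)
# Make collapse more likely in higher slope classes
p = np.array([0.01, 0.03, 0.08, 0.15])[slope_class]
collapse = rng.random(n) < p

prior = collapse.mean()  # P(collapse) over the whole area
# Conditional-probability (frequency-ratio) weight per factor class:
# FR = P(collapse | class) / P(collapse)
weights = {c: collapse[slope_class == c].mean() / prior for c in range(4)}
print(weights)
```

    Weights above 1 mark classes where collapse is over-represented relative to the regional prior; those below 1 mark under-represented classes.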

  11. On the matched pairs sign test using bivariate ranked set sampling ...

    African Journals Online (AJOL)

    A matched pairs sign test based on bivariate ranked set sampling (BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.

  12. Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain.

    Science.gov (United States)

    Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D

    2013-01-01

    In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the assumed distribution of the noise-free data plays a key role in the performance of an MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained, based on using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
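
    The core idea of an MMSE estimator with a locally parameterized Gaussian prior can be shown in a greatly simplified form: a real-valued, univariate, 1-D analogue (adaptive Wiener shrinkage with windowed variance estimation), not the paper's bivariate complex-wavelet mixture or its anisotropic windows. The signal, noise level, and window size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise-constant 1-D surrogate "signal" plus Gaussian noise
x = np.concatenate([np.zeros(200), np.ones(200) * 3, np.zeros(200)])
sigma_n = 0.5
y = x + rng.normal(0, sigma_n, x.size)

# Local MMSE (adaptive Wiener) shrinkage: under a zero-mean Gaussian prior
# with locally estimated variance s2, E[x | y] = s2 / (s2 + sigma_n^2) * y.
win = 15
pad = np.pad(y, win // 2, mode="reflect")
local_e2 = np.convolve(pad ** 2, np.ones(win) / win, mode="valid")
s2 = np.maximum(local_e2 - sigma_n ** 2, 0)   # local signal-variance estimate
x_hat = s2 / (s2 + sigma_n ** 2) * y

print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))
```

    The windowed variance estimate is what makes the shrinkage locally adaptive: flat regions are shrunk hard, high-energy regions are left nearly intact.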

  13. Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Hossein Rabbani

    2013-01-01

    Full Text Available In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the assumed distribution of the noise-free data plays a key role in the performance of an MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained, based on using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.

  14. Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)

    Science.gov (United States)

    Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.

    2013-12-01

    We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.
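
    The tail-dependence concept used above has a simple empirical estimator: the fraction of days on which the model exceeds its high quantile, given that the observations exceed theirs. The surrogate series below are invented to show the contrast between a tail-dependent and an independent model-observation pair.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Surrogate daily-precipitation series: one model is tail-dependent on the
# observations via a common shock; the other is independent of them.
shock = rng.exponential(size=n)
obs = shock + 0.3 * rng.exponential(size=n)
model_dep = shock + 0.3 * rng.exponential(size=n)
model_ind = rng.exponential(size=n) + 0.3 * rng.exponential(size=n)

def chi_hat(a, b, q=0.95):
    """Empirical tail-dependence estimate: P(b exceeds its q-quantile
    given that a exceeds its q-quantile)."""
    ta, tb = np.quantile(a, q), np.quantile(b, q)
    return np.mean(b[a > ta] > tb)

print(chi_hat(obs, model_dep), chi_hat(obs, model_ind))
```

    For an independent pair, chi_hat tends to 1 - q (here 0.05) as the sample grows; values well above that indicate that the model's largest events co-occur with the observed ones.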

  15. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.

    2012-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  16. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi

    2012-08-27

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  17. THE BIVARIATE SIZE-LUMINOSITY RELATIONS FOR LYMAN BREAK GALAXIES AT z {approx} 4-5

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Kuang-Han; Su, Jian [Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Ravindranath, Swara, E-mail: kuanghan@pha.jhu.edu [The Inter-University Center for Astronomy and Astrophysics, Pune University Campus, Pune 411007, Maharashtra (India)

    2013-03-01

    We study the bivariate size-luminosity distribution of Lyman break galaxies (LBGs) selected at redshifts around 4 and 5 in the GOODS and HUDF fields. We model the size-luminosity distribution as a combination of a log-normal distribution (in size) and a Schechter function (in luminosity), which enables a more detailed study of the selection effects. We perform extensive simulations to quantify the dropout-selection completenesses and the measurement biases and uncertainties in two-dimensional size and magnitude bins, and transform the theoretical size-luminosity distribution into the expected distribution for the observed data. Using a maximum-likelihood estimator, we find that the Schechter function parameters for B435-dropouts are consistent with the values in the literature, but the size distributions are wider than expected from the angular momentum distribution of the underlying dark matter halos. The slope of the size-luminosity (RL) relation is similar to those found for local disk galaxies, but considerably shallower than for local early-type galaxies.

  18. THE BIVARIATE SIZE-LUMINOSITY RELATIONS FOR LYMAN BREAK GALAXIES AT z ∼ 4-5

    International Nuclear Information System (INIS)

    Huang, Kuang-Han; Su, Jian; Ferguson, Henry C.; Ravindranath, Swara

    2013-01-01

    We study the bivariate size-luminosity distribution of Lyman break galaxies (LBGs) selected at redshifts around 4 and 5 in the GOODS and HUDF fields. We model the size-luminosity distribution as a combination of a log-normal distribution (in size) and a Schechter function (in luminosity), which enables a more detailed study of the selection effects. We perform extensive simulations to quantify the dropout-selection completenesses and the measurement biases and uncertainties in two-dimensional size and magnitude bins, and transform the theoretical size-luminosity distribution into the expected distribution for the observed data. Using a maximum-likelihood estimator, we find that the Schechter function parameters for B435-dropouts are consistent with the values in the literature, but the size distributions are wider than expected from the angular momentum distribution of the underlying dark matter halos. The slope of the size-luminosity (RL) relation is similar to those found for local disk galaxies, but considerably shallower than for local early-type galaxies.
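
    The "Schechter in luminosity times log-normal in size" construction can be written down directly. All parameter values below are illustrative assumptions, not the paper's fits; the point is only the factorized form, in which each fixed-luminosity slice is a proper log-normal density in size.

```python
import numpy as np

# Schechter function in luminosity times a log-normal in size whose
# median follows an assumed size-luminosity relation r0 * (L/L*)**beta.
phi_star, L_star, alpha = 1.0, 1.0, -1.6
r0, beta, sigma_lnr = 1.0, 0.3, 0.5

def joint_pdf(r, L):
    schechter = phi_star * (L / L_star) ** alpha * np.exp(-L / L_star) / L_star
    mu = np.log(r0 * (L / L_star) ** beta)   # log-median size at luminosity L
    lognorm = np.exp(-((np.log(r) - mu) ** 2) / (2 * sigma_lnr ** 2)) / (
        r * sigma_lnr * np.sqrt(2 * np.pi))
    return schechter * lognorm

# At fixed L, integrating over size recovers the Schechter value, since the
# log-normal factor integrates to 1 in r.
r = np.linspace(1e-3, 50, 200_000)
L = 0.5
slice_integral = np.sum(joint_pdf(r, L)) * (r[1] - r[0])
schechter_at_L = phi_star * (L / L_star) ** alpha * np.exp(-L / L_star) / L_star
print(slice_integral, schechter_at_L)
```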

  19. Bivariate frequency analysis of rainfall intensity and duration for urban stormwater infrastructure design

    Science.gov (United States)

    Jun, Changhyun; Qin, Xiaosheng; Gan, Thian Yew; Tung, Yeou-Koung; De Michele, Carlo

    2017-10-01

    This study presents a storm-event-based bivariate frequency analysis approach to determine design rainfalls in which the number, intensity and duration of actual rainstorm events are considered. To derive more realistic design storms, the occurrence probability of an individual rainstorm event was determined from the joint distribution of storm intensity and duration through a copula model. Hourly rainfall data were used from three climate stations located in Singapore, South Korea and Canada, respectively. It was found that the proposed approach could give a more realistic description of the rainfall characteristics of rainstorm events and of design rainfalls. As a result, the design rainfall quantities derived from actual rainstorm events at the three studied sites are consistently lower than those obtained from the conventional rainfall depth-duration-frequency (DDF) method, especially for short-duration storms (such as 1 h). This stems from accounting for the occurrence probability of each rainstorm event and from taking a different angle on rainfall frequency analysis, and it could offer an alternative way of describing extreme rainfall properties and potentially help improve the hydrologic design of stormwater management facilities in urban areas.

  20. Bivariable analysis of ventricular late potentials in high resolution ECG records

    International Nuclear Information System (INIS)

    Orosco, L; Laciar, E

    2007-01-01

    In this study, bivariable analysis for the detection of ventricular late potentials in high-resolution electrocardiographic records is proposed. The standard time-domain analysis and the application of the time-frequency technique to high-resolution ECG records are briefly described, as well as their corresponding results. In the proposed technique, the time-domain parameter QRSD and the most significant time-frequency index, EN QRS, are used as variables, and a bivariable index combining these parameters is defined. The proposed technique allows evaluating the risk of ventricular tachycardia in post-myocardial-infarction patients. The results show that the bivariable index discriminates between the population of patients with ventricular tachycardia and the subjects of the control group. It was also found that the bivariable technique performs well as a diagnostic test. It is concluded that, as a diagnostic test, the bivariable technique is superior to the time-domain method and the time-frequency technique evaluated individually

  1. A view on coupled cluster perturbation theory using a bivariational Lagrangian formulation.

    Science.gov (United States)

    Kristensen, Kasper; Eriksen, Janus J; Matthews, Devin A; Olsen, Jeppe; Jørgensen, Poul

    2016-02-14

    We consider two distinct coupled cluster (CC) perturbation series that both expand the difference between the energies of the CCSD (CC with single and double excitations) and CCSDT (CC with single, double, and triple excitations) models in orders of the Møller-Plesset fluctuation potential. We initially introduce the E-CCSD(T-n) series, in which the CCSD amplitude equations are satisfied at the expansion point, and compare it to the recently developed CCSD(T-n) series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)], in which not only the CCSD amplitude, but also the CCSD multiplier equations are satisfied at the expansion point. The computational scaling is similar for the two series, and both are term-wise size extensive with a formal convergence towards the CCSDT target energy. However, the two series are different, and the CCSD(T-n) series is found to exhibit a more rapid convergence up through the series, which we trace back to the fact that more information at the expansion point is utilized than for the E-CCSD(T-n) series. The present analysis can be generalized to any perturbation expansion representing the difference between a parent CC model and a higher-level target CC model. In general, we demonstrate that, whenever the parent parameters depend upon the perturbation operator, a perturbation expansion of the CC energy (where only parent amplitudes are used) differs from a perturbation expansion of the CC Lagrangian (where both parent amplitudes and parent multipliers are used). For the latter case, the bivariational Lagrangian formulation becomes more than a convenient mathematical tool, since it facilitates a different and faster convergent perturbation series than the simpler energy-based expansion.

  2. Estimating twin concordance for bivariate competing risks twin data

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus K.; Hjelmborg, Jacob B.

    2014-01-01

    For twin time-to-event data, we consider different concordance probabilities, such as the casewise concordance that are routinely computed as a measure of the lifetime dependence/correlation for specific diseases. The concordance probability here is the probability that both twins have experience...... events with the competing risk death. We thus aim to quantify the degree of dependence through the casewise concordance function and show a significant genetic component...... the event of interest. Under the assumption that both twins are censored at the same time, we show how to estimate this probability in the presence of right censoring, and as a consequence, we can then estimate the casewise twin concordance. In addition, we can model the magnitude of within pair dependence...... over time, and covariates may be further influential on the marginal risk and dependence structure. We establish the estimators' large-sample properties and suggest various tests, for example, for inferring familial influence. The method is demonstrated and motivated by specific twin data on cancer...

  3. Causal networks clarify productivity-richness interrelations, bivariate plots do not

    Science.gov (United States)

    Grace, James B.; Adler, Peter B.; Harpole, W. Stanley; Borer, Elizabeth T.; Seabloom, Eric W.

    2014-01-01

    Perhaps no other pair of variables in ecology has generated as much discussion as species richness and ecosystem productivity, as illustrated by the reactions by Pierce (2013) and others to Adler et al.'s (2011) report that empirical patterns are weak and inconsistent. Adler et al. (2011) argued we need to move beyond a focus on simplistic bivariate relationships and test mechanistic, multivariate causal hypotheses. We feel the continuing debate over productivity–richness relationships (PRRs) provides a focused context for illustrating the fundamental difficulties of using bivariate relationships to gain scientific understanding.

  4. Using bivariate latent basis growth curve analysis to better understand treatment outcome in youth with anorexia nervosa.

    Science.gov (United States)

    Byrne, Catherine E; Wonderlich, Joseph A; Curby, Timothy; Fischer, Sarah; Lock, James; Le Grange, Daniel

    2018-04-25

    This study explored the relation between eating-related obsessionality and weight restoration utilizing bivariate latent basis growth curve modelling. Eating-related obsessionality is a moderator of treatment outcome for adolescents with anorexia nervosa (AN). This study examined the degree to which the rate of change in eating-related obsessionality was associated with the rate of change in weight over time in family-based treatment (FBT) and individual therapy for AN. Data were drawn from a 2-site randomized controlled trial that compared FBT and adolescent focused therapy for AN. Bivariate latent basis growth curves were used to examine the differences of the relations between trajectories of body weight and symptoms associated with eating and weight obsessionality. In the FBT group, the slope of eating-related obsessionality scores and the slope of weight were significantly (negatively) correlated. This finding indicates that a decrease in overall eating-related obsessionality is significantly associated with an increase in weight for individuals who received FBT. However, there was no relation between change in obsessionality scores and change in weight in the adolescent focused therapy group. Results suggest that FBT has a specific impact on both weight gain and obsessive compulsive behaviour that is distinct from individual therapy. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.

  5. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
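
    A minimal sketch of the kernel-density / maximal-posterior classification scheme, on synthetic 2-D features standing in for the VAG feature vectors (the data, equal priors, and default bandwidths here are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic 2-D feature vectors standing in for the two VAG signal groups.
normal = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
abnormal = rng.normal(loc=[1.5, 1.5], scale=0.5, size=(200, 2))

# gaussian_kde expects shape (n_dims, n_samples).
kde_normal = gaussian_kde(normal.T)
kde_abnormal = gaussian_kde(abnormal.T)
priors = (0.5, 0.5)

def classify(x):
    """Maximal posterior probability decision: argmax of prior * estimated density."""
    post = (priors[0] * kde_normal(x)[0], priors[1] * kde_abnormal(x)[0])
    return int(np.argmax(post))  # 0 = normal, 1 = abnormal
```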

  6. Semi-automated detection of aberrant chromosomes in bivariate flow karyotypes

    NARCIS (Netherlands)

    Boschman, G. A.; Manders, E. M.; Rens, W.; Slater, R.; Aten, J. A.

    1992-01-01

    A method is described that is designed to compare, in a standardized procedure, bivariate flow karyotypes of Hoechst 33258 (HO)/Chromomycin A3 (CA) stained human chromosomes from cells with aberrations with a reference flow karyotype of normal chromosomes. In addition to uniform normalization of

  7. Carbon and oxygen isotopic ratio bi-variate distribution for marble artifacts quarry assignment

    International Nuclear Information System (INIS)

    Pentia, M.

    1995-01-01

    A statistical description of the ¹³C/¹²C and ¹⁸O/¹⁶O isotopic ratios in ancient marble quarries by a bivariate Gaussian probability distribution is given, and a new method for obtaining the confidence level of quarry assignment for marble artifacts is presented. (author) 8 figs., 3 tabs., 4 refs
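
    The quarry-assignment idea can be sketched as follows: assuming a fitted bivariate Gaussian for a quarry's isotopic signature, the confidence level of assigning an artifact follows from the squared Mahalanobis distance, which is chi-square with 2 degrees of freedom (function names are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def quarry_confidence(x, mean, cov):
    """Confidence level at which an artifact's isotopic signature x (the
    13C/12C and 18O/16O ratios) is compatible with a quarry's bivariate
    Gaussian; the squared Mahalanobis distance is chi-square with 2 dof."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    m2 = d @ np.linalg.inv(cov) @ d
    return 1.0 - chi2.cdf(m2, df=2)
```

    An artifact would then be assigned to the quarry (or quarries) for which this confidence level exceeds a chosen threshold.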

  8. Technical note: Towards a continuous classification of climate using bivariate colour mapping

    NARCIS (Netherlands)

    Teuling, A.J.

    2011-01-01

    Climate is often defined in terms of discrete classes. Here I use bivariate colour mapping to show that the global distribution of Köppen-Geiger climate classes can largely be reproduced by combining the simple means of two key states of the climate system (i.e., air temperature and relative

  9. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM

    Science.gov (United States)

    Warner, Rebecca M.

    2007-01-01

    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  10. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the
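
    The paper's approximation itself is not reproduced here, but the quantity it targets, the bivariate standard-normal CDF at a given correlation, can be evaluated numerically as a reference, e.g.:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bvn_cdf(h, k, rho):
    """Bivariate standard-normal CDF P(X <= h, Y <= k) at correlation rho,
    evaluated numerically; any closed-form approximation for large rho can be
    checked against this value."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([h, k]))
```

    At the origin the exact value is 1/4 + arcsin(rho)/(2π), a convenient spot check as rho grows.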

  11. Effect of catchment properties and flood generation regime on copula selection for bivariate flood frequency analysis

    Science.gov (United States)

    Filipova, Valeriya; Lawrence, Deborah; Klempe, Harald

    2018-02-01

    Applying copula-based bivariate flood frequency analysis is advantageous because the results provide information on both the flood peak and volume. More data are, however, required for such an analysis, and it is often the case that only data series with a limited record length are available. To overcome this issue of limited record length, data regarding climatic and geomorphological properties can be used to complement statistical methods. In this paper, we present a study of 27 catchments located throughout Norway, in which we assess whether catchment properties, flood generation processes and flood regime have an effect on the correlation between flood peak and volume and, in turn, on the selection of copulas. To achieve this, the annual maximum flood events were first classified into events generated primarily by rainfall, snowmelt or a combination of these. The catchments were then classified into flood regime, depending on the predominant flood generation process producing the annual maximum flood events. A contingency table and Fisher's exact test were used to determine the factors that affect the selection of copulas in the study area. The results show that the two-parameter copulas BB1 and BB7 are more commonly selected in catchments with high steepness, high mean annual runoff and rainfall flood regime. These findings suggest that in these types of catchments, the dependence structure between flood peak and volume is more complex and cannot be modeled effectively using a one-parameter copula. The results illustrate that by relating copula types to flood regime and catchment properties, additional information can be supplied for selecting copulas in catchments with limited data.

  12. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of the within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and the robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the

  13. A multinomial-logit ordered-probit model for jointly analyzing crash avoidance maneuvers and crash severity

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    ' propensity to engage in various corrective maneuvers in the case of the critical event of vehicle travelling. Five lateral and speed control maneuvers are considered: “braking”, “steering”, “braking & steering”, and “other maneuvers”, in addition to a “no action” option. The analyzed data are retrieved from...... the United States National Automotive Sampling System General Estimates System (GES) crash database for the years 2005-2009. Results show (i) the correlation between crash avoidance maneuvers and crash severity, and (ii) the link between drivers' attributes, risky driving behavior, road characteristics...

  14. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    Science.gov (United States)

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  15. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
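
    A much-simplified sketch of concordance counting for interval-censored pairs: here a pair of subjects contributes only when both coordinate intervals are strictly ordered, so concordance is unambiguous, whereas the paper's score-based modification instead replaces the ambiguous pairs by their expected concordance; the simplification and names below are ours.

```python
from itertools import combinations

def order(a, b):
    """-1 if interval a lies wholly below interval b, +1 if wholly above,
    0 if the two intervals overlap (ordering ambiguous)."""
    if a[1] < b[0]:
        return -1
    if b[1] < a[0]:
        return 1
    return 0

def interval_tau(x_intervals, y_intervals):
    """Kendall's tau restricted to the unambiguous pairs of interval-censored
    observations; overlapping pairs are simply skipped."""
    conc = disc = 0
    for i, j in combinations(range(len(x_intervals)), 2):
        sx = order(x_intervals[i], x_intervals[j])
        sy = order(y_intervals[i], y_intervals[j])
        if sx == 0 or sy == 0:
            continue  # ambiguous pair: the paper would use its expected score
        if sx == sy:
            conc += 1
        else:
            disc += 1
    return (conc - disc) / (conc + disc) if conc + disc else 0.0
```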

  16. Genetics of Obesity Traits: A Bivariate Genome-Wide Association Analysis

    DEFF Research Database (Denmark)

    Wu, Yili; Duan, Haiping; Tian, Xiaocao

    2018-01-01

    Previous genome-wide association studies on anthropometric measurements have identified more than 100 related loci, but only a small portion of the heritability in obesity was explained. Here we present a bivariate twin study to look for the genetic variants associated with body mass index and waist......-hip ratio, and to explore the obesity-related pathways in Northern Han Chinese. A Cholesky decomposition model for 242 monozygotic and 140 dizygotic twin pairs indicated a moderate genetic correlation (r = 0.53, 95% CI: 0.42–0.64) between body mass index and waist-hip ratio. Bivariate genome-wide association.......05. Expression quantitative trait loci analysis identified rs2242044 as a significant cis-eQTL in both the normal adipose-subcutaneous (P = 1.7 × 10−9) and adipose-visceral (P = 4.4 × 10−15) tissue. These findings may provide an important entry point to unravel genetic pleiotropy in obesity traits....

  17. On minimum divergence adaptation of discrete bivariate distributions to given marginals

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2005-01-01

    Roč. 51, č. 1 (2005), s. 313-320 ISSN 0018-9448 R&D Projects: GA ČR GA201/02/1391; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : approximation of contingency tables * bivariate discrete distributions * minimization of divergences Subject RIV: BD - Theory of Information Impact factor: 2.183, year: 2005

  18. Evidence for bivariate linkage of obesity and HDL-C levels in the Framingham Heart Study.

    Science.gov (United States)

    Arya, Rector; Lehman, Donna; Hunt, Kelly J; Schneider, Jennifer; Almasy, Laura; Blangero, John; Stern, Michael P; Duggirala, Ravindranath

    2003-12-31

    Epidemiological studies have indicated that obesity and low high-density lipoprotein (HDL) levels are strong cardiovascular risk factors, and that these traits are inversely correlated. Despite the belief that these traits are correlated in part due to pleiotropy, knowledge on specific genes commonly affecting obesity and dyslipidemia is very limited. To address this issue, we first conducted univariate multipoint linkage analysis for body mass index (BMI) and HDL-C to identify loci influencing variation in these phenotypes using Framingham Heart Study data relating to 1702 subjects distributed across 330 pedigrees. Subsequently, we performed bivariate multipoint linkage analysis to detect common loci influencing covariation between these two traits. We scanned the genome and identified a major locus near marker D6S1009 influencing variation in BMI (LOD = 3.9) using the program SOLAR. We also identified a major locus for HDL-C near marker D2S1334 on chromosome 2 (LOD = 3.5) and another region near marker D6S1009 on chromosome 6 with suggestive evidence for linkage (LOD = 2.7). Since these two phenotypes have been independently mapped to the same region on chromosome 6q, we used the bivariate multipoint linkage approach in SOLAR. The bivariate linkage analysis of BMI and HDL-C implicated the genetic region near marker D6S1009 as harboring a major gene commonly influencing these phenotypes (bivariate LOD = 6.2; LODeq = 5.5) and appears to improve the power to precisely map the correlated traits to a region. We found substantial evidence for a quantitative trait locus with pleiotropic effects, which appears to influence both BMI and HDL-C phenotypes in the Framingham data.

  19. Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2015-01-01

    Roč. 431, č. 1 (2015), s. 124-127 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: Correlations * Power-law cross-correlations * Bivariate Hurst exponent * Spectrum coherence Subject RIV: AH - Economics Impact factor: 1.785, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452314.pdf

  20. Bivariate constant stress degradation model: LED lighting system reliability estimation with two-stage modelling

    NARCIS (Netherlands)

    Sari, J.K.; Newby, M.J.; Brombacher, A.C.; Tang, L.C.

    2009-01-01

    Light-emitting diode (LED) lamp has received great attention as a potential replacement for the more commercially available lighting technology, such as incandescence and fluorescence lamps. LED which is the main component of LED lamp has a very long lifetime. This means that no or very few failures

  1. A new spatial multiple discrete-continuous modeling approach to land use change analysis.

    Science.gov (United States)

    2013-09-01

    This report formulates a multiple discrete-continuous probit (MDCP) land-use model within a spatially explicit economic structural framework for land-use change decisions. The spatial MDCP model is capable of predicting both the type and intensit...

  2. A COMPARISON OF SOME ROBUST BIVARIATE CONTROL CHARTS FOR INDIVIDUAL OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Moustafa Omar Ahmed Abu - Shawiesh

    2014-06-01

    This paper proposes and considers some bivariate control charts to monitor individual observations from a statistical process control setting. Usual control charts based on the sample mean and variance-covariance estimators are sensitive to outliers. We consider the following robust alternatives to the classical Hotelling's T²: T²MedMAD, T²MCD and T²MVE. A simulation study has been conducted to compare the performance of these control charts. Two real-life data sets are analyzed to illustrate the application of these robust alternatives.
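
    A rough sketch contrasting the classical T² statistic with a robust median/MAD variant. This is a deliberate simplification of the paper's T²MedMAD chart: for brevity it uses a diagonal MAD-based scatter and ignores cross-correlation between the two variables; the data are synthetic.

```python
import numpy as np

def t2_classical(X):
    """Classical Hotelling T^2 for individual observations
    (sample mean and sample covariance)."""
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", d, S_inv, d)

def t2_med_mad(X):
    """Robust T^2-type statistic using the coordinatewise median and a
    MAD-based diagonal scatter (simplified T2MedMAD: no cross-correlation)."""
    med = np.median(X, axis=0)
    mad = 1.4826 * np.median(np.abs(X - med), axis=0)  # consistent for normal data
    z = (X - med) / mad
    return (z ** 2).sum(axis=1)

# Historical sample with a few gross outliers that inflate the classical estimates.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
X[:3] += 10.0
```

    The robust statistic leaves the outliers clearly exposed, while the classical one is partially masked because the outliers inflate the mean and covariance used to standardize them.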

  3. A comparison between multivariate and bivariate analysis used in marketing research

    Directory of Open Access Journals (Sweden)

    Constantin, C.

    2012-01-01

    This paper describes an instrumental research study conducted in order to compare the information given by two multivariate data analysis methods with that given by the usual bivariate analysis. The outcomes of the research reveal that sometimes the multivariate methods use more information from a certain variable, but sometimes they use only the part of the information considered most important for certain associations. For this reason, a researcher should use both categories of data analysis in order to obtain entirely useful information.

  4. A behavioral choice model of the use of car-sharing and ride-sourcing services

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Felipe F.; Lavieri, Patrícia S.; Garikapati, Venu M.; Astroza, Sebastian; Pendyala, Ram M.; Bhat, Chandra R.

    2017-07-26

    There are a number of disruptive mobility services that are increasingly finding their way into the marketplace. Two key examples of such services are car-sharing services and ride-sourcing services. In an effort to better understand the influence of various exogenous socio-economic and demographic variables on the frequency of use of ride-sourcing and car-sharing services, this paper presents a bivariate ordered probit model estimated on a survey data set derived from the 2014-2015 Puget Sound Regional Travel Study. Model estimation results show that users of these services tend to be young, well-educated, higher-income, working individuals residing in higher-density areas. There are significant interaction effects reflecting the influence of children and the built environment on disruptive mobility service usage. The model developed in this paper provides key insights into factors affecting market penetration of these services, and can be integrated in larger travel forecasting model systems to better predict the adoption and use of mobility-on-demand services.
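
    The core computation behind a bivariate ordered probit like the one above, the joint probability that two correlated latent propensities fall between their category thresholds, can be sketched as follows (the thresholds and predictors are illustrative, and ±8 stands in for ±∞ on the standard-normal scale):

```python
import numpy as np
from scipy.stats import multivariate_normal

BIG = 8.0  # effectively +/- infinity on the standard-normal scale

def joint_cell_prob(j, k, cuts1, cuts2, xb1, xb2, rho):
    """P(y1 = j, y2 = k) in a bivariate ordered probit: the probability that
    two correlated standard-normal latent utilities fall between their
    category thresholds. cuts1/cuts2 are threshold vectors padded with
    -BIG/+BIG; xb1/xb2 are linear predictors; rho is the error correlation."""
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    lo1, hi1 = cuts1[j] - xb1, cuts1[j + 1] - xb1
    lo2, hi2 = cuts2[k] - xb2, cuts2[k + 1] - xb2
    # Rectangle probability via inclusion-exclusion over the four corners.
    return (bvn.cdf([hi1, hi2]) - bvn.cdf([lo1, hi2])
            - bvn.cdf([hi1, lo2]) + bvn.cdf([lo1, lo2]))
```

    Maximum-likelihood estimation then maximizes the sum of the log of these cell probabilities over all observations.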

  5. Geovisualization of land use and land cover using bivariate maps and Sankey flow diagrams

    Science.gov (United States)

    Strode, Georgianna; Mesev, Victor; Thornton, Benjamin; Jerez, Marjorie; Tricarico, Thomas; McAlear, Tyler

    2018-05-01

    The terms `land use' and `land cover' typically describe categories that convey information about the landscape. Despite the major difference that land use implies some degree of anthropogenic disturbance, the two terms are commonly used interchangeably, especially when anthropogenic disturbance is ambiguous, say managed forestland or abandoned agricultural fields. Cartographically, land use and land cover are also sometimes represented interchangeably within common legends, giving the impression that the landscape is a seamless continuum of land use parcels spatially adjacent to land cover tracts. We believe this is misleading, and feel we need to reiterate the well-established symbiosis of land uses as amalgams of land covers; in other words, land covers are subsets of land use. Our paper addresses this spatially complex, and frequently ambiguous, relationship and posits that bivariate cartographic techniques are an ideal vehicle for representing both land use and land cover simultaneously. In more specific terms, we explore the use of nested symbology to represent land use and land cover graphically, where land covers are circles nested within land-use squares. We also investigate bivariate legends for representing statistical covariance as a means of visualizing the combinations of land use and cover. Lastly, we apply Sankey flow diagrams to further illustrate the complex, multifaceted relationships between land use and land cover. Our work is demonstrated on land use and land cover data for the US state of Florida.
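
    The bivariate-legend idea of combining two classed variables into a single symbol can be sketched as a simple two-way binning; the break values and class counts below are illustrative, not the paper's.

```python
def bivariate_class(x, y, x_breaks, y_breaks):
    """Assign a cell in an n-by-n bivariate colour legend: each variable is
    binned independently against its class breaks, and the resulting pair of
    bin indices picks one colour from a two-dimensional colour ramp."""
    def bin_index(v, breaks):
        return sum(v >= b for b in breaks)  # 0 .. len(breaks)
    return bin_index(x, x_breaks), bin_index(y, y_breaks)
```

    A 3 × 3 legend, for example, uses two break values per variable and a 9-colour ramp indexed by the returned pair.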

  6. Xp21 contiguous gene syndromes: Deletion quantitation with bivariate flow karyotyping allows mapping of patient breakpoints

    Energy Technology Data Exchange (ETDEWEB)

    McCabe, E.R.B.; Towbin, J.A. (Baylor College of Medicine, Houston, TX (United States)); Engh, G. van den; Trask, B.J. (Lawrence Livermore National Lab., CA (United States))

    1992-12-01

    Bivariate flow karyotyping was used to estimate the deletion sizes for a series of patients with Xp21 contiguous gene syndromes. The deletion estimates were used to develop an approximate scale for the genomic map in Xp21. The bivariate flow karyotype results were compared with clinical and molecular genetic information on the extent of the patients' deletions, and these various types of data were consistent. The resulting map spans >15 Mb, from the telomeric interval between DXS41 (99-6) and DXS68 (1-4) to a position centromeric to the ornithine transcarbamylase locus. The deletion sizing was considered to be accurate to ±1 Mb. The map provides information on the relative localization of genes and markers within this region. For example, the map suggests that the adrenal hypoplasia congenita and glycerol kinase genes are physically close to each other, are within 1-2 Mb of the telomeric end of the Duchenne muscular dystrophy (DMD) gene, and are nearer to the DMD locus than to the more distal marker DXS28 (C7). Information of this type is useful in developing genomic strategies for positional cloning in Xp21. These investigations demonstrate that the DNA from patients with Xp21 contiguous gene syndromes can be valuable reagents, not only for ordering loci and markers but also for providing an approximate scale to the map of the Xp21 region surrounding DMD. 44 refs., 3 figs.

  7. Recurrent major depression and right hippocampal volume: A bivariate linkage and association study.

    Science.gov (United States)

    Mathias, Samuel R; Knowles, Emma E M; Kent, Jack W; McKay, D Reese; Curran, Joanne E; de Almeida, Marcio A A; Dyer, Thomas D; Göring, Harald H H; Olvera, Rene L; Duggirala, Ravi; Fox, Peter T; Almasy, Laura; Blangero, John; Glahn, David C

    2016-01-01

    Previous work has shown that the hippocampus is smaller in the brains of individuals suffering from major depressive disorder (MDD) than those of healthy controls. Moreover, right hippocampal volume specifically has been found to predict the probability of subsequent depressive episodes. This study explored the utility of right hippocampal volume as an endophenotype of recurrent MDD (rMDD). We observed a significant genetic correlation between the two traits in a large sample of Mexican American individuals from extended pedigrees (ρg = −0.34, p = 0.013). A bivariate linkage scan revealed a significant pleiotropic quantitative trait locus on chromosome 18p11.31-32 (LOD = 3.61). Bivariate association analysis conducted under the linkage peak revealed a variant (rs574972) within an intron of the gene SMCHD1 meeting the corrected significance level (χ² = 19.0, p = 7.4 × 10⁻⁵). Univariate association analyses of each phenotype separately revealed that the same variant was significant for right hippocampal volume alone, and also revealed a suggestively significant variant (rs12455524) within the gene DLGAP1 for rMDD alone. The results implicate right-hemisphere hippocampal volume as a possible endophenotype of rMDD, and in so doing highlight a potential gene of interest for rMDD risk. © 2015 Wiley Periodicals, Inc.

  8. Which global stock indices trigger stronger contagion risk in the Vietnamese stock market? Evidence using a bivariate analysis

    Directory of Open Access Journals (Sweden)

    Wang Kuan-Min

    2013-01-01

    This paper extends recent investigations into risk contagion effects on stock markets to the Vietnamese stock market. Daily data spanning October 9, 2006 to May 3, 2012 are sourced to empirically validate the contagion effects between the stock market of Vietnam and those of China, Japan, Singapore, and the US. To facilitate the validation of contagion effects with market-related coefficients, this paper constructs a bivariate EGARCH model of dynamic conditional correlation coefficients. Using the correlation contagion test and Dungey et al.'s (2005) contagion test, we find contagion effects between the Vietnamese stock market and the four other stock markets of Japan, Singapore, China, and the US. We also show that the Japanese stock market causes stronger contagion risk in the Vietnamese stock market than the stock markets of China, Singapore, and the US. Finally, we show that the Chinese and US stock markets cause weaker contagion effects in the Vietnamese stock market because of stronger interdependence effects between the former two markets.
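
A minimal sketch of the dynamic-correlation idea behind such contagion tests, using a RiskMetrics-style EWMA filter on synthetic returns rather than the paper's bivariate EGARCH specification (the series, seed, and smoothing constant are illustrative assumptions):

```python
import numpy as np

def ewma_corr(x, y, lam=0.94):
    """EWMA dynamic conditional correlation: a simplified stand-in for the
    bivariate EGARCH-DCC model, updating variances and covariance recursively."""
    vx, vy, cov = np.var(x), np.var(y), np.cov(x, y)[0, 1]
    rho = []
    for xt, yt in zip(x, y):
        vx = lam * vx + (1 - lam) * xt ** 2
        vy = lam * vy + (1 - lam) * yt ** 2
        cov = lam * cov + (1 - lam) * xt * yt
        rho.append(cov / np.sqrt(vx * vy))
    return np.array(rho)

rng = np.random.default_rng(0)
n = 1000
common = rng.normal(size=n)            # shared "regional" factor
x = common + rng.normal(size=n)        # market 1 returns (synthetic)
y = common + rng.normal(size=n)        # market 2 returns (synthetic)
rho = ewma_corr(x, y)
print(rho[-1])  # fluctuates around the true correlation of 0.5
```

A contagion test would then ask whether this time-varying correlation rises significantly during a crisis window relative to a tranquil one.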

  9. Bivariate quadratic method in quantifying the differential capacitance and energy capacity of supercapacitors under high current operation

    Science.gov (United States)

    Goh, Chin-Teng; Cruden, Andrew

    2014-11-01

    Capacitance and resistance are the fundamental electrical parameters used to evaluate the electrical characteristics of a supercapacitor, namely the dynamic voltage response, energy capacity, state of charge, and health condition. The constant capacitance method of British Standards EN62391 and EN62576 can be improved upon with a differential capacitance that more accurately describes the dynamic voltage response of supercapacitors. This paper presents a novel bivariate quadratic method to model the dynamic voltage response of supercapacitors under high-current charge-discharge cycling, and to enable the derivation of the differential capacitance and energy capacity directly from terminal measurements, i.e. voltage and current, rather than from multiple pulsed-current or excitation-signal tests across different bias levels. The estimates are in close agreement with experimental measurements, within a relative error of 0.2% at various high current levels (25-200 A), more accurate than the constant capacitance method (4-7%). The archival value of this paper is the introduction of an improved quantification method for the electrical characteristics of supercapacitors, and the disclosure of distinct properties of supercapacitors: the nonlinear capacitance-voltage characteristic, the variation of capacitance between charging and discharging, and the distribution of energy capacity across the operating voltage window.
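
The bivariate quadratic surface at the heart of such a method can be illustrated with an ordinary least-squares fit; the charge/current variables and coefficients below are synthetic, not the paper's measured supercapacitor data:

```python
import numpy as np

# Fit v(q, i) = a0 + a1*q + a2*i + a3*q^2 + a4*q*i + a5*i^2 by least squares.
rng = np.random.default_rng(0)
q = rng.uniform(0, 1, 300)   # normalized charge (synthetic)
i = rng.uniform(0, 1, 300)   # normalized current (synthetic)
v = 1.0 + 0.5 * q - 0.2 * i + 0.3 * q**2 + 0.1 * q * i - 0.05 * i**2

A = np.column_stack([np.ones_like(q), q, i, q**2, q * i, i**2])
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
print(np.round(coef, 3))  # recovers the true coefficients
```

The differential capacitance then follows from the fitted surface analytically, as the derivative of charge with respect to voltage along a charge-discharge trajectory.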

  10. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2014-01-01

    This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. We use the method to solve several nonlinear evolution equations, namely the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables present the order of accuracy of the method, convergence graphs verify its convergence, and error graphs show the excellent agreement between the results of this study and the known results from the literature.
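
The Chebyshev spectral collocation component can be illustrated in one dimension with the standard Gauss-Lobatto differentiation matrix (Trefethen's construction); the grid size below is arbitrary:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid on N+1 Gauss-Lobatto points."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal by row sums
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(err)  # spectral accuracy: error near machine precision
```

In the paper's bivariate setting, matrices of this kind act on both the space and time directions of the Lagrange-interpolated solution at each quasilinearisation step.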

  11. Obtaining DDF Curves of Extreme Rainfall Data Using Bivariate Copula and Frequency Analysis

    DEFF Research Database (Denmark)

    Sadri, Sara; Madsen, Henrik; Mikkelsen, Peter Steen

    2009-01-01

    … with duration for a given return period and name them DDF (depth-duration-frequency) curves. The copula approach does not assume the rainfall variables are independent or jointly normally distributed. Rainfall series are extracted in three ways: (1) by maximum mean intensity; (2) by depth and duration … situated near Copenhagen in Denmark. For rainfall extracted using method 2, the marginal distribution of depth was found to fit the Generalized Pareto distribution while duration was found to fit the Gamma distribution, using the method of L-moments. The volume was fit with a generalized Pareto distribution and the duration was fit with a Pearson type III distribution for rainfall extracted using method 3. The Clayton copula was found to be appropriate for bivariate analysis of rainfall depth and duration for both methods 2 and 3. DDF curves derived using the Clayton copula for depth and duration …
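
The Clayton copula selected here can be sketched with a standard conditional-inversion sampler; for Clayton, Kendall's τ relates to the parameter by τ = θ/(θ+2), which the synthetic sample below reproduces (the θ value and sample size are arbitrary):

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_sample(theta, n, rng):
    """Draw (u, v) from a Clayton copula by conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)  # uniform draw for the conditional quantile
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

rng = np.random.default_rng(42)
u, v = clayton_sample(theta=2.0, n=20000, rng=rng)
tau, _ = kendalltau(u, v)
print(tau)  # close to theta/(theta+2) = 0.5
```

Depth and duration marginals (generalized Pareto, Gamma, Pearson III) would then be attached to u and v via their inverse CDFs to build the joint model behind the DDF curves.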

  12. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    Science.gov (United States)

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. We use the method to solve several nonlinear evolution equations, namely the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables present the order of accuracy of the method, convergence graphs verify its convergence, and error graphs show the excellent agreement between the results of this study and the known results from the literature.

  13. REGRES: A FORTRAN-77 program to calculate nonparametric and "structural" parametric solutions to bivariate regression equations

    Science.gov (United States)

    Rock, N. M. S.; Duffy, T. R.

    REGRES allows a range of regression equations to be calculated for paired sets of data values in which both variables are subject to error (i.e. neither is the "independent" variable). Nonparametric regressions, based on medians of all possible pairwise slopes and intercepts, are treated in detail. Estimated slopes and intercepts are output, along with confidence limits, Spearman and Kendall rank correlation coefficients. Outliers can be rejected with user-determined stringency. Parametric regressions can be calculated for any value of λ (the ratio of the variances of the random errors for y and x), including: (1) major axis (λ = 1); (2) reduced major axis (λ = variance of y/variance of x); (3) Y on X (λ = infinity); or (4) X on Y (λ = 0) solutions. Pearson linear correlation coefficients also are output. REGRES provides an alternative to conventional isochron assessment techniques where bivariate normal errors cannot be assumed, or weighting methods are inappropriate.
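
REGRES's nonparametric option, based on medians of all pairwise slopes, is essentially the Theil-Sen estimator; a minimal sketch on synthetic data with one gross outlier:

```python
import numpy as np

def theil_sen(x, y):
    """Nonparametric regression: slope is the median of all pairwise slopes,
    intercept the median of the residual offsets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

x = np.arange(10.0)
y = 2.0 * x + 1.0
y[7] = 100.0                 # one gross outlier
s, b = theil_sen(x, y)
print(s, b)  # slope ≈ 2, intercept ≈ 1 despite the outlier
```

Unlike least squares, the median-based fit is insensitive to a small fraction of outliers, which is why REGRES pairs it with rank correlation coefficients.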

  14. The Determinants of University Dropouts: A Bivariate Probability Model with Sample Selection.

    Science.gov (United States)

    Montmarquette, Claude; Mahseredjian, Sophie; Houle, Rachel

    2001-01-01

    Studies determinants of university dropouts, using a longitudinal data set on student enrollments at the University of Montreal. Variables explaining persistence and dropouts are related to a nontraditional class-size effect in first-year required courses and to type of university program. Strong academic performance influences student…

  15. When Should Nintendo Launch its Wii? Insights From a Bivariate Successive Generation Model

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); C. Hernández-Mireles (Carlos)

    2006-01-01

    November 2006 most likely marks the launch of Sony’s PS3, the successor to PS2. Later, Nintendo is expected to launch the Wii, the successor to the GameCube. We answer the question in the title by analyzing the diffusion of the earlier generations of these consoles, and by using a new

  16. Bivariate tensor product (p, q)-analogue of Kantorovich-type Bernstein-Stancu-Schurer operators

    Directory of Open Access Journals (Sweden)

    Qing-Bo Cai

    2017-11-01

    In this paper, we construct a bivariate tensor product generalization of Kantorovich-type Bernstein-Stancu-Schurer operators based on the concept of (p, q)-integers. We obtain moments and central moments of these operators, give the rate of convergence by using the complete modulus of continuity for the bivariate case and estimate a convergence theorem for the Lipschitz continuous functions. We also give some graphs and numerical examples to illustrate the convergence properties of these operators to certain functions.
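
The (p, q)-integers underlying these operators are defined by [n]_{p,q} = (pⁿ − qⁿ)/(p − q); a small helper, with the limiting case p = q handled separately (function name is illustrative):

```python
def pq_integer(n, p, q):
    """(p, q)-integer [n]_{p,q} = (p**n - q**n)/(p - q); reduces to the
    ordinary q-integer 1 + q + ... + q**(n-1) when p = 1, and to the
    limit n*p**(n-1) when p = q."""
    if p == q:
        return n * p ** (n - 1)
    return (p ** n - q ** n) / (p - q)

print(pq_integer(4, 1.0, 0.5))  # q-integer [4]_0.5 = 1 + 0.5 + 0.25 + 0.125 = 1.875
```

These quantities replace the ordinary binomial factors in the Bernstein basis, which is what gives the operators their (p, q)-dependence.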

  17. Improving risk estimates of runoff producing areas: formulating variable source areas as a bivariate process.

    Science.gov (United States)

    Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd

    2014-05-01

    Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    Science.gov (United States)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
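
The underlying SCS-CN runoff equation, Q = (P − Ia)²/(P − Ia + S) with S = 1000/CN − 10 (inch units) and Ia = 0.2S, can be sketched directly; the bivariate modification described above would evaluate it jointly over the rainfall distribution and a base-flow-dependent curve number:

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS curve-number storm runoff (inches). S is the potential maximum
    retention and Ia the initial abstraction; no runoff until P exceeds Ia."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_runoff(P=4.0, CN=80))  # ≈ 2.04 inches for a 4-inch storm at CN 80
```

Treating runoff as bivariate amounts to integrating this relation over the joint frequency of P and the soil-moisture state that sets CN, rather than pinning one return period of Q to the same return period of P.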

  19. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    Science.gov (United States)

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals and might therefore affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variation in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  20. GIS-based bivariate statistical techniques for groundwater potential analysis (an example of Iran)

    Science.gov (United States)

    Haghizadeh, Ali; Moghaddam, Davoud Davoudi; Pourghasemi, Hamid Reza

    2017-12-01

    Groundwater potential analysis provides a better comprehension of the hydrological settings of different regions. This study demonstrates the capability of two GIS-based data-driven bivariate techniques, namely the statistical index (SI) and Dempster-Shafer theory (DST), to analyze groundwater potential in the Broujerd region of Iran. The research was done using 11 groundwater conditioning factors and 496 spring positions. Based on the groundwater potential maps (GPMs) of the SI and DST methods, 24.22% and 23.74% of the study area is covered by the poor zone of groundwater potential, and 43.93% and 36.3% of the Broujerd region is covered by the good and very good potential zones, respectively. Validation of the outcomes showed that the areas under the curve (AUC) of the SI and DST techniques are 81.23% and 79.41%, respectively, indicating that the SI method performs slightly better than the DST technique. SI and DST are therefore advantageous for analyzing groundwater capacity and for scrutinizing the complicated relation between groundwater occurrence and groundwater conditioning factors, permitting investigation of both systemic and stochastic uncertainty. These techniques are very useful for groundwater potential analysis and can be practical tools for water-resource management experts.
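
The statistical index (SI) for one class of a conditioning factor is the log ratio of the spring density in that class to the overall density; a sketch with hypothetical tallies (the cell counts are invented for illustration, only the 496-spring total comes from the record):

```python
import math

def statistical_index(springs_in_class, cells_in_class, total_springs, total_cells):
    """Bivariate statistical index for one factor class: positive values mean
    springs are over-represented in the class, negative means under-represented."""
    class_density = springs_in_class / cells_in_class
    overall_density = total_springs / total_cells
    return math.log(class_density / overall_density)

# hypothetical tallies for one slope class of the study area
si = statistical_index(springs_in_class=50, cells_in_class=1000,
                       total_springs=496, total_cells=20000)
print(si)  # > 0: springs over-represented in this class
```

Summing the SI values of the classes a cell falls into, across all 11 conditioning factors, yields that cell's groundwater potential score.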

  1. IDF relationships using bivariate copula for storm events in Peninsular Malaysia

    Science.gov (United States)

    Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.

    2012-11-01

    Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from the univariate analysis approach, which only considers the intensity of rainfall at fixed time intervals. As several rainfall variables, such as intensity and duration, are correlated with each other, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by utilizing the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian, and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are estimated by means of Kendall's τ. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) to test goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves obtained from the typical empirical IDF formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
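
Fitting a copula by inverting Kendall's τ can be sketched for the FGM family, where θ = 9τ/2 and only |τ| ≤ 2/9 is admissible; the weakly, negatively related intensity/duration data below are synthetic, not the Peninsular Malaysia records:

```python
import numpy as np
from scipy.stats import kendalltau

# Synthetic storm variables with a weak negative association.
rng = np.random.default_rng(1)
duration = rng.gamma(2.0, 3.0, size=500)                      # hours (synthetic)
intensity = -0.05 * duration + rng.normal(0.0, 1.0, size=500)  # synthetic

# Method-of-moments fit of the FGM copula parameter via Kendall's tau.
tau, _ = kendalltau(intensity, duration)
theta = 9.0 * tau / 2.0
print(tau, theta)
```

Stronger negative dependence would push |τ| past 2/9 and rule FGM out, which is one reason the study also considers the AMH, Frank, and Gaussian families.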

  2. Application of bivariate mapping for hydrological classification and analysis of temporal change and scale effects in Switzerland

    NARCIS (Netherlands)

    Speich, Matthias J.R.; Bernhard, Luzi; Teuling, Ryan; Zappa, Massimiliano

    2015-01-01

    Hydrological classification schemes are important tools for assessing the impacts of a changing climate on the hydrology of a region. In this paper, we present bivariate mapping as a simple means of classifying hydrological data for a quantitative and qualitative assessment of temporal change.

  3. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez, E-mail: valter.costa@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety, and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, a set of input variables for an artificial neural network that monitors the greatest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, we selected as the network's input set the variables nuclear power, primary circuit flow rate, control/safety rod position, and pressure difference across the reactor core, because almost all of the monitored variables are related to these variables or to interactions of two or more of them. The nuclear power is related to the increase and decrease of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the temperatures; and the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables, and with a set of 9 input variables resulting from the correlation analysis it was possible to monitor 51 variables. (author)
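
The bivariate-correlation screening step can be sketched as follows; the variable names, data, and threshold are illustrative assumptions, not the IEA-R1 tag names:

```python
import numpy as np

# Keep candidate inputs whose bivariate (Pearson) correlation with a key
# monitored variable exceeds a threshold.
rng = np.random.default_rng(0)
power = rng.normal(size=200)                              # reference channel
core_temp = 0.8 * power + 0.2 * rng.normal(size=200)      # strongly related
flow = rng.normal(size=200)                               # unrelated channel

candidates = {"core_temp": core_temp, "flow": flow}
selected = [name for name, series in candidates.items()
            if abs(np.corrcoef(power, series)[0, 1]) > 0.5]
print(selected)  # ['core_temp']
```

Canonical correlation extends the same idea from pairs of variables to pairs of variable sets, which is how the 9-variable input set can stand in for 51 monitored channels.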

  4. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Science.gov (United States)

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
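
The color-deconvolution step can be sketched in optical-density space (Ruifrok-style unmixing); the stain vectors and pixel value below are illustrative, not the vectors measured in the paper:

```python
import numpy as np

# Unmix a two-stain RGB pixel into per-stain concentrations. Absorbance is
# additive in optical density (OD = -log10 of transmittance), so unmixing is
# a linear solve against the stain matrix.
hema = np.array([0.65, 0.70, 0.29])   # illustrative hematoxylin OD vector
dab = np.array([0.27, 0.57, 0.78])    # illustrative DAB OD vector
residual = np.cross(hema, dab)        # third axis orthogonal to both stains
M = np.stack([hema, dab, residual / np.linalg.norm(residual)])  # stain matrix

rgb = np.array([[200, 120, 150]], dtype=float) / 255.0  # one pixel
od = -np.log10(np.clip(rgb, 1e-6, 1.0))                 # optical density
concentrations = od @ np.linalg.inv(M)                  # per-stain amounts
print(concentrations)
```

Re-staining then maps the recovered concentrations through a new, perceptually optimized bivariate color map instead of the original blue/brown hues.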

  5. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    Directory of Open Access Journals (Sweden)

    Jakob Nikolas Kather

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize the visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  6. Using bivariate signal analysis to characterize the epileptic focus: the benefit of surrogates.

    Science.gov (United States)

    Andrzejak, R G; Chicharro, D; Lehnertz, K; Mormann, F

    2011-04-01

    The disease epilepsy is related to hypersynchronous activity of networks of neurons. While acute epileptic seizures are the most extreme manifestation of this hypersynchronous activity, an elevated level of interdependence of neuronal dynamics is thought to persist also during the seizure-free interval. In multichannel recordings from brain areas involved in the epileptic process, this interdependence can be reflected in an increased linear cross correlation but also in signal properties of higher order. Bivariate time series analysis comprises a variety of approaches, each with different degrees of sensitivity and specificity for interdependencies reflected in lower- or higher-order properties of pairs of simultaneously recorded signals. Here we investigate which approach is best suited to detect putatively elevated interdependence levels in signals recorded from brain areas involved in the epileptic process. For this purpose, we use the linear cross correlation that is sensitive to lower-order signatures of interdependence, a nonlinear interdependence measure that integrates both lower- and higher-order properties, and a surrogate-corrected nonlinear interdependence measure that aims to specifically characterize higher-order properties. We analyze intracranial electroencephalographic recordings of the seizure-free interval from 29 patients with an epileptic focus located in the medial temporal lobe. Our results show that all three approaches detect higher levels of interdependence for signals recorded from the brain hemisphere containing the epileptic focus as compared to signals recorded from the opposite hemisphere. For the linear cross correlation, however, these differences are not significant. For the nonlinear interdependence measure, results are significant but only of moderate accuracy with regard to the discriminative power for the focal and nonfocal hemispheres. The highest significance and accuracy is obtained for the surrogate-corrected nonlinear
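
Surrogate correction relies on surrogates that preserve a signal's linear (spectral) properties while destroying higher-order structure; a minimal sketch of a Fourier phase-randomized surrogate on synthetic data:

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier phase-randomized surrogate: keeps the power spectrum (hence
    the linear autocorrelation) but randomizes the phases, destroying
    higher-order temporal structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=X.size)
    phases[0] = 0.0          # keep the mean (DC bin must stay real)
    if x.size % 2 == 0:
        phases[-1] = 0.0     # Nyquist bin must stay real for even lengths
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
s = phase_surrogate(x, rng)
# Same spectrum (linear properties), different waveform:
print(np.allclose(np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(x))))
```

A nonlinear interdependence value computed on the original pair is then compared against the distribution of values from such surrogate pairs; an excess over the surrogates points to genuinely higher-order coupling.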

  7. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety, and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, a set of input variables for an artificial neural network that monitors the greatest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, we selected as the network's input set the variables nuclear power, primary circuit flow rate, control/safety rod position, and pressure difference across the reactor core, because almost all of the monitored variables are related to these variables or to interactions of two or more of them. The nuclear power is related to the increase and decrease of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the temperatures; and the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables, and with a set of 9 input variables resulting from the correlation analysis it was possible to monitor 51 variables. (author)

  8. A non-parametric conditional bivariate reference region with an application to height/weight measurements on normal girls

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2009-01-01

    A conceptually simple two-dimensional conditional reference curve is described. The curve gives a decision basis for determining whether a bivariate response from an individual is "normal" or "abnormal" when taking into account that a third (conditioning) variable may influence the bivariate response. The reference curve is not only characterized analytically but also by geometric properties that are easily communicated to medical doctors - the users of such curves. The reference curve estimator is completely non-parametric, so no distributional assumptions are needed about the two-dimensional response. An example that serves to motivate and illustrate the reference curve is the study of the height/weight distribution of 7-8-year-old Danish school girls born in 1930, 1950, or 1970.

  9. Bivariate-t distribution for transition matrix elements in Breit-Wigner to Gaussian domains of interacting particle systems.

    Science.gov (United States)

    Kota, V K B; Chavda, N D; Sahu, R

    2006-04-01

    Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain, with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends below the Gaussian regime into the BW regime, down to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
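
The bivariate Gaussian kernel of the Gaussian-domain theory is the standard correlated normal density; a sketch that also checks its normalization numerically (the correlation value and grid are arbitrary):

```python
import numpy as np

def bivar_gauss(x, y, rho):
    """Standard bivariate Gaussian density (zero means, unit variances,
    correlation rho) in the initial and final energies."""
    z = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho**2))

# crude grid check that the density integrates to ~1 over +/- 6 sigma
g = np.linspace(-6, 6, 241)
X, Y = np.meshgrid(g, g)
dx = g[1] - g[0]
total = bivar_gauss(X, Y, rho=0.6).sum() * dx * dx
print(total)  # ≈ 1.0
```

The bivariate-t extension replaces this kernel with a heavier-tailed density of the same correlation structure, which is what carries the theory into the Breit-Wigner regime.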

  10. Bivariate genome-wide association meta-analysis of pediatric musculoskeletal traits reveals pleiotropic effects at the SREBF1/TOM1L2 locus

    DEFF Research Database (Denmark)

    Medina-Gomez, Carolina; Kemp, John P; Dimou, Niki L

    2017-01-01

    Bone mineral density is known to be a heritable, polygenic trait whereas genetic variants contributing to lean mass variation remain largely unknown. We estimated the shared SNP heritability and performed a bivariate GWAS meta-analysis of total-body lean mass (TB-LM) and total-body less head bone … as in human muscle tissue. This is the first bivariate GWAS meta-analysis to demonstrate genetic factors with pleiotropic effects on bone mineral density and lean mass. Bone mineral density and lean skeletal mass are heritable traits. Here, Medina-Gomez and colleagues perform bivariate GWAS analyses of total …

  11. Powerful bivariate genome-wide association analyses suggest the SOX6 gene influencing both obesity and osteoporosis phenotypes in males.

    Directory of Open Access Journals (Sweden)

    Yao-Zhong Liu

    2009-08-01

    Current genome-wide association studies (GWAS) are normally implemented in a univariate framework and analyze different phenotypes in isolation. This univariate approach ignores the potential genetic correlation between important disease traits, making it difficult to detect pleiotropic genes, which may exist for obesity and osteoporosis, two common diseases of major public health importance that are closely correlated genetically. To identify such pleiotropic genes and the key mechanistic links between the two diseases, we here performed the first bivariate GWAS of obesity and osteoporosis. We searched for genes underlying co-variation of the obesity phenotype, body mass index (BMI), with the osteoporosis risk phenotype, hip bone mineral density (BMD), scanning approximately 380,000 SNPs in 1,000 unrelated homogeneous Caucasians, including 499 males and 501 females. We identified in the male subjects two SNPs in intron 1 of the SOX6 (SRY-box 6) gene, rs297325 and rs4756846, which were bivariately associated with both BMI and hip BMD, achieving p values of 6.82 × 10⁻⁷ and 1.47 × 10⁻⁶, respectively. The two SNPs ranked at the top in significance for bivariate association with BMI and hip BMD in the male subjects among all the approximately 380,000 SNPs examined genome-wide. The two SNPs were replicated in a Framingham Heart Study (FHS) cohort containing 3,355 Caucasians (1,370 males and 1,985 females) from 975 families. In the FHS male subjects, the two SNPs achieved p values of 0.03 and 0.02, respectively, for bivariate association with BMI and femoral neck BMD. Interestingly, SOX6 was previously found to be essential to both cartilage formation/chondrogenesis and obesity-related insulin resistance, suggesting the gene's dual role in both bone and fat. Our findings, together with the prior biological evidence, suggest the SOX6 gene's importance in the co-regulation of obesity and osteoporosis.

  12. Powerful Bivariate Genome-Wide Association Analyses Suggest the SOX6 Gene Influencing Both Obesity and Osteoporosis Phenotypes in Males

    Science.gov (United States)

    Liu, Yao-Zhong; Pei, Yu-Fang; Liu, Jian-Feng; Yang, Fang; Guo, Yan; Zhang, Lei; Liu, Xiao-Gang; Yan, Han; Wang, Liang; Zhang, Yin-Ping; Levy, Shawn; Recker, Robert R.; Deng, Hong-Wen

    2009-01-01

    Background Current genome-wide association studies (GWAS) are normally implemented in a univariate framework and analyze different phenotypes in isolation. This univariate approach ignores the potential genetic correlation between important disease traits. Hence this approach has difficulty detecting pleiotropic genes, which may exist for obesity and osteoporosis, two common diseases of major public health importance that are closely correlated genetically. Principal Findings To identify such pleiotropic genes and the key mechanistic links between the two diseases, we here performed the first bivariate GWAS of obesity and osteoporosis. We searched for genes underlying co-variation of the obesity phenotype, body mass index (BMI), with the osteoporosis risk phenotype, hip bone mineral density (BMD), scanning ∼380,000 SNPs in 1,000 unrelated homogeneous Caucasians, including 499 males and 501 females. We identified in the male subjects two SNPs in intron 1 of the SOX6 (SRY-box 6) gene, rs297325 and rs4756846, which were bivariately associated with both BMI and hip BMD, achieving p values of 6.82×10−7 and 1.47×10−6, respectively. The two SNPs ranked at the top in significance for bivariate association with BMI and hip BMD in the male subjects among all the ∼380,000 SNPs examined genome-wide. The two SNPs were replicated in a Framingham Heart Study (FHS) cohort containing 3,355 Caucasians (1,370 males and 1,985 females) from 975 families. In the FHS male subjects, the two SNPs achieved p values of 0.03 and 0.02, respectively, for bivariate association with BMI and femoral neck BMD. Interestingly, SOX6 was previously found to be essential to both cartilage formation/chondrogenesis and obesity-related insulin resistance, suggesting the gene's dual role in both bone and fat. Conclusions Our findings, together with the prior biological evidence, suggest the SOX6 gene's importance in co-regulation of obesity and osteoporosis. PMID:19714249

  13. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

    Full Text Available A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit: a flexible form for correlation among alternatives and a tractable closed-form expression; in addition, heterogeneity in alternative variances can be addressed. Paths are compared in pairs, where the advantages of the binary Probit can be fully exploited. The Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution for incorporating the normal distribution in route choice with an analytical expression.
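
The Probit/Logit contrast the abstract exploits can be seen in the binary case, where the logit choice probability is closed-form while the probit requires the normal CDF. A minimal illustrative sketch (not the paper's route-choice model):

```python
import math

def probit_prob(v):
    """Binary probit choice probability: standard normal CDF of the utility difference v."""
    return 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))

def logit_prob(v):
    """Binary logit choice probability: the closed-form logistic CDF."""
    return 1.0 / (1.0 + math.exp(-v))

# Both give 0.5 at zero utility difference; they diverge in the tails
# because the normal distribution has thinner tails than the logistic.
for v in (0.0, 1.0, 3.0):
    print(round(probit_prob(v), 4), round(logit_prob(v), 4))
```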

  14. Bayes factor covariance testing in item response models

    NARCIS (Netherlands)

    Fox, J.P.; Mulder, J.; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  15. Bayes Factor Covariance Testing in Item Response Models

    NARCIS (Netherlands)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  16. The Earnings Impact of Training Duration in a Developing Country. An Ordered Probit Selection Model of Colombia's "Servicio Nacional de Aprendizaje" (SENA).

    Science.gov (United States)

    Jimenez, Emmanuel; Kugler, Bernardo

    1987-01-01

    Estimates the earnings impact of an extensive inservice training program in the developing world, Colombia's Servicio Nacional de Aprendizaje (SENA), through a comparison of nongraduates' and graduates' earnings profiles. (JOW)

  17. Comparison on models for genetic evaluation of non-return rate and success in first insemination of the Danish Holstein cow

    DEFF Research Database (Denmark)

    Sun, C; Su, G

    2010-01-01

    The aim of this study was to compare a linear Gaussian model with logit and probit models for genetic evaluation of non-return rate within 56 d after first insemination (NRR56) and success in first insemination (SFI). The whole dataset used in the analysis contained 471,742 records from...... the EBV of proven bulls, obtained from the whole dataset and from a reduced dataset which only contains the first-crop daughters of sires; 2) the χ2 statistic for expected and observed frequencies in a cross-validation. Heritabilities estimated using the linear, probit and logit models were 0.011, 0.014 and 0....... Model validation showed no difference between the probit and logit models, but both were better than the linear model in stability and predictive ability for genetic evaluation of NRR56 and SFI. However, based on the whole dataset, the correlations between EBV estimated using...
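
Heritabilities from a linear (observed-scale) model and from threshold-type models sit on different scales; a classic way to compare them is the Dempster-Lerner liability transform, h²_liability = h²_observed · p(1−p)/z², where p is the incidence and z the normal density at the threshold. A minimal sketch (the study's exact procedure may differ):

```python
import math

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def inv_norm_cdf(p):
    """Standard normal quantile by bisection (illustrative, not production-grade)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def liability_h2(h2_observed, incidence):
    """Dempster-Lerner transform of observed-scale heritability to the liability scale."""
    t = inv_norm_cdf(1.0 - incidence)  # threshold on the liability scale
    z = normal_pdf(t)                  # normal density at the threshold
    return h2_observed * incidence * (1.0 - incidence) / (z * z)
```

For a trait with 50% incidence the multiplier is π/2, so an observed-scale 0.011 becomes about 0.017 on the liability scale.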

  18. A bivariate space–time downscaler under space and time misalignment

    OpenAIRE

    Berrocal, Veronica J.; Gelfand, Alan E.; Holland, David M.

    2010-01-01

    Ozone and particulate matter, PM2.5, are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully-model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally ...

  19. Migration of particles on heterogeneous bivariate lattices: the universal analytical expressions for the diffusion coefficients

    Czech Academy of Sciences Publication Activity Database

    Tarasenko, Alexander; Boháč, Petr; Jastrabík, Lubomír

    2015-01-01

    Roč. 74, Nov (2015), s. 556-560 ISSN 1386-9477 R&D Projects: GA MŠk LO1409; GA TA ČR TA03010743 Institutional support: RVO:68378271 Keywords : surface diffusion * heterogeneous lattices * lattice-gas models * kinetic Monte Carlo simulation Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.904, year: 2015

  20. On the Construction of Bivariate Exponential Distributions with an Arbitrary Correlation Coefficient

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    coefficient (also negative). Secondly, the class has the property that any linear combination (projection) of the marginal random variables is a phase-type distribution. The latter property is particularly important for the development of hypothesis testing in linear models. Finally, it is easy to simulate...

  1. The Bivariate (Complex) Fibonacci and Lucas Polynomials: An Historical Investigation with the Maple's Help

    Science.gov (United States)

    Alves, Francisco Regis Vieira; Catarino, Paula Maria Machado Cruz

    2016-01-01

    The current research around the Fibonacci and Lucas sequences evidences the scientific vigour of both mathematical models, which continue to inspire and provide numerous specializations and generalizations, especially since the sixties. One of the current lines of research and investigation around the Generalized Lucas Sequence involves its…

  2. Investigating the relationship between costs and outcomes for English mental health providers: a bi-variate multi-level regression analysis.

    Science.gov (United States)

    Moran, Valerie; Jacobs, Rowena

    2018-06-01

    Provider payment systems for mental health care that incentivize cost control and quality improvement have been a policy focus in a number of countries. In England, a new prospective provider payment system is being introduced to mental health that should encourage providers to control costs and improve outcomes. The aim of this research is to investigate the relationship between costs and outcomes to ascertain whether there is a trade-off between controlling costs and improving outcomes. The main data source is the Mental Health Minimum Data Set (MHMDS) for the years 2011/12 and 2012/13. Costs are calculated using NHS reference cost data while outcomes are measured using the Health of the Nation Outcome Scales (HoNOS). We estimate a bivariate multi-level model with costs and outcomes simultaneously. We calculate the correlation and plot the pairwise relationship between residual costs and outcomes at the provider level. After controlling for a range of demographic, need, social, and treatment variables, residual variation in costs and outcomes remains at the provider level. The correlation between residual costs and outcomes is negative, but very small, suggesting that cost-containment efforts by providers should not undermine outcome-improving efforts under the new payment system.
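
The provider-level summary reported above reduces to the correlation between paired residual costs and outcomes. A plain Pearson correlation over residual pairs illustrates the quantity (the data here are hypothetical, not the study's):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical provider-level residuals: costs vs outcomes
residual_costs = [1.0, 0.5, -0.2, -1.1, 0.3]
residual_outcomes = [-0.8, -0.1, 0.4, 0.9, -0.2]
print(round(pearson(residual_costs, residual_outcomes), 3))
```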

  3. BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition

    Directory of Open Access Journals (Sweden)

    Abdullah Makkeh

    2018-04-01

    Full Text Available Makkeh, Theis, and Vicente found that a Cone Programming model is the most robust approach to computing the Bertschinger et al. partial information decomposition (BROJA PID) measure. We developed production-quality, robust software that computes the BROJA PID measure based on the Cone Programming model. In this paper, we prove the important property of strong duality for the Cone Program and prove an equivalence between the Cone Program and the original convex problem. Then, we describe our software in detail, explain how to use it, and perform some experiments comparing it to other estimators. Finally, we show that the software can be extended to compute some quantities of a trivariate PID measure.

  4. Development and Implementation of a Telecommuting Evaluation Framework, and Modeling the Executive Telecommuting Adoption Process

    Science.gov (United States)

    Vora, V. P.; Mahmassani, H. S.

    2002-02-01

    This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data are examined through exploratory analysis and compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate the research.
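
An ordered probit, as used in the adoption models above, maps a linear index and a set of increasing cutpoints to ordered category probabilities. A minimal sketch (illustrative, not the study's calibrated model):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordered_probit_probs(xb, cuts):
    """Category probabilities for an ordered probit with linear index xb and
    increasing cutpoints `cuts`; returns len(cuts) + 1 probabilities summing to 1."""
    cdf = [norm_cdf(c - xb) for c in cuts]
    probs = [cdf[0]]
    probs += [cdf[i] - cdf[i - 1] for i in range(1, len(cuts))]
    probs.append(1.0 - cdf[-1])
    return probs

print([round(p, 4) for p in ordered_probit_probs(0.0, [-1.0, 1.0])])
# → [0.1587, 0.6827, 0.1587]
```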

  5. Mortality as a bivariate function of age and size in indeterminate growers

    DEFF Research Database (Denmark)

    Colchero, Fernando; Schaible, Ralf

    2014-01-01

    Mortality in organisms that grow indefinitely, known as indeterminate growers, is thought to be driven primarily by size. However, a number of ageing mechanisms also act as functions of age. Thus, to explain mortality in these species, both size and age need to be explicitly modelled. Here we...... contribution of age, as a proxy for chronological deterioration, is of typical senescence; while a seemingly senescent population can have underlying age-related negative senescence, which is, however, overcome by negative underlying size effects. We show how inference about these unobserved processes can...

  6. Aquaculture in artificially developed wetlands in urban areas: an application of the bivariate relationship between soil and surface water in landscape ecology.

    Science.gov (United States)

    Paul, Abhijit

    2011-01-01

    Wetlands show a strong bivariate relationship between soil and surface water. Artificially developed wetlands help to build landscape ecology and make built environments sustainable. The bheries, wetlands of eastern Calcutta (India), utilize the city sewage to develop urban aquaculture that supports the local fish industries and opens a new frontier in sustainable environmental planning research.

  7. A comparison of the effect of 5-bromodeoxyuridine substitution on 33258 Hoechst- and DAPI-fluorescence of isolated chromosomes by bivariate flow karyotyping

    NARCIS (Netherlands)

    Buys, C. H.; Mesa, J.; van der Veen, A. Y.; Aten, J. A.

    1986-01-01

    Application of the fluorescent DNA-intercalator propidium iodide for stabilization of the mitotic chromosome structure during isolation of chromosomes from V79 Chinese hamster cells and subsequent staining with the fluorochromes 33258 Hoechst or DAPI allowed bivariate flow karyotyping of isolated

  8. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  9. Quasi-bivariate variational mode decomposition as a tool of scale analysis in wall-bounded turbulence

    Science.gov (United States)

    Wang, Wenkang; Pan, Chong; Wang, Jinjun

    2018-01-01

    The identification and separation of multi-scale coherent structures is a critical task for the study of scale interaction in wall-bounded turbulence. Here, we propose a quasi-bivariate variational mode decomposition (QB-VMD) method to extract structures with various scales from instantaneous two-dimensional (2D) velocity field which has only one primary dimension. This method is developed from the one-dimensional VMD algorithm proposed by Dragomiretskiy and Zosso (IEEE Trans Signal Process 62:531-544, 2014) to cope with a quasi-2D scenario. It poses the feature of length-scale bandwidth constraint along the decomposed dimension, together with the central frequency re-balancing along the non-decomposed dimension. The feasibility of this method is tested on both a synthetic flow field and a turbulent boundary layer at moderate Reynolds number (Re_{τ } = 3458) measured by 2D particle image velocimetry (PIV). Some other popular scale separation tools, including pseudo-bi-dimensional empirical mode decomposition (PB-EMD), bi-dimensional EMD (B-EMD) and proper orthogonal decomposition (POD), are also tested for comparison. Among all these methods, QB-VMD shows advantages in both scale characterization and energy recovery. More importantly, the mode mixing problem, which degrades the performance of EMD-based methods, is avoided or minimized in QB-VMD. Finally, QB-VMD analysis of the wall-parallel plane in the log layer (at y/δ = 0.12) of the studied turbulent boundary layer shows the coexistence of large- or very large-scale motions (LSMs or VLSMs) and inner-scaled structures, which can be fully decomposed in both physical and spectral domains.

  10. Bayesian mixture models for assessment of gene differential behaviour and prediction of pCR through the integration of copy number and gene expression data.

    Directory of Open Access Journals (Sweden)

    Filippo Trentini

    Full Text Available We consider jointly modeling microarray RNA expression and DNA copy number data. We propose Bayesian mixture models that define latent Gaussian probit scores for the DNA and RNA, and integrate between the two platforms via a regression of the RNA probit scores on the DNA probit scores. Such a regression conveniently allows us to include additional sample-specific covariates such as biological conditions and clinical outcomes. The two methods developed aim, respectively, to make inference on the differential behaviour of genes in patients showing different subtypes of breast cancer, and to predict the pathological complete response (pCR) of patients by borrowing strength across the genomic platforms. Posterior inference is carried out via MCMC simulations. We demonstrate the proposed methodology using a published data set consisting of 121 breast cancer patients.

  11. Remarques concernant la probité scientifique

    Directory of Open Access Journals (Sweden)

    Roxana Iordache

    2015-08-01

    Full Text Available It has been observed in recent times that certain young academics experiment with plagiarism of theories already published and generally accepted in scientific circles. Other academics distort the ideas of one work or another in the hope of inflating their own merits. It is with astonishment that we discover, in the Proceedings of the 9th Colloquium on Latin Linguistics (Madrid, 1998, vol. 1), the pages by Ms Mirka Maraldi (University of Bologna) entitled «Concessive ut: parataxis, hypotaxis and correlation» (pages 487-500). On page 493 of the volume mentioned above, Ms M. Maraldi criticizes our study on the Latin concessive ut, while entirely omitting to indicate, in the text and in the notes of that page, as of all the other pages, the title of our study, its place of publication, and the page (or pages) of our supposed errors. "Jordache's analysis", in fact a vague phrase, is criticized several times! Let us note at the same time that Ms M. Maraldi carefully records, on every page, the details of the other articles (place of publication, page, etc.), even though most of them are works of little importance for the subject under discussion.

  12. Remarques concernant la probité scientifique

    Directory of Open Access Journals (Sweden)

    Roxana Iordache

    2000-12-01

    Full Text Available It has been observed in recent times that certain young academics experiment with plagiarism of theories already published and generally accepted in scientific circles. Other academics distort the ideas of one work or another in the hope of inflating their own merits.

  13. Total, Direct, and Indirect Effects in Logit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt; Holm, Anders; Breen, Richard

    It has long been believed that the decomposition of the total effect of one variable on another into direct and indirect effects, while feasible in linear models, is not possible in non-linear probability models such as the logit and probit. In this paper we present a new and simple method...... average partial effects, as defined by Wooldridge (2002). We present the method graphically and illustrate it using the National Educational Longitudinal Study of 1988...

  14. Maximum likelihood estimation of the parameters of a bivariate Gaussian-Weibull distribution from machine stress-rated data

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; James W. Evans

    2016-01-01

    Two important wood properties are stiffness (modulus of elasticity, MOE) and bending strength (modulus of rupture, MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two- or threeparameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of wood...

  15. Reliability Implications in Wood Systems of a Bivariate Gaussian-Weibull Distribution and the Associated Univariate Pseudo-truncated Weibull

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2014-01-01

    Two important wood properties are the modulus of elasticity (MOE) and the modulus of rupture (MOR). In the past, the statistical distribution of the MOE has often been modeled as Gaussian, and that of the MOR as lognormal or as a two- or three-parameter Weibull distribution. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior...

  16. Small Sample Properties of Asymptotically Efficient Estimators of the Parameters of a Bivariate Gaussian–Weibull Distribution

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2012-01-01

    Two important wood properties are stiffness (modulus of elasticity or MOE) and bending strength (modulus of rupture or MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two or three parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of...

  17. Poisson versus threshold models for genetic analysis of clinical mastitis in US Holsteins.

    Science.gov (United States)

    Vazquez, A I; Weigel, K A; Gianola, D; Bates, D M; Perez-Cabal, M A; Rosa, G J M; Chang, Y M

    2009-10-01

    Typically, clinical mastitis is coded as the presence or absence of disease in a given lactation, and records are analyzed with either linear models or binary threshold models. Because the presence of mastitis may include cows with multiple episodes, there is a loss of information when counts are treated as binary responses. Poisson models are appropriate for random variables measured as the number of events, and although these models are used extensively in studying the epidemiology of mastitis, they have rarely been used for studying the genetic aspects of mastitis. Ordinal threshold models are pertinent for ordered categorical responses; although one can hypothesize that the number of clinical mastitis episodes per animal reflects a continuous underlying increase in mastitis susceptibility, these models have rarely been used in genetic analysis of mastitis. The objective of this study was to compare probit, Poisson, and ordinal threshold models for the genetic evaluation of US Holstein sires for clinical mastitis. Mastitis was measured as a binary trait or as the number of mastitis cases. Data from 44,908 first-parity cows recorded in on-farm herd management software were gathered, edited, and processed for the present study. The cows were daughters of 1,861 sires, distributed over 94 herds. Predictive ability was assessed via a 5-fold cross-validation using 2 loss functions: mean squared error of prediction (MSEP) as the end point and a cost difference function. The heritability estimates were 0.061 for mastitis measured as a binary trait in the probit model and 0.085 and 0.132 for the number of mastitis cases in the ordinal threshold and Poisson models, respectively; because of scale differences, only the probit and ordinal threshold models are directly comparable. Among healthy animals, MSEP was smallest for the probit model, and the cost function was smallest for the ordinal threshold model. Among diseased animals, MSEP and the cost function were smallest
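
The 5-fold cross-validation with MSEP as the loss can be sketched minimally (illustrative helper names, not the study's code):

```python
def msep(y, yhat):
    """Mean squared error of prediction."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def kfold_indices(n, k=5):
    """Contiguous k-fold index split (no shuffling), as in a simple cross-validation."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

# Each fold is held out in turn; MSEP is computed on the held-out records.
print(msep([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
print(kfold_indices(11, 5))
```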

  18. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  19. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data

  20. Regional Analysis of Precipitation by Means of Bivariate Distribution Adjusted by Maximum Entropy; Analisis regional de precipitacion con base en una distribucion bivariada ajustada por maxima entropia

    Energy Technology Data Exchange (ETDEWEB)

    Escalante Sandoval, Carlos A.; Dominguez Esquivel, Jose Y. [Universidad Nacional Autonoma de Mexico (Mexico)

    2001-09-01

    The principle of maximum entropy (POME) is used to derive an alternative method of parameter estimation for the bivariate Gumbel distribution. A simple algorithm for this parameter estimation technique is presented. The method is applied to analyze the precipitation in a region of Mexico. Design events are compared with those obtained by the maximum likelihood procedure. According to the results, the proposed technique is a suitable option to be considered when performing frequency analysis of precipitation with small samples.
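
Design events from fitted Gumbel marginals are quantiles at a chosen non-exceedance probability. A minimal sketch with hypothetical location and scale parameters (not values from the study):

```python
import math

def gumbel_quantile(p, loc, scale):
    """Gumbel (EV1) quantile: the design event with non-exceedance probability p."""
    return loc - scale * math.log(-math.log(p))

# 100-year return level corresponds to p = 1 - 1/100 = 0.99;
# loc and scale here are hypothetical precipitation parameters (mm).
print(round(gumbel_quantile(0.99, 50.0, 12.0), 2))
```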

  1. Association of Supply Type with Fecal Contamination of Source Water and Household Stored Drinking Water in Developing Countries: A Bivariate Meta-analysis.

    Science.gov (United States)

    Shields, Katherine F; Bain, Robert E S; Cronk, Ryan; Wright, Jim A; Bartram, Jamie

    2015-12-01

    Access to safe drinking water is essential for health. Monitoring access to drinking water focuses on water supply type at the source, but there is limited evidence on whether quality differences at the source persist in water stored in the household. We assessed the extent of fecal contamination at the source and in household stored water (HSW) and explored the relationship between contamination at each sampling point and water supply type. We performed a bivariate random-effects meta-analysis of 45 studies, identified through a systematic review, that reported either the proportion of samples free of fecal indicator bacteria and/or individual sample bacteria counts for source and HSW, disaggregated by supply type. Water quality deteriorated substantially between source and stored water. The mean percentage of contaminated samples (noncompliance) at the source was 46% (95% CI: 33, 60%), whereas mean noncompliance in HSW was 75% (95% CI: 64, 84%). Water supply type was significantly associated with noncompliance at the source; water (OR = 0.2; 95% CI: 0.1, 0.5) and HSW (OR = 0.3; 95% CI: 0.2, 0.8) from piped supplies had significantly lower odds of contamination compared with non-piped water, potentially due to residual chlorine. Piped water is less likely to be contaminated compared with other water supply types at both the source and in HSW. A focus on upgrading water services to piped supplies may help improve safety, including for those drinking stored water.
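
The reported odds ratios compare contamination odds between supply types. As a reminder of the arithmetic (the proportions below are hypothetical, not the study's):

```python
def odds(p):
    """Odds corresponding to a probability p."""
    return p / (1.0 - p)

def odds_ratio(p_exposed, p_reference):
    """Odds ratio comparing contamination odds in one supply type vs a reference type."""
    return odds(p_exposed) / odds(p_reference)

# Hypothetical noncompliance proportions: piped (0.30) vs non-piped (0.60)
print(round(odds_ratio(0.30, 0.60), 2))
# → 0.29
```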

  2. Modelling the monetary policy reaction function of the Colombian Central Bank

    OpenAIRE

    Otero, Jesus; Ramírez, Manuel

    2008-01-01

    This paper proposes a simple Ordered Probit model to analyse the monetary policy reaction function of the Colombian Central Bank. There is evidence that the reaction function is asymmetric, in the sense that the Bank increases the Bank rate when the gap between observed inflation and the inflation target (lagged once) is positive, but it does not reduce the Bank rate when the gap is negative. This behaviour suggests that the Bank is more interested in fulfilling the announced inflation target...

  3. Diagnostic value of sTREM-1 in bronchoalveolar lavage fluid in ICU patients with bacterial lung infections: a bivariate meta-analysis.

    Science.gov (United States)

    Shi, Jia-Xin; Li, Jia-Shu; Hu, Rong; Li, Chun-Hua; Wen, Yan; Zheng, Hong; Zhang, Feng; Li, Qin

    2013-01-01

    The serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is a useful biomarker in differentiating bacterial infections from others. However, the diagnostic value of sTREM-1 in bronchoalveolar lavage fluid (BALF) in lung infections has not been well established. We performed a meta-analysis to assess the accuracy of sTREM-1 in BALF for diagnosis of bacterial lung infections in intensive care unit (ICU) patients. We searched PUBMED, EMBASE and Web of Knowledge (from January 1966 to October 2012) databases for relevant studies that reported diagnostic accuracy data of BALF sTREM-1 in the diagnosis of bacterial lung infections in ICU patients. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated by a bivariate regression analysis. Measures of accuracy and Q point value (Q*) were calculated using summary receiver operating characteristic (SROC) curve. The potential between-studies heterogeneity was explored by subgroup analysis. Nine studies were included in the present meta-analysis. Overall, the prevalence was 50.6%; the sensitivity was 0.87 (95% confidence interval (CI), 0.72-0.95); the specificity was 0.79 (95% CI, 0.56-0.92); the positive likelihood ratio (PLR) was 4.18 (95% CI, 1.78-9.86); the negative likelihood ratio (NLR) was 0.16 (95% CI, 0.07-0.36), and the diagnostic odds ratio (DOR) was 25.60 (95% CI, 7.28-89.93). The area under the SROC curve was 0.91 (95% CI, 0.88-0.93), with a Q* of 0.83. Subgroup analysis showed that the assay method and cutoff value influenced the diagnostic accuracy of sTREM-1. BALF sTREM-1 is a useful biomarker of bacterial lung infections in ICU patients. Further studies are needed to confirm the optimized cutoff value.
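
The summary measures are linked by simple identities: PLR = sens/(1−spec), NLR = (1−sens)/spec, and DOR = PLR/NLR. Plugging in the pooled sensitivity and specificity approximately reproduces the reported values (exact agreement is not expected, since the study pools each measure jointly in a bivariate model):

```python
def likelihood_ratios(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    computed from sensitivity and specificity."""
    plr = sens / (1.0 - spec)
    nlr = (1.0 - sens) / spec
    return plr, nlr, plr / nlr

plr, nlr, dor = likelihood_ratios(0.87, 0.79)
print(round(plr, 2), round(nlr, 2), round(dor, 1))
# → 4.14 0.16 25.2  (reported pooled values: 4.18, 0.16, 25.60)
```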

  4. Endogenous Women's Autonomy and the Use of Reproductive Health Services: Empirical Evidence from Tajikistan

    OpenAIRE

    Yusuke Kamiya

    2010-01-01

    Though gender equity is widely considered to be a key to improving maternal health in developing countries, little empirical evidence has been presented to support this claim. This paper investigates whether or not and how female autonomy within the household affects women's use of reproductive health care in Tajikistan, where the situation of maternal health and gender equity is worse compared with neighbouring countries. Estimation is performed using bivariate probit models in which woman's...

  5. Taxes and Bribes in Uganda

    OpenAIRE

    Jagger, Pamela; Shively, Gerald

    2014-01-01

    Using data from 433 firms operating along Uganda’s charcoal and timber supply chains we investigate patterns of bribe payment and tax collection between supply chain actors and government officials responsible for collecting taxes and fees. We examine the factors associated with the presence and magnitude of bribe and tax payments using a series of bivariate probit and Tobit regression models. We find empirical support for a number of hypotheses related to payments, highlighting the role of q...

  6. Modelling world gold prices and USD foreign exchange relationship using multivariate GARCH model

    Science.gov (United States)

    Ping, Pung Yean; Ahmad, Maizah Hura Binti

    2014-12-01

World gold price is a popular investment commodity. The series has often been modeled using univariate models. The objective of this paper is to show that there is a co-movement between the gold price and the USD foreign exchange rate. Using the effect of the USD foreign exchange rate on the gold price, a model that can be used to forecast future gold prices is developed. For this purpose, the current paper proposes a multivariate GARCH (bivariate GARCH) model. Using daily prices of both series from 01.01.2000 to 05.05.2014, a causal relation between the two series under study is found and a bivariate GARCH model is produced.

  7. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    Science.gov (United States)

    Ghosh, Indranil

    2011-01-01

Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, ..., x_I for X and y_1, y_2, ..., y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and for Y given values of X, are available. We…

  8. Comparison and validation of shallow landslides susceptibility maps generated by bi-variate and multi-variate linear probabilistic GIS-based techniques. A case study from Ribeira Quente Valley (S. Miguel Island, Azores)

    Science.gov (United States)

    Marques, R.; Amaral, P.; Zêzere, J. L.; Queiroz, G.; Goulart, C.

    2009-04-01

Slope instability research and susceptibility mapping are fundamental components of hazard assessment and are of extreme importance for risk mitigation, land-use management and emergency planning. Landslide susceptibility zonation has been actively pursued during the last two decades and several methodologies are still being improved. Among the methods presented in the literature, indirect quantitative probabilistic methods have been used extensively. In this work different linear probabilistic methods, both bi-variate and multi-variate (Informative Value, Fuzzy Logic, Weights of Evidence and Logistic Regression), were used to compute the spatial probability of landslide occurrence, using the pixel as the mapping unit. The methods are based on linear relationships between landslides and 9 conditioning factors (altimetry, slope angle, exposition, curvature, distance to streams, wetness index, contribution area, lithology and land-use). It was assumed that future landslides will be conditioned by the same factors as past landslides in the study area. The work was developed for Ribeira Quente Valley (S. Miguel Island, Azores), a study area of 9.5 km², mainly composed of volcanic deposits (ash and pumice lapilli) produced by explosive eruptions in Furnas Volcano. These materials, together with the steepness of the slopes (38.9% of the area has slope angles higher than 35°, reaching a maximum of 87.5°), make the area very prone to landslide activity. A total of 1,495 shallow landslides were mapped (at 1:5,000 scale) and included in a GIS database. The total affected area is 401,744 m² (4.5% of the study area). Most slope movements are translational slides frequently evolving into debris-flows. The landslides are elongated, with maximum length generally equivalent to the slope extent, and their width normally does not exceed 25 m. The failure depth rarely exceeds 1.5 m and the volume is usually smaller than 700 m³. For modelling
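Of the bi-variate methods listed, the Informative Value is the simplest to state; a sketch with hypothetical pixel counts (not the Ribeira Quente data), assuming the usual log density-ratio definition:

```python
import math

def informative_value(landslide_px, class_px, total_landslide_px, total_px):
    """Informative Value for one class of a conditioning factor:
    log-ratio of the landslide density inside the class to the density
    over the whole study area. Positive values flag classes more prone
    to sliding than average; per-pixel susceptibility is the sum of the
    IVs of the classes the pixel falls in."""
    class_density = landslide_px / class_px
    prior_density = total_landslide_px / total_px
    return math.log(class_density / prior_density)

# Hypothetical counts: a steep slope-angle class holding 2% of the pixels
# but 8% of the landslide pixels.
iv = informative_value(landslide_px=800, class_px=20_000,
                       total_landslide_px=10_000, total_px=1_000_000)
```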

  9. The effect of patient satisfaction with pharmacist consultation on medication adherence: an instrumental variable approach

    Directory of Open Access Journals (Sweden)

    Gu NY

    2008-12-01

There are limited studies on quantifying the impact of patient satisfaction with pharmacist consultation on patient medication adherence. Objectives: The objective of this study is to evaluate the effect of patient satisfaction with pharmacist consultation services on medication adherence in a large managed care organization. Methods: We analyzed data from a patient satisfaction survey of 6,916 patients who had used pharmacist consultation services in Kaiser Permanente Southern California from 1993 to 1996. We compared treating patient satisfaction as exogenous, in a single-equation probit model, with a bivariate probit model in which patient satisfaction was treated as endogenous. Different sets of instrumental variables were employed, including measures of patients' emotional well-being and patients' propensity to fill their prescriptions at a non-Kaiser Permanente (KP) pharmacy. The Smith-Blundell test was used to test whether patient satisfaction was endogenous. Over-identification tests were used to test the validity of the instrumental variables. The Staiger-Stock weak instrument test was used to evaluate the explanatory power of the instrumental variables. Results: All tests indicated that the instrumental variables method was valid and that the instrumental variables used have significant explanatory power. The single-equation probit model indicated that the effect of patient satisfaction with pharmacist consultation was significant (p<0.01). However, the bivariate probit model revealed that the marginal effect of pharmacist consultation on medication adherence was significantly greater than in the single-equation probit: the effect increased from 7% to 30% (p<0.01) after controlling for endogeneity bias. Conclusion: After appropriate adjustment for endogeneity bias, patients satisfied with their pharmacy services are substantially more likely to adhere to their medication. The results have important policy implications given the increasing focus...

  10. Determining Effects of Genes, Environment, and Gene X Environment Interaction That Are Common to Breast and Ovarian Cancers Via Bivariate Logistic Regression

    National Research Council Canada - National Science Library

    Ramakrishnan, Viswanathan

    2003-01-01

.... A generalized estimating equations (GEE) logistic regression model was used for the modeling. A shared trait is defined for two discrete traits based upon explicit patterns of trait concordance and discordance within twin pairs...

  11. A comparison of dose-response models for death from hematological depression

    International Nuclear Information System (INIS)

    Morris, M.D.; Jones, T.D.

    1987-01-01

Many radiation-induced lethality experiments that have been published for various mammalian species have been compiled into a database suitable for studying interspecific variability of radiosensitivity, dose-rate dependence of sensitivity, dose-response behavior within each experiment, etc. The data compiled were restricted to continuous and nearly continuous exposures to photon radiations having source energies above 100 keV. Also, photon source energy, exposure geometry, and body weight considerations were used to select studies in which the dose to hematopoietic marrow was nearly uniform, i.e., within ±20%. The database covers 13 mammalian test species ranging from mouse to cattle. Some 211 studies were compiled, but only 105 were documented in adequate detail to be useful in the development and evaluation of dose-response models of interest for practical human exposures. Of the 105 studies, 70 were for various rodent species, and 35 were for nonrodent groups ranging from standard laboratory primates (body weight ∼5 kg) to cattle (body weight 375 kg). This paper considers seven different dose-response models, which are tested for validity against those 105 studies: a right-skewed extreme value model, a left-skewed extreme value model, and log-logistic, log-probit, logistic, probit, and Weibull models. In general, the log-transformed models did not improve model performance, and the extreme value models did not seem consistent with the preponderance of the data. Overall, the probit and logistic models seemed preferable to the Weibull model. 30 refs., 8 tabs

  12. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    May 9, 2015 ... Instead, through stepwise inclusion of type traits in the PH model, the .... Great Britain uses a bivariate animal model for all breeds, ... Štípková, 2012) and then applying linear models to the combined datasets with the ..... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...

  13. Modeling animal movements using stochastic differential equations

    Science.gov (United States)

    Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie

    2004-01-01

    We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
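A bivariate SDE of the kind described can be simulated with the Euler-Maruyama scheme; a sketch with an invented mean-reverting drift field (an Ornstein-Uhlenbeck special case), not the vector fields estimated in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama simulation of a bivariate SDE for animal position (x, y):
# the drift pulls the animal toward a home-range centre at the origin and
# the diffusion is isotropic. Parameter values are illustrative.
theta, sigma, dt, steps = 0.5, 1.0, 0.01, 100_000
pos = np.zeros((steps + 1, 2))
for t in range(steps):
    drift = -theta * pos[t]                       # attraction to the centre
    noise = sigma * np.sqrt(dt) * rng.normal(size=2)
    pos[t + 1] = pos[t] + drift * dt + noise

# For this Ornstein-Uhlenbeck special case the stationary variance of each
# coordinate is sigma**2 / (2 * theta) = 1.0, a handy sanity check.
var_x = pos[steps // 2:, 0].var()
```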

  14. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.

  15. Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting

    Science.gov (United States)

    Nanzad, Bolorchimeg

This master's thesis project consists of two parts. Part I compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas the logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, the two methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either method are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality-of-fit correspondence with historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein overlap of multiple cycles is limited. The
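The logit transform at the heart of Part I can be illustrated in a few lines: for exactly logistic cumulative production the transform is linear in time, and curvature under a wrong trial URR is what the screening method exploits. All parameters below are invented:

```python
import numpy as np

# For logistic (Hubbert) growth Q(t) = U / (1 + exp(-k * (t - t0))),
# the transform log(Q / (U - Q)) is exactly linear in t with slope k.
U, k, t0 = 1000.0, 0.08, 1960.0          # invented URR, growth rate, midpoint
t = np.arange(1900, 2011, dtype=float)
Q = U / (1.0 + np.exp(-k * (t - t0)))    # synthetic cumulative production

logit = np.log(Q / (U - Q))              # uses the true U as the trial URR
slope, intercept = np.polyfit(t, logit, 1)
# slope recovers k; with a wrong trial URR the transformed series bends
# away from a straight line, which is how the transform screens URR values.
```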

  16. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  17. Simultaneous use of serum IgG and IgM for risk scoring of suspected early Lyme borreliosis: graphical and bivariate analyses

    DEFF Research Database (Denmark)

    Dessau, Ram B; Ejlertsen, Tove; Hilden, Jørgen

    2010-01-01

The laboratory diagnosis of early disseminated Lyme borreliosis (LB) rests on IgM and IgG antibodies in serum. The purpose of this study was to refine the statistical interpretation of IgM and IgG by combining the diagnostic evidence provided by the two immunoglobulins and exploiting the whole range of the quantitative variation in test values. ELISA assays based on purified flagella antigen were performed on sera from 815 healthy Danish blood donors as negative controls and 117 consecutive patients with confirmed neuroborreliosis (NB). A logistic regression model combining the standardized units of the IgM and IgG ELISA assays was constructed and the resulting disease risks graphically evaluated by receiver operating characteristic and 'predictiveness' curves. The combined model improves the discrimination between NB patients and blood donors. Hence, it is possible to report a predicted...

  18. QR-GARCH-M Model for Risk-Return Tradeoff in U.S. Stock Returns and Business Cycles

    OpenAIRE

    Nyberg, Henri

    2010-01-01

In the empirical finance literature, findings on the risk-return tradeoff in excess stock market returns are ambiguous. In this study, we develop a new QR-GARCH-M model combining a probit model for a binary business cycle indicator and a regime-switching GARCH-in-mean model for excess stock market returns, with the business cycle indicator defining the regime. Estimation results show that there is statistically significant variation in U.S. excess stock returns over the business cycle. Howev...

  19. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative...
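The "fairly simple well-known methods" for δ and τ can be sketched by inverting the two cumulative response probabilities of the 2-AC protocol, assuming the standard Thurstonian parameterization (latent difference distributed N(δ, variance 2), decision thresholds at ±τ); the helper name is ours:

```python
import numpy as np
from scipy.stats import norm

def estimate_2ac(p_a, p_nd):
    """Moment-style estimates of the Thurstonian 2-AC parameters from the
    proportions of "A" and "no difference" answers. The latent difference
    is modelled as N(delta, sd = sqrt(2)) with thresholds at -tau and tau."""
    z1 = norm.ppf(p_a)            # z-score of P(D < -tau)
    z2 = norm.ppf(p_a + p_nd)     # z-score of P(D <  tau)
    tau = np.sqrt(2) * (z2 - z1) / 2
    delta = -np.sqrt(2) * (z1 + z2) / 2
    return delta, tau

# Round trip with made-up true values delta = 1.0, tau = 0.4:
delta, tau = 1.0, 0.4
p_a = norm.cdf((-tau - delta) / np.sqrt(2))
p_nd = norm.cdf((tau - delta) / np.sqrt(2)) - p_a
d_hat, t_hat = estimate_2ac(p_a, p_nd)   # recovers (1.0, 0.4)
```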

  20. Effects of supplementary private health insurance on physician visits in Korea.

    Science.gov (United States)

    Kang, Sungwook; You, Chang Hoon; Kwon, Young Dae; Oh, Eun-Hwan

    2009-12-01

The coverage of social health insurance has remained limited, despite it being compulsory in Korea. Accordingly, Koreans have come to rely upon supplementary private health insurance (PHI) to cover their medical costs. We examined the effects of supplementary PHI on physician visits in Korea. This study used individual data from 11,043 respondents who participated in the Korean Labor and Income Panel Survey in 2001. We estimated a single probit model to identify the relationship between PHI and physician visits, with adjustment for the following covariates: demographic characteristics, socioeconomic status, health status, and health-related behavior. Finally, we estimated a bivariate probit model to examine the true effect of PHI on physician visits, with adjustment for the above covariates plus unobservable covariates that might affect not only physician visits but also the purchase of PHI. We found that about 38% of all respondents had one or more private health plans. Forty-five percent of all respondents visited one or more physicians, and 49% of those who were privately insured had physician visits compared with 42% of the uninsured. The single probit model showed that those with PHI were about 14 percentage points more likely to visit physicians than those without PHI. However, this distinction disappears in the bivariate probit model. This result might be a consequence of the nature of private health plans in Korea: private insurance companies pay a fixed amount directly to their enrollees in case of illness or injury, and the individuals are subsequently responsible for purchasing their own healthcare services. This study demonstrated the potential of Korean PHI to address the problem of moral hazard. These results serve as a reference for policy makers when considering how to finance healthcare services, as well as how to contain healthcare expenditure.

  1. Bivariate extension of the univariate reliability index for evaluating phenotypic stability

    Directory of Open Access Journals (Sweden)

    Suzankelly Cunha Arruda de Abreu

    2004-10-01

The objective of this work was to obtain the theoretical derivation of the bivariate extension of the methods proposed by Annicchiarico (1992) and Annicchiarico et al. (1995) for studying phenotypic stability. Considering assays with genotypes in environments and measurements of two variates, every genotype had its response to each variate (k = 1, 2) standardized against the environment mean as follows: W_ijk = (Y_ijk / Ȳ.jk) × 100, where W_ijk is the standardized value of genotype i in environment j for variate k; Y_ijk is the observed mean of genotype i in environment j for variate k; and Ȳ.jk is the mean of all genotypes in environment j for variate k. From the standardized values, the mean vector and the variance-covariance matrix of each genotype were estimated. The theoretical derivation of the bivariate extension of Annicchiarico's risk index (Ii) was obtained successfully, and a second risk index based on bivariate probabilities (Prb i) was proposed; the two indices showed close agreement in an illustrative example with melon genotypes.

  2. A Jump-Diffusion Model with Stochastic Volatility and Durations

    DEFF Research Database (Denmark)

    Wei, Wei; Pelletier, Denis

    jumps in two ways: as exogenous sampling intervals, and through the interaction with volatility. We adopt a bivariate Ornstein-Ulenbeck process to model intraday volatility and conditional duration. We develop a MCMC algorithm for the inference on irregularly spaced multivariate processes with jumps...

  3. Analysis of the asymmetrical shortest two-server queueing model

    NARCIS (Netherlands)

    J.W. Cohen

    1995-01-01

    textabstractThis study presents the analytic solution for the asymmetrical two-server queueing model with arriving customers joining the shorter queue for the case with Poisson arrivals and negative exponentially distributed service times. The bivariate generating function of the stationary joint

  4. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

Sep 3, 2017 ... Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. ... Data were entered into STATA-12 and analyzed using SPSS-21. ...

  5. Bank Lending Policy, Credit Scoring and Value at Risk

    OpenAIRE

    Jacobson, Tor; Roszbach, Kasper

    1998-01-01

In this paper we apply a bivariate probit model to investigate the implications of bank lending policy. In the first equation we model the bank's decision to grant a loan, in the second the probability of default. We confirm that banks provide loans in a way that is not consistent with default-risk minimization. The lending policy must thus either be inefficient or be the result of some other type of optimizing behavior than expected profit maximization. Value at Risk, being a value weighted ...

  6. A Vehicle for Bivariate Data Analysis

    Science.gov (United States)

    Roscoe, Matt B.

    2016-01-01

    Instead of reserving the study of probability and statistics for special fourth-year high school courses, the Common Core State Standards for Mathematics (CCSSM) takes a "statistics for all" approach. The standards recommend that students in grades 6-8 learn to summarize and describe data distributions, understand probability, draw…

  7. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela; de Carvalho, Miguel

    2016-01-01

    can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean constrained densities on the unit interval. Numerical experiments with the methods illustrate their resilience in a variety of contexts

  8. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  9. Multivariate modelling of endophenotypes associated with the metabolic syndrome in Chinese twins

    DEFF Research Database (Denmark)

    Pang, Z; Zhang, D; Li, S

    2010-01-01

    AIMS/HYPOTHESIS: The common genetic and environmental effects on endophenotypes related to the metabolic syndrome have been investigated using bivariate and multivariate twin models. This paper extends the pairwise analysis approach by introducing independent and common pathway models to Chinese...

  10. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  12. International Sign Predictability of Stock Returns: The Role of the United States

    DEFF Research Database (Denmark)

    Nyberg, Henri; Pönkä, Harri

We study the directional predictability of monthly excess stock market returns in the U.S. and ten other markets using univariate and bivariate binary response models. Our main interest is on the potential benefits of predicting the signs of the returns jointly, focusing on the predictive power from the U.S. to foreign markets. We introduce a new bivariate probit model that allows for such a contemporaneous predictive linkage from one market to the other. Our in-sample and out-of-sample forecasting results indicate superior predictive performance of the new model over the competing models by statistical measures and market timing performance, suggesting gradual diffusion of predictive information from the U.S. to the other markets.

  13. Taxes and Bribes in Uganda

    Science.gov (United States)

    Jagger, Pamela; Shively, Gerald

    2016-01-01

    Using data from 433 firms operating along Uganda’s charcoal and timber supply chains we investigate patterns of bribe payment and tax collection between supply chain actors and government officials responsible for collecting taxes and fees. We examine the factors associated with the presence and magnitude of bribe and tax payments using a series of bivariate probit and Tobit regression models. We find empirical support for a number of hypotheses related to payments, highlighting the role of queuing, capital-at-risk, favouritism, networks, and role in the supply chain. We also find that taxes crowd-in bribery in the charcoal market. PMID:27274568

  15. Identifying the Source of Misfit in Item Response Theory Models.

    Science.gov (United States)

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.

  16. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
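The construction behind the bivariate Rice model can be sketched directly: speeds are moduli of Gaussian, isotropic velocity components, and correlating the component fluctuations across the two sites induces dependence between the speeds. The means and correlation below are illustrative values, not fitted ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wind speed as the modulus of a velocity vector with Gaussian, isotropic
# components. The zonal (u) and meridional (v) fluctuations are correlated
# across the two sites (r = 0.7), which transfers dependence to the speeds.
n, r = 200_000, 0.7
cov = [[1.0, r], [r, 1.0]]
u = rng.multivariate_normal([3.0, 3.0], cov, size=n)   # mean zonal wind 3.0
v = rng.multivariate_normal([0.0, 0.0], cov, size=n)
w1 = np.hypot(u[:, 0], v[:, 0])   # Rice-distributed speed at site 1
w2 = np.hypot(u[:, 1], v[:, 1])   # Rice-distributed speed at site 2

rho_speeds = np.corrcoef(w1, w2)[0, 1]   # positive dependence inherited
```

The speed correlation comes out below the component correlation r, illustrating why the joint speed distribution carries more information than a single correlation coefficient.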

  17. A Smooth Transition Logit Model of the Effects of Deregulation in the Electricity Market

    DEFF Research Database (Denmark)

    Hurn, A.S.; Silvennoinen, Annastiina; Teräsvirta, Timo

We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification, including testing linearity, estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecification tests is carried out using tests derived in a companion paper. The use of the modelling strategy...

  18. A model for predicting Inactivity in the European Banking Sector

    Directory of Open Access Journals (Sweden)

    Themistokles Lazarides

    2015-08-01

Purpose – The paper addresses the issue of inactivity in the European banking sector and tries to detect its causes using econometric models. The banking sector of Europe has been under transformation or restructuring for almost half a century. Design/methodology/approach – Probit models and descriptive statistics were used to create a system that predicts inactivity. The data were collected from Bankscope. Findings – The results of the econometric models show that, of the six groups of indicators, four were found to be statistically important (performance, size, ownership, corporate governance). These findings are consistent with the theory. Research limitations/implications – The limitation is that Bankscope does not provide longitudinal data on ownership or management structure, and there are many missing values before 2007 for some of the financial ratios. Originality/value – The paper's value and innovation is that it offers a systematic approach to finding indicators of inactivity.

  19. Bayesian inference in an item response theory model with a generalized student t link function

    Science.gov (United States)

    Azevedo, Caio L. N.; Migon, Helio S.

    2012-10-01

    In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom (df) play a similar role to the discrimination parameter. However, the behavior of the curves of the GtL is different from those of the two-parameter models and the usual Student t link, since in GtL the curve obtained from different df's can cross the probit curves in more than one latent trait level. The GtL model has similar properties to the generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We examine the sensitivity of the results to the prior choice for the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
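The crossing behaviour described above can be checked numerically. In this illustrative sketch (df = 3 chosen arbitrarily, not from the paper), a Student t link rescaled to unit variance, so that df mimics a change in discrimination, crosses the probit curve at three latent trait levels:

```python
import numpy as np
from scipy.stats import norm, t

theta = np.linspace(-6, 6, 4001)           # latent trait grid
df = 3
# t link rescaled to unit variance: var of t(df) is df/(df-2)
t_curve = t.cdf(theta * np.sqrt(df / (df - 2)), df=df)
probit_curve = norm.cdf(theta)

diff = t_curve - probit_curve
signs = np.sign(diff[np.abs(diff) > 1e-9])   # drop points where the curves touch
crossings = int(np.sum(signs[1:] != signs[:-1]))  # sign changes = crossings
```

The heavier t tails put the t-link curve above the probit curve far in the tails but below it near the centre on one side, producing crossings at the centre and once in each tail.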

  20. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  1. Modeling rainfall-runoff relationship using multivariate GARCH model

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2013-08-01

    The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and ground water fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
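As a rough illustration of the CCC structure (not the authors' rainfall-runoff estimation), one can simulate a bivariate CCC-GARCH(1,1) process with assumed parameters and check that its unconditional moments match the textbook formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100000
omega, alpha, beta = 0.1, 0.1, 0.8     # illustrative GARCH(1,1) parameters per series
rho = 0.6                              # constant conditional correlation
# correlated standard normal shocks via a Cholesky-style construction
z1 = rng.standard_normal(T)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(T)

h1 = h2 = omega / (1 - alpha - beta)   # start at the unconditional variance
eps = np.empty((T, 2))
for t in range(T):
    e1, e2 = np.sqrt(h1) * z1[t], np.sqrt(h2) * z2[t]
    eps[t] = e1, e2
    h1 = omega + alpha * e1**2 + beta * h1   # conditional variance recursions
    h2 = omega + alpha * e2**2 + beta * h2

uncond_var = eps.var(axis=0)            # both near omega/(1-alpha-beta) = 1
uncond_corr = np.corrcoef(eps.T)[0, 1]  # near rho, since the conditional corr is constant
```

The conditional variances wander persistently (alpha + beta = 0.9), yet the unconditional variance and correlation stay pinned at their stationary values.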

  2. Modelling and Forecasting Stock Price Movements with Serially Dependent Determinants

    Directory of Open Access Journals (Sweden)

    Rasika Yatigammana

    2018-05-01

    Full Text Available The direction of price movements is analysed under an ordered probit framework, recognising the importance of accounting for discreteness in price changes. By extending the work of Hausman et al. (1972) and Yang and Parwada (2012), this paper focuses on improving the forecast performance of the model while infusing a more practical perspective by enhancing flexibility. This is achieved by extending the existing framework to generate short term multi period ahead forecasts for better decision making, whilst considering the serial dependence structure. This approach enhances the flexibility and adaptability of the model to future price changes, particularly targeting risk minimisation. Empirical evidence is provided, based on seven stocks listed on the Australian Securities Exchange (ASX). The prediction success varies between 78 and 91 per cent for in-sample and out-of-sample forecasts for both the short term and long term.
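A hedged sketch of an ordered probit for three price-movement categories (down/flat/up), fitted on simulated data with an illustrative predictor and thresholds rather than the paper's ASX specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 3000
x = rng.normal(size=n)                 # a stand-in predictor (e.g. a lagged return)
cuts = np.array([-0.5, 0.5])           # thresholds separating down / flat / up
# latent variable crossed with the thresholds gives categories 0, 1, 2
y = np.searchsorted(cuts, 1.0 * x + rng.normal(size=n))

def neg_loglik(params):
    b, c1, log_gap = params
    c2 = c1 + np.exp(log_gap)          # parameterisation keeps c2 > c1
    xb = b * x
    p = np.stack([norm.cdf(c1 - xb),                     # P(y=0)
                  norm.cdf(c2 - xb) - norm.cdf(c1 - xb), # P(y=1)
                  1 - norm.cdf(c2 - xb)])                # P(y=2)
    return -np.sum(np.log(np.clip(p[y, np.arange(n)], 1e-12, None)))

fit = minimize(neg_loglik, np.array([0.1, -1.0, 0.0]), method="Nelder-Mead")
b_hat, c1_hat, log_gap_hat = fit.x
c2_hat = c1_hat + np.exp(log_gap_hat)  # estimates near (1.0, -0.5, 0.5)
```

Multi-period forecasts of the kind the paper targets would iterate this model forward on the serially dependent predictors.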

  3. A state-dependent model for inflation forecasting

    OpenAIRE

    Andrea Stella; James H. Stock

    2012-01-01

    We develop a parsimonious bivariate model of inflation and unemployment that allows for persistent variation in trend inflation and the NAIRU. The model, which consists of five unobserved components (including the trends) with stochastic volatility, implies a time-varying VAR for changes in the rates of inflation and unemployment. The implied backwards-looking Phillips curve has a time-varying slope that is steeper in the 1970s than in the 1990s. Pseudo out-of-sample forecasting experiments i...

  4. Web based health surveys: Using a Two Step Heckman model to examine their potential for population health analysis.

    Science.gov (United States)

    Morrissey, Karyn; Kinderman, Peter; Pontin, Eleanor; Tai, Sara; Schwannauer, Mathias

    2016-08-01

    In June 2011 the BBC Lab UK carried out a web-based survey on the causes of mental distress. The 'Stress Test' was launched on 'All in the Mind', a BBC Radio 4 programme, and the test's URL was publicised on radio and TV broadcasts and made available via BBC web pages and social media. Given the large amount of data created, over 32,800 participants with corresponding diagnosis, demographic and socioeconomic characteristics, the dataset is potentially an important source of data for population-based research on depression and anxiety. However, as respondents self-selected to participate in the online survey, the survey may comprise a non-random sample: it may be that only individuals who listen to BBC Radio 4 and/or use its website participated in the survey. In this instance, using the Stress Test data for wider population-based research may create sample selection bias. Focusing on the depression component of the Stress Test, this paper presents an easy-to-use method, the Two Step Probit Selection Model, to detect and statistically correct selection bias in the Stress Test. Using a Two Step Probit Selection Model, this paper did not find statistically significant selection on unobserved factors for participants of the Stress Test. That is, survey participants who accessed and completed an online survey are not systematically different from non-participants on the variables of substantive interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
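The two-step probit selection (Heckman-type) procedure can be sketched on simulated data: a first-stage probit for participation, an inverse Mills ratio, and a second-stage regression whose Mills-ratio coefficient tests for selection on unobservables. All variable names and parameter values below are illustrative assumptions, not the Stress Test variables:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)     # drives participation only (exclusion restriction)
x = rng.normal(size=n)     # drives the outcome (e.g. a depression score)
rho = 0.5                  # correlation between selection and outcome errors
u = rng.normal(size=n)
e = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)
took_part = 0.5 + z + u > 0
y = 1.0 + 2.0 * x + e      # outcome, observed only for participants

# Step 1: probit for participation, then the inverse Mills ratio
def neg_loglik(g):
    p = np.clip(norm.cdf(g[0] + g[1] * z), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(took_part, np.log(p), np.log(1 - p)))

g = minimize(neg_loglik, np.zeros(2), method="BFGS").x
mills = norm.pdf(g[0] + g[1] * z) / norm.cdf(g[0] + g[1] * z)

# Step 2: outcome regression on the selected sample with the Mills ratio added;
# its coefficient estimates rho * sigma_e and is the test for selection on unobservables
sel = took_part
A = np.column_stack([np.ones(sel.sum()), x[sel], mills[sel]])
coef, *_ = np.linalg.lstsq(A, y[sel], rcond=None)
```

Here `coef[2]` recovers rho * sigma_e = 0.5; an insignificant Mills-ratio coefficient, as the paper reports for the Stress Test, indicates no selection on unobservables.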

  5. Recognition Memory zROC Slopes for Items with Correct versus Incorrect Source Decisions Discriminate the Dual Process and Unequal Variance Signal Detection Models

    Science.gov (United States)

    Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.

    2014-01-01

    We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…

  6. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  7. Forecasting Macedonian Business Cycle Turning Points Using Qual Var Model

    Directory of Open Access Journals (Sweden)

    Petrovska Magdalena

    2016-09-01

    Full Text Available This paper aims at assessing the usefulness of leading indicators in business cycle research and forecasting. Initially we test the predictive power of the economic sentiment indicator (ESI) within a static probit model as a leading indicator, commonly perceived to be able to provide a reliable summary of current economic conditions. We further proceed by analyzing how well an extended set of indicators performs in forecasting turning points of the Macedonian business cycle by employing the Qual VAR approach of Dueker (2005). We then evaluate the quality of the selected indicators in a pseudo-out-of-sample context. The results show that the use of survey-based indicators as a complement to macroeconomic data works satisfactorily in capturing the business cycle developments in Macedonia.

  8. Physical work environment: testing an expanded model of job satisfaction in a sample of registered nurses.

    Science.gov (United States)

    Djukic, Maja; Kovner, Christine; Budin, Wendy C; Norman, Robert

    2010-01-01

    The impact of personal, organizational, and economic factors on nurses' job satisfaction has been studied extensively, but few studies exist in which the effects of physical work environment--including perceptions of architectural, interior design, and ambient features--on job satisfaction are examined. The purpose of this study was to examine the effect of perceived physical work environment on job satisfaction, adjusting for multiple personal, organizational, and economic determinants of job satisfaction. A cross-sectional, predictive design and a Web-based survey instrument were used to collect data from staff registered nurses in a large metropolitan hospital. The survey included 34 questions about multiple job satisfaction determinants, including 18 Likert-type measures with established good validity (comparative fit index = .97, Tucker-Lewis index = .98, root mean square error of approximation = .06) and reliability (r ≥ .70). A response rate of 48.5% resulted in a sample of 362, with 80% power to detect a medium effect of perceived physical environment on job satisfaction. On average, nurses had negative perceptions of physical work environment (M = 2.9, SD = 2.2). Although physical environment was related positively to job satisfaction (r = .256, p = .01) in bivariate analysis, in ordered probit regression, no effect of physical work environment on job satisfaction was found. In future studies, this relationship should be examined in larger and more representative samples of nurses. Qualitative methods should be used to explore how a negatively perceived physical work environment impacts nurses. Rebuilding of U.S. hospitals, with a planned investment of $200 billion, without considering how the physical environment contributes to nurse work outcomes threatens to exacerbate organizational nurse turnover.

  9. Testing for the Endogenous Nature between Women's Empowerment and Antenatal Health Care Utilization: Evidence from a Cross-Sectional Study in Egypt

    Science.gov (United States)

    Hussein, Mohamed Ali

    2014-01-01

    Women's relative lack of decision-making power and their unequal access to employment, finances, education, basic health care, and other resources are considered to be the root causes of their ill-health and that of their children. The main purpose of this paper is to examine the interactive relation between women's empowerment and the use of maternal health care. Two model specifications are tested. One assumes no correlation between empowerment and antenatal care while the second specification allows for correlation. Both the univariate and the recursive bivariate probit models are tested. The data used in this study is EDHS 2008. Factor Analysis Technique is also used to construct some of the explanatory variables such as the availability and quality of health services indicators. The findings show that women's empowerment and receiving regular antenatal care are simultaneously determined and the recursive bivariate probit is a better approximation to the relationship between them. Women's empowerment has significant and positive impact on receiving regular antenatal care. The availability and quality of health services do significantly increase the likelihood of receiving regular antenatal care. PMID:25140310

  10. Testing for the Endogenous Nature between Women’s Empowerment and Antenatal Health Care Utilization: Evidence from a Cross-Sectional Study in Egypt

    Directory of Open Access Journals (Sweden)

    Hassan H. M. Zaky

    2014-01-01

    Full Text Available Women’s relative lack of decision-making power and their unequal access to employment, finances, education, basic health care, and other resources are considered to be the root causes of their ill-health and that of their children. The main purpose of this paper is to examine the interactive relation between women’s empowerment and the use of maternal health care. Two model specifications are tested. One assumes no correlation between empowerment and antenatal care while the second specification allows for correlation. Both the univariate and the recursive bivariate probit models are tested. The data used in this study is EDHS 2008. Factor Analysis Technique is also used to construct some of the explanatory variables such as the availability and quality of health services indicators. The findings show that women’s empowerment and receiving regular antenatal care are simultaneously determined and the recursive bivariate probit is a better approximation to the relationship between them. Women’s empowerment has significant and positive impact on receiving regular antenatal care. The availability and quality of health services do significantly increase the likelihood of receiving regular antenatal care.
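The recursive bivariate probit likelihood used in this paper (and in record 9 above) can be sketched as follows, with simulated data and illustrative coefficients standing in for the EDHS 2008 estimates. Each observation contributes an orthant probability of the bivariate normal, and the correlated-errors (recursive) model should fit data generated with nonzero error correlation better than two separate probits:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
n = 500
rho = 0.5                                   # error correlation (illustrative)
x1, x2 = rng.normal(size=n), rng.normal(size=n)
u = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
y1 = (0.2 + 0.8 * x1 + u[:, 0] > 0).astype(int)               # "empowerment"
y2 = (-0.3 + 0.6 * x2 + 0.7 * y1 + u[:, 1] > 0).astype(int)   # "regular antenatal care"

def loglik(r):
    # recursive structure: y1 appears on the right-hand side of the y2 equation
    w1 = 0.2 + 0.8 * x1
    w2 = -0.3 + 0.6 * x2 + 0.7 * y1
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1         # sign flips select the observed orthant
    ll = 0.0
    for i in range(n):
        p = multivariate_normal.cdf(
            [q1[i] * w1[i], q2[i] * w2[i]],
            mean=[0, 0],
            cov=[[1, q1[i] * q2[i] * r], [q1[i] * q2[i] * r, 1]])
        ll += np.log(max(p, 1e-12))
    return ll

ll_recursive = loglik(rho)    # correlated-errors (recursive bivariate) model
ll_univariate = loglik(0.0)   # independence, i.e. two separate probits
```

In a full analysis the coefficients and rho would be estimated jointly by maximising this likelihood; the comparison above only illustrates why the correlated specification is preferred when the two outcomes are simultaneously determined.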

  11. Joint models for noise annoyance and willingness to pay for road noise reduction

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bue Bjørner, Thomas

    2006-01-01

    Recent contingent valuation (CV) studies of the willingness to pay (WTP) for road noise reduction have used stated annoyance as an independent variable. We argue that this may be inappropriate due to potential endogeneity bias. Instead, an alternative model is proposed that treats both WTP and annoyance as endogenous variables in a simultaneous equation model, combining a linear regression with an ordered probit with correlated error terms and possibly common parameters. Thus, information on stated annoyance is utilised to estimate WTP with increased efficiency. Application of the model to a dataset from Copenhagen indicates a potential for improving the precision of the estimate of WTP for noise reduction with CV data.

  12. Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.

    Science.gov (United States)

    Wu, Hongqian; Zhang, Ying; Long, Jeffrey D

    2017-03-15

    Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, for which the binomial data are often overdispersed. The standard generalized linear mixed effects model may result in severe underestimation of standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm combining Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease, from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
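A quick simulation illustrating why the beta-binomial is used for overdispersed binomial data (parameters are illustrative, not the Huntington study data): when the success probability itself varies across clusters as Beta(a, b), the variance exceeds the plain binomial benchmark by the factor 1 + (m-1)/(a+b+1):

```python
import numpy as np

rng = np.random.default_rng(5)
m, clusters = 20, 100000        # e.g. 20 binary items per assessment
a, b = 2.0, 6.0                 # beta parameters; mean success prob = a/(a+b) = 0.25
p_i = rng.beta(a, b, size=clusters)   # cluster-specific probabilities
y = rng.binomial(m, p_i)              # beta-binomial counts

p = a / (a + b)
binomial_var = m * p * (1 - p)              # variance if there were no overdispersion
inflation = 1 + (m - 1) / (a + b + 1)       # beta-binomial variance inflation factor
observed_ratio = y.var() / binomial_var     # close to `inflation` (about 3.1 here)
```

Ignoring this inflation is exactly what leads the standard model to understate standard errors; the GEE approach in the paper builds the overdispersion into the working variance instead.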

  13. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use illustrated.
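The Yule-Walker estimator presented as the fast alternative can be sketched for a full (non-subset) bivariate VAR(1): since Gamma(1) = Phi Gamma(0), the coefficient matrix is recovered as Gamma(1) Gamma(0)^{-1}. Simulated data with an illustrative coefficient matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 100000
Phi = np.array([[0.5, 0.1],
                [0.2, 0.4]])            # illustrative stable VAR(1) coefficient matrix
x = np.zeros((T, 2))
e = rng.standard_normal((T, 2))
for t in range(1, T):
    x[t] = Phi @ x[t - 1] + e[t]        # x_t = Phi x_{t-1} + e_t

# sample autocovariances Gamma(0) and Gamma(1)
xc = x - x.mean(axis=0)
Gamma0 = xc.T @ xc / T
Gamma1 = xc[1:].T @ xc[:-1] / (T - 1)   # E[x_t x_{t-1}']

# Yule-Walker: Gamma(1) = Phi Gamma(0)  =>  Phi = Gamma(1) Gamma(0)^{-1}
Phi_hat = Gamma1 @ np.linalg.inv(Gamma0)
```

The recursive Burg-type algorithm of the paper refines this moment-based idea; the subset case additionally zeroes out constrained entries, which plain matrix inversion does not handle.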

  14. [Compatible biomass models of natural spruce (Picea asperata)].

    Science.gov (United States)

    Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu

    2017-10-01

    By using the nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established based on the volume and biomass data of 150 sampling trees of natural spruce (Picea asperata). Two approaches, controlling directly under total aboveground biomass and controlling jointly from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch and foliage), and the total aboveground biomass could be estimated independently or simultaneously in the system. The results showed that the R2 of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, and the maximum value reached 0.99. The prediction performance of the volume equations improved significantly when tree height was included as a predictor, while the improvement was not significant for biomass estimation. For the compatible biomass systems, the one-variable model based on controlling jointly from level to level was better than the model controlling directly under total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fit of the one-variable and bivariate compatible biomass models showed that adding explanatory variables significantly improved the fit for branch and foliage biomass but had little effect on the other components. Besides, there was almost no difference between the two estimation methods.

  15. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
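Steps (i)-(iii) above can be sketched directly for i.i.d. (unclustered) bivariate normal data; the clustered mixed-model machinery of the paper is not reproduced here:

```python
import numpy as np
from scipy.stats import norm, spearmanr

rng = np.random.default_rng(7)
n = 20000
rho = 0.6
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x, y = xy[:, 0], xy[:, 1]

def probit_scores(v):
    # step (i): ranks mapped to the probit scale
    ranks = np.argsort(np.argsort(v)) + 1
    return norm.ppf(ranks / (len(v) + 1))

# step (ii): Pearson correlation of the probit scores estimates rho
r_hat = np.corrcoef(probit_scores(x), probit_scores(y))[0, 1]

# step (iii): Pearson-to-rank (Spearman) relation for bivariate normal data
rho_s_hat = (6 / np.pi) * np.arcsin(r_hat / 2)
rs, _ = spearmanr(x, y)   # direct Spearman estimate for comparison
```

On normal data, `r_hat` recovers the Pearson correlation and `rho_s_hat` agrees with the directly computed Spearman coefficient.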

  16. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth surface, ranging from deserts and grassland to rain forests. Unfortunately, only a few data on SWC are available as its measurement is very time consuming and costly and needs specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion in plant species distribution models (SDMs) as a predictor. The aims of this study were, first, to test a new methodology allowing the scarcity of SWC measurements to be overcome and, second, to model and spatially project SWC in order to improve plant SDMs with the inclusion of the SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF=0 to pF=4.2). The different pF represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted-R2 > 0.5 for each SWC at different pF. The second step consisted in running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with...

  17. Two-vehicle injury severity models based on integration of pavement management and traffic engineering factors.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Yan, Xuedong; Zaretzki, Russell L; Richards, Stephen

    2013-01-01

    The severity of traffic-related injuries has been studied by many researchers in recent decades. However, the evaluation of many factors is still in dispute and, until this point, few studies have taken into account pavement management factors as points of interest. The objective of this article is to evaluate the combined influences of pavement management factors and traditional traffic engineering factors on the injury severity of 2-vehicle crashes. This study examines 2-vehicle rear-end, sideswipe, and angle collisions that occurred on Tennessee state routes from 2004 to 2008. Both the traditional ordered probit (OP) model and the Bayesian ordered probit (BOP) model with a weak informative prior were fitted for each collision type. The performances of these models were evaluated based on the parameter estimates and deviances. The results indicated that pavement management factors played identical roles in all 3 collision types. Pavement serviceability produces significant positive effects on the severity of injuries. The pavement distress index (PDI), rutting depth (RD), and rutting depth difference between right and left wheels (RD_df) were not significant in any of these 3 collision types. The effects of traffic engineering factors varied across collision types, except that a few were consistently significant in all 3 collision types, such as annual average daily traffic (AADT), rural-urban location, speed limit, peak hour, and light condition. The findings of this study indicated that improved pavement quality does not necessarily lessen the severity of injuries when a 2-vehicle crash occurs. The effects of traffic engineering factors are not universal but vary by the type of crash. The study also found that the BOP model with a weak informative prior can be used as an alternative but was not superior to the traditional OP model in terms of overall performance.

  18. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and computational feasibility despite their three-dimensionality.

  19. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs, (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2) ) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2) . © 2015 John Wiley & Sons Ltd/London School of Economics.
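A sketch of the Hosmer-Lemeshow (HL) statistic applied to a probit GLMCC, on data simulated from a correctly specified model. Grouping is by deciles of fitted probability, which is the generic HL construction rather than the specific common grouping method of the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, norm

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
y = (rng.standard_normal(n) < 0.3 + 0.8 * x).astype(float)  # true model is probit

def neg_loglik(b):
    p = np.clip(norm.cdf(b[0] + b[1] * x), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

b = minimize(neg_loglik, np.zeros(2), method="BFGS").x
phat = norm.cdf(b[0] + b[1] * x)

# Hosmer-Lemeshow: 10 groups by deciles of fitted probability
edges = np.quantile(phat, np.linspace(0, 1, 11))
grp = np.clip(np.searchsorted(edges, phat, side="right") - 1, 0, 9)
hl = 0.0
for g in range(10):
    mask = grp == g
    obs, exp, ng = y[mask].sum(), phat[mask].sum(), mask.sum()
    hl += (obs - exp) ** 2 / (exp * (1 - exp / ng))   # (O-E)^2 / (n p(1-p)) per group
p_value = chi2.sf(hl, df=10 - 2)   # conventional g - 2 degrees of freedom
```

Since the fitted link matches the data-generating link here, the statistic should be unremarkable; the paper's point is how such statistics behave when the link is noncanonical or misspecified.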

  20. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products usually have complex structure, with their reliability being evaluated by two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics would be always positive and strictly increasing. In such a case, the gamma process is usually considered as a degradation process because of its independent, non-negative increments. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. An inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
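A small simulation of the gamma-process degradation idea for a single performance characteristic (the bivariate Birnbaum-Saunders approximation itself is not reproduced here). Because increments over dt are independent Gamma(a*dt, s), the state at time T is exactly Gamma(a*T, s), giving a closed-form reliability to check Monte Carlo paths against. All parameter values are illustrative:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(9)
# gamma degradation process: increments over dt are Gamma(shape=a*dt, scale=s)
a, s, dt, T = 1.5, 0.4, 0.1, 10.0
steps = int(T / dt)
paths = rng.gamma(a * dt, s, size=(20000, steps)).cumsum(axis=1)

threshold = 8.0                                  # failure once degradation exceeds this
rel_mc = (paths[:, -1] < threshold).mean()       # Monte Carlo reliability at time T
rel_exact = gamma.cdf(threshold, a=a * T, scale=s)  # X(T) ~ Gamma(a*T, s)
```

The bivariate case in the paper couples two such processes through dependence, which is where the bivariate Birnbaum-Saunders approximation to the joint first-passage distribution comes in.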

  1. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    Research results are presented on three new models with potential applications to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure by competing risks theory in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple failure mode domain of competing risks. The model used to illustrate the use of this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined.

  2. Bayesian Estimation and Selection of Nonlinear Vector Error Correction Models: The Case of the Sugar-Ethanol-Oil Nexus in Brazil

    OpenAIRE

    Kelvin Balcombe; George Rapsomanikis

    2008-01-01

    Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest ...

  3. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to a specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the computing times necessary for different problems. Different types of charged-particle sources are presented together with suitable models describing their physics. Electron guns are covered, as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.

  5. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions; consequently, fertility records are generally scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by herd-year environmental effects, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² ... for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, the endpoints should be regarded as modelling different underlying genetic effects, with linear models more appropriate for CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  6. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    Science.gov (United States)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, several popularly applied and relatively new bivariate Archimedean copulas were employed to derive regional and spatial copula models for appraising drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. Maximum likelihood estimation (MLE) and the curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and in mainland China as a whole. The goodness-of-fit tests under the CFM showed that copula NN19 is best for modelling the drought variables in sub-regions III, IV, V and VI and in mainland China, NN20 in sub-region I, and NN13 in sub-region VII. Bivariate drought probability across mainland China is relatively high, with the highest drought probabilities found mainly in Northwestern and Southwestern China. The results also showed that different sub-regions may face varying drought risks: the risks observed in sub-regions III, VI and VII are significantly greater than in the other sub-regions. A higher probability of droughts of longer duration in a sub-region also corresponds to shorter return periods with greater drought severity. These results imply substantial challenges for water resources management in the different sub-regions, particularly Northwestern and Southwestern China.
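    The run-theory extraction of drought duration, severity, and peak mentioned above can be sketched in a few lines. This is a simplified illustration, not the paper's code; in particular, the severity convention here (cumulative deficit below the threshold) is one of several used in the literature:

```python
def drought_events(spei, threshold=-1.0):
    """Extract drought events from an SPEI series by run theory.

    A drought event is a maximal run of consecutive values below `threshold`.
    Returns one (duration, severity, peak) tuple per event, where severity is
    the cumulative deficit below the threshold and peak is the largest single
    deficit in the run.
    """
    events, run = [], []
    for v in list(spei) + [threshold + 1.0]:   # sentinel closes a final run
        if v < threshold:
            run.append(threshold - v)          # per-step deficit
        elif run:
            events.append((len(run), sum(run), max(run)))
            run = []
    return events

events = drought_events([0.5, -1.2, -1.5, 0.2, -1.1, 0.3], threshold=-1.0)
```

    On the toy series above this yields two events: one of duration 2 (severity 0.7, peak 0.5) and one of duration 1, after which marginal distributions would be fitted to the extracted Dd, Ds, and Dp samples.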

  7. Single toxin dose-response models revisited

    Energy Technology Data Exchange (ETDEWEB)

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States)

    2017-01-01

    The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single-toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including the maximum toxin efficacy and the inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and second inflection points imply a high mortality rate. A probabilistic interpretation and mathematical analysis are provided for each of the four models: Hill, logit, probit, and Weibull. Two general model extensions are introduced: (1) a multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with a binomial dependent variable (the mortality count in each experiment), in contrast to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO{sub 4} toxin. - Highlights: • The paper offers a rigorous study of the sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models are introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
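    Two of the sigmoid dose-response forms compared above, probit and logit, can be written down directly. The sketch below (parameterization on log10 dose is an illustrative convention, not necessarily the paper's) also shows the LC50, the dose at which predicted mortality is 50%:

```python
import math

def probit_mortality(dose, b0, b1):
    """Probit dose-response: mortality = Phi(b0 + b1 * log10(dose)),
    where Phi is the standard normal CDF (via the error function)."""
    z = b0 + b1 * math.log10(dose)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logit_mortality(dose, b0, b1):
    """Logit dose-response: mortality = logistic(b0 + b1 * log10(dose))."""
    z = b0 + b1 * math.log10(dose)
    return 1.0 / (1.0 + math.exp(-z))

def lc50(b0, b1):
    """Dose at 50% mortality: solves b0 + b1 * log10(d) = 0 for d."""
    return 10.0 ** (-b0 / b1)
```

    Both links are symmetric sigmoids in log dose and cross 0.5 at the same LC50; they differ mainly in tail behaviour, which is where the paper's special-point analysis distinguishes the models.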

  8. An operator calculus for surface and volume modeling

    Science.gov (United States)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.

  9. An analytical study of physical models with inherited temporal and spatial memory

    Science.gov (United States)

    Jaradat, Imad; Alquran, Marwan; Al-Khaled, Kamel

    2018-04-01

    Du et al. (Sci. Rep. 3, 3431 (2013)) demonstrated that the fractional derivative order can be physically interpreted as a memory index by fitting test data of memory phenomena. The aim of this work is to study analytically the joint effect of the memory index on the time and space coordinates simultaneously. For this purpose, we introduce a novel bivariate fractional power series expansion accompanied by twofold fractional derivatives of orders α, β ∈ (0,1]. Further, some convergence criteria concerning our expansion are presented, and an analog of the well-known bivariate Taylor's formula in the sense of mixed fractional derivatives is obtained. Finally, to show the functionality and efficiency of this expansion, we employ the corresponding Taylor's series method to obtain closed-form solutions of various physical models with inherited time and space memory.

  10. Bayes Factor Covariance Testing in Item Response Models.

    Science.gov (United States)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
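    The key structural fact used above, that an orthogonal Helmert matrix diagonalizes a compound-symmetry covariance, is easy to verify numerically. A minimal pure-Python sketch (not the authors' code): H Σ Hᵀ is diagonal with one component var·(1+(p−1)ρ) and p−1 components var·(1−ρ):

```python
import math

def helmert(p):
    """Orthonormal Helmert matrix: first row is the normalized mean contrast,
    row k contrasts the k-th coordinate against the mean of the first k-1."""
    H = [[1.0 / math.sqrt(p)] * p]
    for k in range(2, p + 1):
        c = 1.0 / math.sqrt(k * (k - 1))
        H.append([c] * (k - 1) + [-(k - 1) * c] + [0.0] * (p - k))
    return H

def compound_symmetry(p, var, rho):
    """Covariance with equal variances and equal correlations rho."""
    return [[var if i == j else var * rho for j in range(p)] for i in range(p)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

p, var, rho = 4, 2.0, 0.3
H = helmert(p)
S = compound_symmetry(p, var, rho)
Ht = [list(r) for r in zip(*H)]
D = matmul(matmul(H, S), Ht)   # diagonal: transformed latent responses decouple
```

    This decoupling is what makes the posterior of the common covariance components tractable in closed form after transforming the latent responses.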

  11. Acceptability of GM foods among Pakistani consumers

    Science.gov (United States)

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-01-01

    ABSTRACT In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, particularly Pakistan, few studies have focused on consumers' acceptability of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptability of GM foods. The data were analyzed employing a bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers, and older consumers were more willing to accept GM foods than younger consumers. Acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of GM foods also reduces the risks among Pakistani consumers. PMID:27494790

  12. Acceptability of GM foods among Pakistani consumers.

    Science.gov (United States)

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-02

    In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, particularly Pakistan, few studies have focused on consumers' acceptability of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptability of GM foods. The data were analyzed employing a bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers, and older consumers were more willing to accept GM foods than younger consumers. Acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of GM foods also reduces the risks among Pakistani consumers.
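    The bivariate probit model used in studies like the one above links two binary outcomes through correlated latent normal errors. A minimal data-generating sketch (covariates omitted, names illustrative) shows the mechanism; the correlation ρ is what the joint model estimates beyond two separate probits:

```python
import math
import random

def simulate_bivariate_probit(n, beta1, beta2, rho, seed=None):
    """Draw (y1, y2) pairs from a bivariate probit data-generating process.

    Latent variables are z_k = beta_k + e_k, with (e1, e2) standard bivariate
    normal with correlation rho (built from a hand-rolled Cholesky factor);
    the observed outcome is y_k = 1 when z_k > 0.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u1, u2 = rng.gauss(0, 1), rng.gauss(0, 1)
        e1 = u1
        e2 = rho * u1 + math.sqrt(1.0 - rho * rho) * u2
        data.append((int(beta1 + e1 > 0), int(beta2 + e2 > 0)))
    return data

pairs = simulate_bivariate_probit(20000, 0.0, 0.0, rho=0.8, seed=7)
```

    With ρ = 0.8 the two binary outcomes are strongly positively associated even though each marginal is a plain probit, which is exactly the dependence a bivariate probit exploits.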

  13. Dissecting the correlation structure of a bivariate phenotype ...

    Indian Academy of Sciences (India)

    Unknown

    We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from a project on the collaborative study on the genetics of alcoholism. [Ghosh S 2005 ...

  14. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  15. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... This study shows the potency of two GIS-based data-driven techniques ... and artificial neural networks for potential groundwater ... mapping: A case study at Mehran region, Iran; Catena 137.

  16. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single-tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) to per-tree aboveground biomass. Because ALS cannot measure DBH directly, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of seeking an explicit mathematical equation for the underlying relationship between DBH and the other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables and solves for DBH under the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with benchmark least-squares linear regression and k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and that it contributes to the reduction of prediction uncertainty.
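    The core assumption above, that a tree sits at the same cumulative probability in each structural variable, can be sketched with empirical CDFs alone. This is a deliberately simplified stand-in (the paper fits copulas and parametric marginals; the toy data below are invented for illustration):

```python
def quantile_map_predict(train_x, train_y, new_x):
    """Predict y for each new x by matching empirical cumulative probabilities:
    the prediction is the train_y quantile at the same empirical CDF level
    that x occupies within train_x."""
    xs, ys = sorted(train_x), sorted(train_y)
    n, m = len(xs), len(ys)
    preds = []
    for x in new_x:
        u = sum(1 for v in xs if v <= x) / n      # empirical CDF of x
        k = max(0, min(m - 1, round(u * m) - 1))  # matching y quantile index
        preds.append(ys[k])
    return preds

# Toy allometry: tree height (m) vs. DBH (cm), monotone by construction.
heights = list(range(1, 11))
dbhs = [2 * h for h in heights]
pred = quantile_map_predict(heights, dbhs, [5, 7])
```

    On monotone data the quantile matching recovers the underlying allometric mapping; fitted copulas generalize this idea to noisy, dependent marginals.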

  17. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas for the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
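    The EWMA chart underlying the study above is a short recursion with time-varying control limits; the run length is simply the first time the statistic leaves them. A minimal sketch (standard textbook formulation, not the paper's simulation code):

```python
import math

def ewma_run_length(series, lam=0.2, L=3.0, target=0.0, sigma=1.0):
    """Return the first (1-based) time the EWMA statistic signals, or None.

    Statistic: z_t = lam*x_t + (1-lam)*z_{t-1}, with z_0 = target.
    Time-varying control limits:
        target +/- L*sigma*sqrt(lam/(2-lam) * (1-(1-lam)**(2t))).
    """
    z = target
    for t, x in enumerate(series, start=1):
        z = lam * x + (1.0 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))
        )
        if abs(z - target) > half_width:
            return t
    return None
```

    The ARL reported in the abstract is the average of such run lengths over many simulated series, with the copula governing how dependent observations are generated.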

  18. An assessment on the use of bivariate, multivariate and soft ...

    Indian Academy of Sciences (India)

    techniques were used in GIS-based collapse susceptibility mapping in an area of the Sivas basin ... were constructed by using statistical and/or soft ...

  19. A Bivariate Extension to Traditional Empirical Orthogonal Function Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Hilger, Klaus Baggesen; Andersen, Ole Baltazar

    2002-01-01

    This paper describes the application of canonical correlations analysis to the joint analysis of global monthly mean values of 1996-1997 sea surface temperature (SST) and height (SSH) data. The SST data are considered as one set and the SSH data as another set of multivariate observations, both w...... as for example an increase in the SST will lead to an increase in the SSH. The analysis clearly shows the build-up of one of the largest El Niño events on record. Also the analysis indicates a phase lag of approximately one month between the SST and SSH fields....

  20. BIVARIATE SYMMETRICAL STATISTICS OF LONG-RANGE DEPENDENT OBSERVATIONS

    NARCIS (Netherlands)

    DEHLING, H; TAQQU, MS

    Let (X_j), j = 1, 2, ..., be a stationary, mean-zero Gaussian sequence with covariances r(k) = E[X_{k+1} X_1] satisfying r(0) = 1 and r(k) = k^{-D} L(k), where D is small and L is slowly varying at infinity. Consider the sequence Y_j = G(X_j), j = 1, 2, ..., where G is any measurable function. We obtain the

  1. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification with a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108,000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  2. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the
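    The E-step at the heart of the mixture fit above assigns each pixel a posterior probability of belonging to the foreground component. A generic sketch (the paper's components are bivariate gamma and t distributions; the exponential densities below are illustrative stand-ins that would be swapped for those):

```python
import math

def posterior_foreground(x, pi_fg, dens_fg, dens_bg):
    """E-step responsibility: posterior probability that observation x came
    from the foreground component of a two-component mixture with
    foreground weight pi_fg."""
    num = pi_fg * dens_fg(x)
    return num / (num + (1.0 - pi_fg) * dens_bg(x))

# Illustrative densities: low-mean exponential background vs. high-mean
# exponential foreground (mirroring the constraint that foreground
# intensity exceeds background intensity on average).
bg = lambda x: 1.0 * math.exp(-1.0 * x)
fg = lambda x: 0.1 * math.exp(-0.1 * x)

p_low = posterior_foreground(1.0, 0.5, fg, bg)
p_high = posterior_foreground(10.0, 0.5, fg, bg)
```

    Brighter pixels receive responsibilities near 1 and are segmented as foreground; the paper additionally smooths these posterior probabilities with a kernel step.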

  3. A Simulation-Based Dynamic Stochastic Route Choice Model for Evacuation

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2012-01-01

    This paper establishes a dynamic stochastic route choice model for evacuation, to simulate the propagation of traffic flow and estimate stochastic route choice under evacuation conditions. The model contains a lane-group-based cell transmission model (CTM), which sets different traffic capacities for outflow links with different turning movements in an evacuation situation; an actual impedance model, which obtains the impedance of each route (in time units) at each time interval; and a stochastic route choice model based on probit-based stochastic user equilibrium. In this model, vehicles loading at each origin at each time interval are assumed to choose an evacuation route given the road network, signal design, and OD demand. As a case study, the proposed model is validated on the network near the Nanjing Olympic Center after the opening ceremony of the 10th National Games of the People's Republic of China. The traffic volumes and clearance times at five exit points of the evacuation zone are calculated by the model and compared with survey data. The results show that the model can appropriately simulate the dynamic route choice and the evolution of traffic flow on the network in an evacuation situation.
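    The cell transmission model named above propagates traffic by moving, between adjacent cells, the minimum of what the upstream cell can send and what the downstream cell can receive. A single synchronous update for a closed corridor (illustrative parameter values, not the paper's lane-group variant):

```python
def ctm_step(cells, v=1.0, w=0.5, q_max=0.25, n_jam=1.0):
    """One cell-transmission-model update on a closed corridor of cells.

    `cells` holds per-cell vehicle counts (densities). Flow from cell i to
    i+1 is min(sending, receiving), where sending = min(v*n_i, q_max) and
    receiving = min(q_max, w*(n_jam - n_{i+1})). With no inflow or outflow
    at the ends, total vehicles are conserved.
    """
    flows = []
    for i in range(len(cells) - 1):
        send = min(v * cells[i], q_max)
        recv = min(q_max, w * (n_jam - cells[i + 1]))
        flows.append(max(0.0, min(send, recv)))
    new = cells[:]
    for i, f in enumerate(flows):
        new[i] -= f
        new[i + 1] += f
    return new
```

    An evacuation variant opens the downstream boundary (free outflow at exit cells) and, as in the paper, assigns different capacities per turning movement; the conservation logic is unchanged.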

  4. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    We introduce a bivariate Markov chain counting process with contagion for modelling the clustered arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applied to modelling the clustered arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics, with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, are derived for this process. Some special cases with explicit results, numerical examples, and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  5. PUBLIC APPROVAL OF PLANT AND ANIMAL BIOTECHNOLOGY IN KOREA: AN ORDERED PROBIT ANALYSIS

    OpenAIRE

    Hallman, William K.; Onyango, Benjamin M.; Govindasamy, Ramu; Jang, Ho-Min; Puduri, Venkata S.

    2004-01-01

    This study analyzes predictors of Korean public acceptance of the use of biotechnology to create genetically modified food products. Results indicate that consumers with above-average knowledge of specific outcomes of genetic modification were more likely than those with inaccurate or no knowledge to approve the use of plant or animal genetic modification for the creation of new food products. Young South Korean consumers (ages 20 to 29 years old) were more likely than older consumers (ages 50...

  6. Clinical implications of alternative TCP models for nonuniform dose distributions

    International Nuclear Information System (INIS)

    Deasy, J. O.

    1995-01-01

    Several tumor control probability (TCP) models for nonuniform dose distributions were compared, including: (a) a logistic/inter-patient-heterogeneity model, (b) a probit/inter-patient-heterogeneity model, (c) a Poisson/radioresistant-strain/identical-patients model, (d) a Poisson/inter-patient-heterogeneity model and (e) a Poisson/intra-tumor- and inter-patient-heterogeneity model. The models were analyzed in terms of the probability of controlling a single tumor voxel (the voxel control probability, or VCP) as a function of voxel volume and dose. Alternatively, the VCP surface can be thought of as the effect of a small cold spot. The Poisson-based models that include inter-patient heterogeneity ((d) and (e)) have VCP surfaces (VCP as a function of dose and volume) with a threshold 'waterfall' shape: below the waterfall (in dose), VCP is nearly zero, and the threshold dose decreases with decreasing voxel volume. However, models (a), (b), and (c) all show a high probability of controlling a voxel (VCP > 50%) at very low dose (e.g., 1 Gy) if the voxel is small (smaller than about 10^-3 of the tumor volume). Model (c) does not have the waterfall shape at low volumes because of its assumption of patient uniformity and its neglect of the clonogens that are more radiosensitive (and more numerous). Models (a) and (b) deviate from the waterfall shape at low volumes due to numerical differences between the functions used and the Poisson function. Hence, the Poisson models which include inter-patient heterogeneity ((d) and (e)) are more sensitive to the effects of small cold spots than the other models considered.
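    The basic Poisson TCP calculation for a nonuniform dose distribution multiplies per-voxel control probabilities, TCP = Π_i exp(−ρ·v_i·exp(−α·d_i)). A minimal sketch (the density ρ and radiosensitivity α below are illustrative values, and this plain Poisson form omits the inter-patient heterogeneity averaging the comparison above turns on):

```python
import math

def poisson_tcp(doses, volumes, rho=1e7, alpha=0.3):
    """Poisson tumour control probability for a nonuniform dose distribution.

    Assuming clonogen density rho (per unit volume) and surviving fraction
    exp(-alpha*d) per clonogen, the expected number of surviving clonogens
    in voxel i is rho*v_i*exp(-alpha*d_i), and
        TCP = prod_i exp(-rho * v_i * exp(-alpha * d_i)).
    """
    log_tcp = 0.0
    for d, v in zip(doses, volumes):
        log_tcp -= rho * v * math.exp(-alpha * d)
    return math.exp(log_tcp)
```

    Even one underdosed voxel sharply depresses the product, which is the cold-spot sensitivity the waterfall-shaped VCP surfaces express.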

  7. Child Labour and Child Schooling in Rural Ethiopia: Nature and Trade-Off

    Science.gov (United States)

    Haile, Getinet; Haile, Beliyou

    2012-01-01

    We examine work participation and schooling for children aged 7-15 using survey data from rural Ethiopia. Bivariate probit and age-adjusted educational attainment equations have been estimated. Male children are found to be more likely to attend school than their female counterparts. "Specialization" in child labour is also found, with…

  8. Children's emotional and behavioral problems and their mothers' labor supply.

    Science.gov (United States)

    Richard, Patrick; Gaskin, Darrell J; Alexandre, Pierre K; Burke, Laura S; Younis, Mustafa

    2014-01-01

    It has been documented that about 20% of children and adolescents suffer from a diagnosable mental or addictive disorder in the United States. The high prevalence of children's emotional and behavioral problems (EBP) might have a negative effect on their mothers' labor market outcomes because children with EBP require additional time for treatment. However, these children may require additional financial resources, which might promote mothers' labor supply. Previous studies have only considered chronic conditions in analyzing the impact of children's health on parental work activities. Moreover, most of these studies have not accounted for endogeneity in children's health. This article estimates the effects of children's EBP on their mothers' labor supply by family structure while accounting for endogeneity in children's health. We used the 1997 and 2002 Child Development Supplements (CDS) to the Panel Study of Income Dynamics (PSID). We used probit and bivariate probit models to estimate mothers' probability of employment, and tobit and instrumental variable tobit models to estimate the effects of children's EBP on their mothers' work hours. Findings show negative effects of children's EBP on their married mothers' employment and on their single mothers' work hours. © The Author(s) 2014.

  9. Children’s Emotional and Behavioral Problems and Their Mothers’ Labor Supply

    Directory of Open Access Journals (Sweden)

    Patrick Richard PhD

    2014-11-01

Full Text Available It has been documented that about 20% of children and adolescents suffer from a diagnosable mental or addictive disorder in the United States. The high prevalence of children’s emotional and behavioral problems (EBP) might have a negative effect on their mothers’ labor market outcomes because children with EBP require additional time for treatment. However, these children may require additional financial resources, which might promote mothers’ labor supply. Previous studies have only considered chronic conditions in analyzing the impact of children’s health on parental work activities. Moreover, most of these studies have not accounted for endogeneity in children’s health. This article estimates the effects of children’s EBP on their mothers’ labor supply by family structure while accounting for endogeneity in children’s health. We used the 1997 and 2002 Child Development Supplements (CDS) to the Panel Study of Income Dynamics (PSID). We used probit and bivariate probit models to estimate mothers’ probability of employment, and tobit and instrumental variable tobit models to estimate the effects of children’s EBP on their mothers’ work hours. Findings show negative effects of children’s EBP on their married mothers’ employment and on their single mothers’ work hours.

  10. An evaluation of substance misuse treatment providers used by an employee assistance program.

    Science.gov (United States)

    Miller, N A

    1992-05-01

    Structural measures of access, continuity, and quality of substance misuse treatment services were compared in 30 fee-for-service (FFS) facilities and nine health maintenance organizations (HMOs). Probit models related effects of the provider system (FFS or HMO) and the system's structural characteristics to 243 employees' access to and outcomes from treatment. Access was decreased in Independent Practice Association (IPA)/network HMOs and in all facilities which did not employ an addictionologist or provide coordinated treatment services. When bivariate correlations were examined, both use of copayments and imposing limits to the levels of treatment covered were negatively related to access, while a facility's provision of ongoing professional development was positively associated with access. These correlations did not remain significant in the multivariate probits. Receiving treatment in a staff model HMO and facing limits to the levels of treatment covered were negatively associated with attaining sufficient progress, while receiving treatment in a facility which provided ongoing professional development was positively related to progress: these effects did not remain significant in multivariate analyses. Implications for employee assistance program (EAP) staff in their role as case managers and for EAP staff and employers in their shared role as purchasers of treatment are discussed.

  11. Substitution of Formal and Informal Home Care Service Use and Nursing Home Service Use: Health Outcomes, Decision-Making Preferences, and Implications for a Public Health Policy.

    Science.gov (United States)

    Chen, Chia-Ching; Yamada, Tetsuji; Nakashima, Taeko; Chiu, I-Ming

    2017-01-01

The purposes of this study are: (1) to empirically identify decision-making preferences of long-term health-care use, especially informal and formal home care (FHC) service use; (2) to evaluate outcomes vs. costs based on substitutability of informal and FHC service use; and (3) to investigate health outcome disparity based on substitutability. The methods of ordinary least squares, a logit model, and a bivariate probit model are used, controlling for socioeconomic, demographic, and physical/mental health factors, to investigate the outcome- and cost-based substitutability of informal and formal health-care use. The data come from the 2013 Japanese Study of Aging and Retirement (JSTAR), which is designed by Keizai-Sangyo Kenkyu-jo, Hitotsubashi University, and the University of Tokyo. The JSTAR is a globally comparable data survey of the elderly. There exists a complementary relationship between informal home care (IHC) and community-based FHC services, with elasticities ranging from 0.18 to 0.22. These are reasonable results, which show that unobservable factors are positively related to IHC and community-based FHC, but negatively related to nursing home (NH) services, based on our bivariate probit model. Regarding health-care outcome efficiency, IHC performs best among the three types of elderly care: IHC, community-based FHC, and NH services. Health improvement among the elderly is concentrated more heavily in IHC services than in community-based FHC and NH services. Policy makers need to address the diversity of health outcomes and efficiency of services by allocating resources across the different types of long-term care. Providing partial or full compensation for elderly care at home is a recommendable and viable option to improve quality of life.

  12. Substitution of Formal and Informal Home Care Service Use and Nursing Home Service Use: Health Outcomes, Decision-Making Preferences, and Implications for a Public Health Policy

    Directory of Open Access Journals (Sweden)

    Chia-Ching Chen

    2017-11-01

Full Text Available Objectives: The purposes of this study are: (1) to empirically identify decision-making preferences of long-term health-care use, especially informal and formal home care (FHC) service use; (2) to evaluate outcomes vs. costs based on substitutability of informal and FHC service use; and (3) to investigate health outcome disparity based on substitutability. Methodology and data: The methods of ordinary least squares, a logit model, and a bivariate probit model are used, controlling for socioeconomic, demographic, and physical/mental health factors, to investigate the outcome- and cost-based substitutability of informal and formal health-care use. The data come from the 2013 Japanese Study of Aging and Retirement (JSTAR), which is designed by Keizai-Sangyo Kenkyu-jo, Hitotsubashi University, and the University of Tokyo. The JSTAR is a globally comparable data survey of the elderly. Results: There exists a complementary relationship between informal home care (IHC) and community-based FHC services, with elasticities ranging from 0.18 to 0.22. These are reasonable results, which show that unobservable factors are positively related to IHC and community-based FHC, but negatively related to nursing home (NH) services, based on our bivariate probit model. Regarding health-care outcome efficiency, IHC performs best among the three types of elderly care: IHC, community-based FHC, and NH services. Health improvement among the elderly is concentrated more heavily in IHC services than in community-based FHC and NH services. Conclusion: Policy makers need to address the diversity of health outcomes and efficiency of services by allocating resources across the different types of long-term care. Providing partial or full compensation for elderly care at home is a recommendable and viable option to improve quality of life.

  13. Does prenatal care benefit maternal health? A study of post-partum maternal care use.

    Science.gov (United States)

    Liu, Tsai-Ching; Chen, Bradley; Chan, Yun-Shan; Chen, Chin-Shyan

    2015-10-01

Most studies of prenatal care focus on its effects on infant health, with much less attention to its effects on maternal health. Using the Longitudinal Health Insurance claims data in Taiwan in a recursive bivariate probit model, this study examines the impact of adequate prenatal care on the probability of post-partum maternal hospitalization during the first 6 months after birth. The results show that adequate prenatal care significantly reduces the probability of post-partum maternal hospitalization among women who have had vaginal delivery by 43.8%. This finding suggests that the benefits of prenatal care may have been underestimated among women with vaginal delivery. Timely and adequate prenatal care not only creates a positive impact on infant health, but also yields significant benefits for post-partum maternal health. However, we do not find similar benefits of prenatal care for women undergoing a cesarean section. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Low Self-Control and the Victim-Offender Overlap: A Gendered Analysis.

    Science.gov (United States)

    Flexon, Jamie L; Meldrum, Ryan C; Piquero, Alex R

    2016-07-01

    The overlap between victimization and offending is well documented. Yet, there have been fewer investigations of the reasons underlying this relationship. One possible, but understudied, explanation lies with Gottfredson and Hirschi's arguments regarding self-control. The current study adds to this line of inquiry by assessing whether low self-control accounts for the victim-offender overlap in a sample of young adults and whether self-control accounts for the observed overlap similarly across gender. Results from a series of bivariate probit regression models indicate that low self-control is positively related to both victimization and offending. However, only among males does low self-control account for a substantive portion of the victim-offender overlap. Limitations of the study and implications and directions for future research are discussed. © The Author(s) 2015.

  15. Effect of Prior Health-Related Employment on the Registered Nurse Workforce Supply.

    Science.gov (United States)

    Yoo, Byung-kwan; Lin, Tzu-chun; Kim, Minchul; Sasaki, Tomoko; Spetz, Joanne

    2016-01-01

Registered nurses (RNs) who held prior health-related employment in occupations other than licensed practical or vocational nursing (LPN/LVN) are reported to have increased rapidly in recent decades. Researchers examined whether prior health-related employment affects RN workforce supply. A cross-sectional bivariate probit model using the 2008 National Sample Survey of Registered Nurses was estimated. Prior health-related employment in relatively lower-wage occupations, such as allied health, clerk, or nursing aide, was positively associated with working as an RN. Prior health-related employment in relatively higher-wage categories, such as a health care manager or LPN/LVN, was positively associated with working full-time as an RN. Policy implications are to promote an expanded career ladder program and a nursing school admission policy that targets non-RN health care workers with an interest in becoming RNs.

  16. Exploring the link between ambulatory care and avoidable hospitalizations at the Veteran Health Administration.

    Science.gov (United States)

    Pracht, Etienne E; Bass, Elizabeth

    2011-01-01

    This paper explores the link between utilization of ambulatory care and the likelihood of rehospitalization for an avoidable reason in veterans served by the Veteran Health Administration (VA). The analysis used administrative data containing healthcare utilization and patient characteristics stored at the national VA data warehouse, the Corporate Franchise Data Center. The study sample consisted of 284 veterans residing in Florida who had been hospitalized at least once for an avoidable reason. A bivariate probit model with instrumental variables was used to estimate the probability of rehospitalization. Veterans who had at least 1 ambulatory care visit per month experienced a significant reduction in the probability of rehospitalization for the same avoidable hospitalization condition. The findings suggest that ambulatory care can serve as an important substitute for more expensive hospitalization for the conditions characterized as avoidable. © 2011 National Association for Healthcare Quality.

  17. Heterogeneous Effects of a Nonlinear Price Schedule for Outpatient Care.

    Science.gov (United States)

    Farbmacher, Helmut; Ihle, Peter; Schubert, Ingrid; Winter, Joachim; Wuppermann, Amelie

    2017-10-01

Nonlinear price schedules generally have heterogeneous effects on health-care demand. We develop and apply a finite mixture bivariate probit model to analyze whether there are heterogeneous reactions to the introduction of a nonlinear price schedule in the German statutory health insurance system. In administrative insurance claims data from the largest German health insurance plan, we find that some individuals strongly react to the new price schedule while a second group of individuals does not react. Post-estimation analyses reveal that the group of the individuals who do not react to the reform includes the relatively sick. These results are in line with forward-looking behavior: Individuals who are already sick expect that they will hit the kink in the price schedule and thus are less sensitive to the co-payment. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Contractual arrangements and food quality certifications in the Mexican avocado industry

    Directory of Open Access Journals (Sweden)

    J. J. Arana-Coronado

    2013-03-01

Full Text Available The adoption of private quality certifications in agrifood supply chains often requires specific investments by producers, which can be safeguarded by choosing specific contractual arrangements. Based on survey data from avocado producers in Mexico, this paper analyzes the impact of transaction costs and relationship characteristics on the joint choice of contractual arrangements and quality certifications. Using a bivariate probit model, it shows that a producer’s decision to adopt private quality certifications is directly linked to high levels of asset specificity and price. To safeguard this high level of specificity in the presence of low price uncertainty, producers have relied on relational governance supported by the expectation of continuity in their bilateral relationships with buyers.

  19. HIV Testing Among Young People Aged 16-24 in South Africa: Impact of Mass Media Communication Programs.

    Science.gov (United States)

    Do, Mai; Figueroa, Maria Elena; Lawrence Kincaid, D

    2016-09-01

Knowing one's serostatus is critical in the HIV prevention, care and treatment continuum. This study examines the impact of communication programs on HIV testing in South Africa. Data came from 2204 young men and women aged 16-24 who reported being sexually active in a population-based survey. Structural equation modeling was used to test the directions and causal pathways between communication program exposure, HIV testing discussion, and having a test in the last 12 months. Bivariate and multivariate probit regressions provided evidence of exogeneity of communication exposure and the two HIV-related outcomes. One in three sampled individuals had been tested in the last 12 months. Communication program exposure only had an indirect effect on getting tested by encouraging young people to talk about testing. The study suggests that communication programs may create an environment that supports open HIV-related discussions and may have a long-term impact on behavior change.

  20. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE

  1. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    Energy Technology Data Exchange (ETDEWEB)

    Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)

    2015-10-28

A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
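The two-step estimation described in these two records (fit the margins first, then the copula) can be sketched in a rank-based variant: replace each margin by its empirical distribution function and estimate the Gaussian-copula correlation from normal scores. The "diameter" and "height" margins below are simulated stand-ins, not the Lithuanian data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
rho = 0.7  # true Gaussian-copula correlation
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
diameter = np.exp(0.5 * z[:, 0] + 3.0)  # lognormal margin (monotone transform)
height = 20 + 4 * z[:, 1]               # normal margin

def pseudo_uniform(v):
    """Step 1: empirical margins -> pseudo-observations on (0, 1)."""
    ranks = np.argsort(np.argsort(v)) + 1
    return ranks / (len(v) + 1)

# Step 2: Gaussian-copula correlation from the normal scores of the ranks.
s1 = norm.ppf(pseudo_uniform(diameter))
s2 = norm.ppf(pseudo_uniform(height))
rho_hat = np.corrcoef(s1, s2)[0, 1]
```

Because a copula is invariant to monotone transforms of the margins, `rho_hat` recovers the dependence parameter regardless of the lognormal/normal marginal choices made above.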

  2. Hedging effectiveness and volatility models for crude oil market: a dynamic approach; Modelos de volatilidade e a efetividade do hedge no mercado de petroleo: um abordagem dinamica

    Energy Technology Data Exchange (ETDEWEB)

    Salles, Andre Assis de [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2012-07-01

Hedge strategies allow traders holding short and long positions in the market to protect themselves against price fluctuations. This paper examines the performance of bivariate volatility models for the spot and futures returns of West Texas Intermediate (WTI) crude oil barrel prices. Besides the volatility of the spot and futures return series, the hedge ratio strategy is examined through its hedge effectiveness. This study thus presents hedge strategies built using methodologies for modeling the variance of returns of crude oil prices in the spot and futures markets, and the covariance between these two market returns, which are the inputs of the hedge strategy shown in this work. Among the models studied, the bivariate GARCH in diagonal VECH and BEKK representations was chosen, using three different models for the mean: a bivariate autoregressive, a vector autoregressive, and a vector error correction model. The methodologies used here relax the assumptions of homoscedasticity and normality for the return distributions. The data are logarithmic returns of daily prices quoted in dollars per barrel from November 2008 to May 2010 for spot and futures contracts, in particular the June contract. (author)
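The hedge-effectiveness comparison in this record ultimately comes down to a hedge ratio built from estimated variances and a covariance. A static minimum-variance version (the GARCH models in the paper make these moments time-varying) can be sketched with simulated spot and futures returns standing in for the WTI series:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
f = rng.normal(0.0, 0.02, size=n)            # futures returns (simulated)
s = 0.9 * f + rng.normal(0.0, 0.01, size=n)  # spot returns tracking futures

# Minimum-variance hedge ratio: h* = Cov(s, f) / Var(f).
h = np.cov(s, f)[0, 1] / np.var(f, ddof=1)
hedged = s - h * f
# Hedge effectiveness: variance reduction relative to the unhedged position.
effectiveness = 1 - np.var(hedged, ddof=1) / np.var(s, ddof=1)
```

In the dynamic versions studied in the paper, the covariance and variance would come from the fitted bivariate GARCH model at each date, giving a time-varying `h`.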

  3. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    Science.gov (United States)

    Smith, O. E.

    1976-01-01

Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability distribution function. Assuming that the winds can be considered bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
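Property (2) above — wind speed is Rayleigh distributed when the components are bivariate normal with zero means, equal variances, and no correlation — is easy to check by simulation; the sigma and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 5000, 3.0
u = rng.normal(0.0, sigma, size=n)  # zonal wind component
v = rng.normal(0.0, sigma, size=n)  # meridional wind component
speed = np.hypot(u, v)              # modulus of the vector wind

# For a Rayleigh(sigma) variable the mean is sigma * sqrt(pi / 2).
rayleigh_mean = sigma * np.sqrt(np.pi / 2)
```

The sample mean of `speed` should sit close to `rayleigh_mean`; correlated or unequal-variance components would instead follow the more general distributions the report derives.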

  4. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

Passengers’ behavioural intentions after experiencing transit services can be viewed as signals of whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given the way passengers’ behavioural intentions, service quality, and customer satisfaction are evaluated, we argue that this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted to collect passengers’ opinions about the existing transit service and to measure the aspects that can influence users’ intentions to continue using the transit service in the future. (Author)
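An ordered probit like the one proposed here models an ordinal response (e.g. a behavioural-intention rating) through thresholds on a latent scale. A self-contained sketch on simulated data — the coefficient, cutpoints, and three-category response below are invented for illustration, not taken from the Seville survey:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
ystar = 1.0 * x + rng.normal(size=n)  # latent propensity, true beta = 1.0
y = np.digitize(ystar, [-0.5, 0.8])   # ordinal response: 0, 1, 2

def nll(params):
    """Negative log-likelihood; the second cutpoint is c1 + exp(dc) > c1."""
    b, c1, dc = params
    cuts = np.array([c1, c1 + np.exp(dc)])
    upper = np.append(cuts, np.inf)[y]
    lower = np.insert(cuts, 0, -np.inf)[y]
    p = norm.cdf(upper - b * x) - norm.cdf(lower - b * x)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(nll, x0=[0.5, -1.0, 0.0], method="BFGS")
b_hat = res.x[0]  # should land near the true beta of 1.0
```

The `exp(dc)` reparameterization keeps the thresholds ordered during optimization, a standard trick for ordered-response likelihoods.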

  5. A Heckman selection model for the safety analysis of signalized intersections.

    Directory of Open Access Journals (Sweden)

    Xuecai Xu

Full Text Available The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. This study explores a Heckman selection model of the crash rate and severity at different levels, using a two-step procedure. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on traffic flow, geometric road design, road environment, traffic control, and any crashes that occurred during a two-year period. The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative for evaluating safety performance at signalized intersections.
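The two-step procedure described in this abstract follows the classic Heckman recipe: a probit for the selection process, then a regression augmented with the inverse Mills ratio. A compact simulated sketch (all coefficients and variable names are illustrative only, not the Hong Kong data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 3000
z = rng.normal(size=n)  # selection covariate (exclusion restriction)
x = rng.normal(size=n)  # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
selected = 0.3 + 1.0 * z + u[:, 0] > 0   # outcome observed only if True
y = 2.0 + 1.5 * x + u[:, 1]              # true slope = 1.5

# Step 1: probit for the selection equation, by maximum likelihood.
def probit_nll(p):
    xb = p[0] + p[1] * z
    return -np.where(selected, norm.logcdf(xb), norm.logcdf(-xb)).sum()

g = minimize(probit_nll, x0=[0.0, 0.5], method="BFGS").x
xb = g[0] + g[1] * z
mills = norm.pdf(xb) / norm.cdf(xb)      # inverse Mills ratio

# Step 2: OLS on the selected sample with the Mills-ratio correction term.
X = np.column_stack([np.ones(selected.sum()), x[selected], mills[selected]])
coef, *_ = np.linalg.lstsq(X, y[selected], rcond=None)
b_hat = coef[1]    # slope on x
lam_hat = coef[2]  # estimates rho * sigma (0.5 in this simulation)
```

A significant coefficient on the Mills-ratio term is the usual evidence that selection matters, which is the rationale for modeling crash rates separately by severity level in the paper.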

  6. Modeling marrow damage from response data: Morphallaxis from radiation biology to benzene toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Jones, T.D.; Morris, M.D.; Hasan, J.S.

    1995-12-01

Consensus principles from radiation biology were used to describe a generic set of nonlinear, first-order differential equations for modeling of toxicity-induced compensatory cell kinetics in terms of sublethal injury, repair, direct killing, killing of cells with unrepaired sublethal injury, and repopulation. This cellular model was linked to a probit model of hematopoietic mortality that describes death from infection and/or hemorrhage between approximately 5 and 30 days. Mortality data from 27 experiments with 851 dose-response groups, in which doses were protracted by rate and/or fractionation, were used to simultaneously estimate all rate constants by maximum-likelihood methods. Data used represented 18,940 test animals distributed according to: (mice, 12,827); (rats, 2,925); (sheep, 1,676); (swine, 829); (dogs, 479); and (burros, 204). Although a long-term, repopulating hematopoietic stem cell is ancestral to all lineages needed to restore normal homeostasis, the dose-response data from the protracted irradiations indicate clearly that the particular lineage that is "critical" to hematopoietic recovery does not resemble stem-like cells with regard to radiosensitivity and repopulation rates. Instead, the weakest link in the chain of hematopoiesis was found to have an intrinsic radioresistance equal to or greater than stromal cells and to repopulate at the same rates. Model validation has been achieved by predicting the LD50 and/or fractional group mortality in 38 protracted-dose experiments (rats and mice) that were not used in the fitting of model coefficients.

  7. Modeling marrow damage from response data: Morphallaxis from radiation biology to benzene toxicity

    International Nuclear Information System (INIS)

    Jones, T.D.; Morris, M.D.; Hasan, J.S.

    1995-01-01

Consensus principles from radiation biology were used to describe a generic set of nonlinear, first-order differential equations for modeling of toxicity-induced compensatory cell kinetics in terms of sublethal injury, repair, direct killing, killing of cells with unrepaired sublethal injury, and repopulation. This cellular model was linked to a probit model of hematopoietic mortality that describes death from infection and/or hemorrhage between approximately 5 and 30 days. Mortality data from 27 experiments with 851 dose-response groups, in which doses were protracted by rate and/or fractionation, were used to simultaneously estimate all rate constants by maximum-likelihood methods. Data used represented 18,940 test animals distributed according to: (mice, 12,827); (rats, 2,925); (sheep, 1,676); (swine, 829); (dogs, 479); and (burros, 204). Although a long-term, repopulating hematopoietic stem cell is ancestral to all lineages needed to restore normal homeostasis, the dose-response data from the protracted irradiations indicate clearly that the particular lineage that is "critical" to hematopoietic recovery does not resemble stem-like cells with regard to radiosensitivity and repopulation rates. Instead, the weakest link in the chain of hematopoiesis was found to have an intrinsic radioresistance equal to or greater than stromal cells and to repopulate at the same rates. Model validation has been achieved by predicting the LD50 and/or fractional group mortality in 38 protracted-dose experiments (rats and mice) that were not used in the fitting of model coefficients
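The probit mortality model at the core of these two records maps dose to a death probability through the normal CDF, with the LD50 recovered from the fitted coefficients. A toy version on simulated dose groups (the doses, group size, and true LD50 of 4.0 are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)
doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
n_per = 200  # animals per dose group (simulated)
p_true = norm.cdf(1.2 * (doses - 4.0))  # true probit curve, LD50 = 4.0
deaths = rng.binomial(n_per, p_true)

def nll(params):
    """Binomial negative log-likelihood of the probit dose-response model."""
    a, b = params
    p = np.clip(norm.cdf(a + b * doses), 1e-9, 1 - 1e-9)
    return -(deaths * np.log(p) + (n_per - deaths) * np.log(1 - p)).sum()

a_hat, b_hat = minimize(nll, x0=[0.0, 1.0], method="BFGS").x
ld50 = -a_hat / b_hat  # dose at which predicted mortality reaches 50%
```

In the papers the probit component is coupled to the cell-kinetics differential equations; the sketch above shows only the dose-response layer that yields the LD50 predictions used for validation.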

  8. Behavior genetic modeling of human fertility

    DEFF Research Database (Denmark)

    Rodgers, J L; Kohler, H P; Kyvik, K O

    2001-01-01

Behavior genetic designs and analysis can be used to address issues of central importance to demography. We use this methodology to document genetic influence on human fertility. Our data come from Danish twin pairs born from 1953 to 1959, measured on age at first attempt to get pregnant (FirstTry) and number of children (NumCh). Behavior genetic models were fitted using structural equation modeling and DF analysis. A consistent medium-level additive genetic influence was found for NumCh, equal across genders; a stronger genetic influence was identified for FirstTry, greater for females than for males. A bivariate analysis indicated significant shared genetic variance between NumCh and FirstTry.

  9. Risk factors associated with the practice of child marriage among Roma girls in Serbia.

    Science.gov (United States)

    Hotchkiss, David R; Godha, Deepali; Gage, Anastasia J; Cappa, Claudia

    2016-02-01

Relatively little research on the issue of child marriage has been conducted in European countries where the overall prevalence of child marriage is relatively low, but relatively high among marginalized ethnic sub-groups. The purpose of this study is to assess the risk factors associated with the practice of child marriage among females living in Roma settlements in Serbia and among the general population and to explore the inter-relationship between child marriage and school enrollment decisions. The study is based on data from a nationally representative household survey in Serbia conducted in 2010 and a separate survey of households living in Roma settlements in the same year. For each survey, we estimated a bivariate probit model of risk factors associated with being currently married and currently enrolled in school, based on girls 15 to 17 years of age in the nationally representative and Roma settlement samples. The practice of child marriage among the Roma was found to be most common among girls who lived in poorer households, who had less education, and who lived in rural locations. The results of the bivariate probit analysis suggest that, among girls in the general population, decisions about child marriage and school attendance are inter-dependent in that common unobserved factors were found to influence both decisions. However, among girls living in Roma settlements, there is only weak evidence of simultaneous decision making. The study finds evidence of the interdependence between marriage and school enrollment decisions among the general population and, to a lesser extent, among the Roma. Further research is needed on child marriage among the Roma and other marginalized sub-groups in Europe, and should be based on panel data, combined with qualitative data, to assess the role of community-level factors and the characteristics of households where girls grow up on child marriage and education decisions.

  10. A Vector Autoregressive Model for Electricity Prices Subject to Long Memory and Regime Switching

    DEFF Research Database (Denmark)

    Haldrup, Niels; Nielsen, Frank; Nielsen, Morten Ørregaard

    2007-01-01

A regime-dependent VAR model is suggested that allows long memory (fractional integration) in each of the regime states as well as the possibility of fractional cointegration. The model is relevant in describing the price dynamics of electricity prices where the transmission of power is subject to occasional congestion periods. For a system of bilateral prices, non-congestion means that electricity prices are identical, whereas congestion makes prices depart. Hence, the joint price dynamics implies switching between essentially a univariate price process under non-congestion and a bivariate price…

  11. Modeling Fuel Choice among Households in Northern Cameroon

    Directory of Open Access Journals (Sweden)

    Jean Hugues Nlom

    2015-07-01

Full Text Available The present study aims to explore economic and socio-demographic factors that influence a household’s probability of switching from firewood to cleaner fuels (kerosene and LPG) in northern Cameroon. The paper employs an ordered probit model to construct cooking patterns and fuel choices. Three main cooking sources are considered: firewood, kerosene, and liquefied petroleum gas. Utilized data are derived from a national survey conducted in 2004 by the Cameroonian National Institute of Statistics. The study analyzes the data related to the Sudano-Sahelian agro-ecological zone, which is one of the most affected by land degradation and desertification. While results indicate that there is a potential for a transition from traditional to cleaner fuels in the studied region, this transition is still in its earlier stage. The research demonstrates that firewood and kerosene prices, age of household heads, educational level of household heads and willingness to have a gas cylinder, as well as type of dwelling have a statistically significant impact on fuel-switching decisions.

  12. A latent process model for forecasting multiple time series in environmental public health surveillance.

    Science.gov (United States)

    Morrison, Kathryn T; Shaddick, Gavin; Henderson, Sarah B; Buckeridge, David L

    2016-08-15

    This paper outlines a latent process model for forecasting multiple health outcomes arising from a common environmental exposure. Traditionally, surveillance models in environmental health do not link health outcome measures, such as morbidity or mortality counts, to measures of exposure, such as air pollution. Moreover, different measures of health outcomes are treated as independent, while it is known that they are correlated with one another over time as they arise in part from a common underlying exposure. We propose modelling an environmental exposure as a latent process, and we describe the implementation of such a model within a hierarchical Bayesian framework and its efficient computation using integrated nested Laplace approximations. Through a simulation study, we compare distinct univariate models for each health outcome with a bivariate approach. The bivariate model outperforms the univariate models in bias and coverage of parameter estimation, in forecast accuracy and in computational efficiency. The methods are illustrated with a case study using healthcare utilization and air pollution data from British Columbia, Canada, 2003-2011, where seasonal wildfires produce high levels of air pollution, significantly impacting population health. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Models and analysis for multivariate failure time data

    Science.gov (United States)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the …
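The two-stage idea (fix the margins first, then estimate the dependence) has a simple moment-based analogue for one popular copula family: for the Clayton copula, Kendall's tau equals θ/(θ+2), and tau is rank-based, so the marginal transformations drop out. The sketch below assumes complete (uncensored) data and is a simplified stand-in for the likelihood-based two-stage procedures studied in the dissertation.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Simulate a Clayton-dependent pair via the Marshall-Olkin construction:
# V ~ Gamma(1/theta), U_i = (1 + E_i / V)^(-1/theta) with E_i ~ Exp(1).
theta_true = 2.0
v = rng.gamma(1.0 / theta_true, 1.0, 5000)
e = rng.exponential(size=(5000, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta_true)

# "Stage 2": invert tau = theta / (theta + 2) for the Clayton family.
# Margins never enter, since Kendall's tau depends only on the ranks.
tau, _ = kendalltau(u[:, 0], u[:, 1])
theta_hat = 2.0 * tau / (1.0 - tau)
print(round(theta_hat, 2))  # close to theta_true = 2.0
```

With censored margins one would replace the ranks with Kaplan-Meier-based pseudo-observations, which is the spirit of the third (partially parametric) procedure described above.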

  14. Probability Model for Data Redundancy Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2009-01-01

    Full Text Available Sensor networks are made of autonomous devices that are able to collect, store, process and share data with other devices. Large sensor networks are often redundant in the sense that the measurements of some nodes can be substituted by other nodes with a certain degree of confidence. This spatial correlation results in wastage of link bandwidth and energy. In this paper, a model for two associated Poisson processes, through which sensors are distributed in a plane, is derived. A probability condition is established for data redundancy among closely located sensor nodes. The model generates a spatial bivariate Poisson process whose parameters depend on the parameters of the two individual Poisson processes and on the distance between the associated points. The proposed model helps in building efficient algorithms for data dissemination in the sensor network. A numerical example is provided investigating the advantage of this model.
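A standard way to obtain the kind of associated Poisson pair described above is a common-shock construction; the sketch below is a generic illustration of that device, not the paper's exact spatial model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Bivariate Poisson counts via a shared component: X = A + C, Y = B + C,
# with A ~ Poisson(lam1), B ~ Poisson(lam2), C ~ Poisson(lam12).
# Then Cov(X, Y) = lam12, so lam12 tunes the redundancy between two nodes.
lam1, lam2, lam12 = 3.0, 2.0, 1.5
n = 20000
a = rng.poisson(lam1, n)
b = rng.poisson(lam2, n)
c = rng.poisson(lam12, n)
x, y = a + c, b + c

print(round(np.cov(x, y)[0, 1], 2))  # near lam12 = 1.5
```

In the paper's setting the parameters of the joint process would additionally depend on the distance between the associated sensor locations; here they are fixed constants for illustration.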

  15. Retrieving unobserved consideration sets from household panel data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans); M. Wedel (Michel)

    2005-01-01

    textabstractWe propose a new model to describe consideration, consisting of a multivariate probit model component for consideration and a multinomial probit model component for choice, given consideration. The approach allows one to analyze stated consideration set data, revealed consideration set …

  16. Modeling and forecasting petroleum futures volatility

    International Nuclear Information System (INIS)

    Sadorsky, Perry

    2006-01-01

    Forecasts of oil price volatility are important inputs into macroeconometric models, financial market risk assessment calculations like value at risk, and option pricing formulas for futures contracts. This paper uses several different univariate and multivariate statistical models to estimate forecasts of daily volatility in petroleum futures price returns. The out-of-sample forecasts are evaluated using forecast accuracy tests and market timing tests. The TGARCH model fits well for heating oil and natural gas volatility and the GARCH model fits well for crude oil and unleaded gasoline volatility. Simple moving average models seem to fit well in some cases provided the correct order is chosen. Despite the increased complexity, models like state space, vector autoregression and bivariate GARCH do not perform as well as the single equation GARCH model. Most models outperform a random walk and there is evidence of market timing. Parametric and non-parametric value at risk measures are calculated and compared. Non-parametric models outperform the parametric models in terms of the number of exceedances in backtests. These results are useful for anyone needing forecasts of petroleum futures volatility. (author)
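A minimal GARCH(1,1) variance recursion and multi-step forecast, of the kind compared in the paper, can be written directly in NumPy; the parameter values and the simulated return series below are placeholders, not estimates from the petroleum data.

```python
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # a common initialisation choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_forecast(r_last, sigma2_last, omega, alpha, beta, horizon):
    """Multi-step variance forecasts, mean-reverting toward omega / (1 - alpha - beta)."""
    f = np.empty(horizon)
    f[0] = omega + alpha * r_last ** 2 + beta * sigma2_last
    for h in range(1, horizon):
        f[h] = omega + (alpha + beta) * f[h - 1]
    return f

rng = np.random.default_rng(1)
r = rng.normal(0, 0.01, 500)  # placeholder daily return series
s2 = garch11_filter(r, omega=1e-6, alpha=0.05, beta=0.90)
fc = garch11_forecast(r[-1], s2[-1], 1e-6, 0.05, 0.90, horizon=5)
```

A TGARCH variant would add an extra term for negative shocks in the recursion; the forecast logic is otherwise the same.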

  17. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

    Full Text Available The purpose of this article is to present a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, some influencing factors are identified using statistical and econometric models. The main approach concerns the application of probit and logit models in loan management institutions, giving a new aspect to credit risk analysis. Calculating the probability of returning a loan is a difficult task. We assume that data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable “the probability of returning a loan”. It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year and decreases for people born at the end of the year.
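The binary logit underlying this kind of analysis can be fit by Newton-Raphson (equivalently, iteratively reweighted least squares). The sketch below uses simulated data with a single hypothetical regressor standing in for the standardised contract sum; it is not the article's dataset or specification.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson for logistic regression (a.k.a. IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted repayment probability
        W = p * (1.0 - p)                        # observation weights
        grad = X.T @ (y - p)                     # score vector
        hess = (X * W[:, None]).T @ X            # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(7)
n = 2000
loan_sum = rng.normal(size=n)                    # hypothetical standardised contract sum
X = np.column_stack([np.ones(n), loan_sum])
true_beta = np.array([0.5, 1.0])                 # repayment odds rise with the sum
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

beta_hat = fit_logit(X, y)
```

A probit fit differs only in replacing the logistic CDF (and the corresponding weights) with the normal CDF; the Newton iteration is structurally identical.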

  18. A Capacity-Restraint Transit Assignment Model When a Predetermination Method Indicates the Invalidity of Time Independence

    Directory of Open Access Journals (Sweden)

    Haoyang Ding

    2015-01-01

    Full Text Available The statistical independence of the travel times of every two adjacent bus links plays a crucial role in deciding the feasibility of using many mathematical models to analyze urban transit networks. Traditional research generally ignores this time independence, even though it underpins the models in question. The assumption is usually made that the time independence of every two adjacent links holds. This is, however, actually groundless and may lead corresponding models to problematic conclusions. Many transit assignment models, such as multinomial probit-based models, lose their validity when time independence does not hold. In this paper, a simple method to predetermine time independence is proposed. Based on the predetermination method, a modified capacity-restraint transit assignment method aimed at engineering practice is put forward and tested on a small contrived network and in a case study in Nanjing, China, respectively. It is found that the slope of the regression equation between the mean and standard deviation of the normal distribution also acts as an indicator of time independence. Moreover, the modified assignment method performs better than the traditional one, yielding more reasonable results while remaining simple.

  19. Modelos de predição para sobrevivência de plantas de Eucalyptus grandis Prediction models of Eucalyptus grandis plant survival

    Directory of Open Access Journals (Sweden)

    Telde Natel Custódio

    2009-01-01

    Full Text Available The objective of this work was to compare models for predicting the survival of Eucalyptus grandis plants. The following models were used: a linear mixed model on transformed data, using the angular and Box-Cox transformations; a generalized linear mixed model with binomial distribution and logistic, probit and complementary log-log link functions; and a generalized linear mixed model with Poisson distribution and logarithmic link function. The data came from a randomized block experiment for the evaluation of maternal progenies of Eucalyptus grandis at five years of age, in which the response variable is the number of surviving plants. To compare effects across models, Spearman correlations were estimated and Fisher's permutation test was applied. It was concluded that the generalized linear mixed model with Poisson distribution and logarithmic link function fitted the data poorly, and that the estimates of the fixed effects and the predictions of the random effects did not differ among the other models studied.

  20. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    Science.gov (United States)

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  1. Application of Hierarchical Linear Models

    African Journals Online (AJOL)

    Erna Kinsey

    The only drawback of applying HLMs is ... structure of the data because it dictates the statistical techniques ..... either through visual inspection of histograms or frequency ... Bivariate/multivariate data cleaning procedures can also be important ...

  2. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  3. Asymptotic analysis for a simple explicit estimator in Barndorff-Nielsen and Shephard stochastic volatility models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Posedel, Petra

    … expressions for the asymptotic covariance matrix. We develop in detail the martingale estimating function approach for a bivariate model that is not a diffusion but admits jumps. We do not use ergodicity arguments. We assume that both logarithmic returns and instantaneous variance are observed … on a discrete grid of fixed width, and the observation horizon tends to infinity. This analysis is a starting point and benchmark for further developments concerning optimal martingale estimating functions, and for theoretical and empirical investigations that replace the (actually unobserved) variance process …

  4. On the application of copula in modeling maintenance contract

    International Nuclear Information System (INIS)

    Iskandar, B P; Husniah, H

    2016-01-01

    This paper deals with the application of copulas in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, it is rectified by minimal repair (corrective maintenance, CM). CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation. (paper)

  5. Comparative analysis of informal borrowing behaviour between ...

    African Journals Online (AJOL)

    Tools of analysis were descriptive statistics (means and percentages) and a probit model. The result of the probit model on the variables influencing the borrowing behaviour of male-headed households indicated that the coefficients of household size, farm size, purpose of borrowing, loan duration, interest rate and collateral ...

  6. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected …

  7. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    Science.gov (United States)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models, such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have been suggesting that the perceptions of attributes are characterized by benchmarks called “reference points” and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on the mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and then the roles of reference points in travel mode choice contexts were investigated. The result showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points might play a major role in the choice of travel mode. It was also shown that the respondents seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others rely on the fare level they perceive for the travel cost.

  8. Investigating the adaptive model of thermal comfort for naturally ventilated school buildings in Taiwan

    Science.gov (United States)

    Hwang, Ruey-Lung; Lin, Tzu-Ping; Chen, Chen-Peng; Kuo, Nai-Jung

    2009-03-01

    Divergence in the acceptability of naturally ventilated thermal environments to people in different regions raises a concern over the extent to which the ASHRAE Standard 55 may be applied as a universal criterion of thermal comfort. In this study, the ASHRAE 55 adaptive model of thermal comfort was investigated for its applicability to a hot and humid climate through a long-term field survey performed in central Taiwan among local students attending 14 elementary and high schools from September to January. Adaptive behaviors, thermal neutrality, and thermal comfort zones are explored. A probit analysis of the students' thermal acceptability responses was performed in place of the conventional linear regression of thermal sensation votes against operative temperature to investigate the limits of the comfort zones for 90% and 80% acceptability; the corresponding comfort zones were found to be 20.1-28.4°C and 17.6-30.0°C, respectively. In comparison with the yearly comfort zones recommended by the adaptive model for naturally ventilated spaces in the ASHRAE Standard 55, those observed in this study differ in the lower limit for 80% acceptability, with the observed level being 1.7°C lower than the ASHRAE-recommended value. These findings can be generalized to the population of school children, thus providing information that can supplement ASHRAE Standard 55 in evaluating the thermal performance of naturally ventilated school buildings, particularly in hot-humid areas such as Taiwan.
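The probit analysis of acceptability responses can be illustrated as follows: if acceptability is modeled as Φ(b0 + b1·T + b2·T²) with b2 < 0, the comfort band for a target acceptability level is found by solving a quadratic in T. The coefficients below are invented for illustration and only roughly mimic the kind of band reported in the study.

```python
import numpy as np
from scipy.stats import norm

def comfort_band(b0, b1, b2, target):
    """Solve Phi(b0 + b1*T + b2*T**2) = target for T (probit link, b2 < 0),
    returning the lower and upper temperature limits of the band."""
    z = norm.ppf(target)
    roots = np.roots([b2, b1, b0 - z])
    return np.sort(roots.real)

# Illustrative coefficients chosen so acceptability peaks near 24 degC
b0, b1, b2 = -13.0, 1.2, -0.025
lo, hi = comfort_band(b0, b1, b2, target=0.80)
```

With these made-up coefficients the 80% band comes out roughly 19-29°C; wider bands correspond to lower target acceptability, exactly as with the 90% vs. 80% zones quoted above.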

  9. Cumulative t-link threshold models for the genetic analysis of calving ease scores

    Directory of Open Access Journals (Sweden)

    Tempelman Robert J

    2003-09-01

    Full Text Available Abstract In this study, a hierarchical threshold mixed model based on a cumulative t-link specification for the analysis of ordinal data or, more specifically, calving ease scores, was developed. The validation of this model and the Markov chain Monte Carlo (MCMC) algorithm was carried out on simulated data from normally and t4 (i.e., a t-distribution with four degrees of freedom) distributed populations, using the deviance information criterion (DIC) and a pseudo Bayes factor (PBF) measure to validate recently proposed model choice criteria. The simulation study indicated that although inference on the degrees of freedom parameter is possible, MCMC mixing was problematic. Nevertheless, the DIC and PBF were validated to be satisfactory measures of model fit to data. A sire and maternal grandsire cumulative t-link model was applied to a calving ease dataset from 8847 Italian Piemontese first parity dams. The cumulative t-link model was shown to lead to posterior means of direct and maternal heritabilities (0.40 ± 0.06, 0.11 ± 0.04) and a direct-maternal genetic correlation (-0.58 ± 0.15) that were not different from the corresponding posterior means of the heritabilities (0.42 ± 0.07, 0.14 ± 0.04) and the genetic correlation (-0.55 ± 0.14) inferred under the conventional cumulative probit link threshold model. Furthermore, the correlation (> 0.99) between posterior means of sire progeny merit from the two models suggested no meaningful rerankings. Nevertheless, the cumulative t-link model was decisively chosen as the better fitting model for this calving ease data using DIC and PBF.
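The difference between the cumulative probit and the cumulative t-link is only the link CDF; with heavier tails, a t4 link shifts probability mass toward the extreme score categories. The thresholds and linear predictor below are illustrative, not estimates from the Piemontese data.

```python
import numpy as np
from scipy.stats import norm, t

def cumulative_link_probs(eta, cutpoints, link_cdf):
    """P(y = k) = F(c_k - eta) - F(c_{k-1} - eta) for a cumulative link model."""
    cuts = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    return np.diff(link_cdf(cuts - eta))

cut = [0.0, 1.0, 2.0]          # illustrative thresholds for four calving ease scores
eta = 0.5                      # illustrative linear predictor (e.g., sire effect)
p_probit = cumulative_link_probs(eta, cut, norm.cdf)
p_t4 = cumulative_link_probs(eta, cut, lambda x: t.cdf(x, df=4))
```

Comparing the two vectors shows the t4 link placing more mass on the first and last score categories than the probit link, which is why the heavier-tailed link can fit data with occasional extreme scores better.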

  10. A semiparametric model of household gasoline demand

    Energy Technology Data Exchange (ETDEWEB)

    Wadud, Zia [Department of Civil Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000 (Bangladesh); Noland, Robert B. [Alan M. Voorhees Transportation Center, Edward J. Bloustein School of Planning and Public Policy, Rutgers University, New Brunswick, NJ 08901 (United States); Graham, Daniel J. [Centre for Transport Studies, Dept of Civil and Environmental Engineering, Imperial College London, London, SW7 2AZ (United Kingdom)

    2010-01-15

    Gasoline demand studies typically generate a single price and income elasticity for a country. It is however possible that these elasticities may differ among various socio-economic groups. At the same time, parametric gasoline demand models may not be flexible enough to capture the changes in price elasticities with different levels of income. This paper models US gasoline demand using more flexible semiparametric techniques, accommodating the possibility of differences in responses among households. The econometric model employs a non-parametric bivariate smoothing for price and income and a parametric representation of other explanatory variables. Possible heterogeneity in price and income elasticities is modelled through interacting price and income with demographic variables. Results show that price responses do vary with demographic variables such as income, multiple vehicle holding, presence of multiple wage earners or rural or urban residential locations. Households' responses to a price change decrease with higher income. Multiple vehicle and multiple earner households also show higher sensitivity to a price change. Households located in urban areas reduce consumption more than those in rural areas in response to an increase in price. Comparison of the flexible semiparametric model with a parametric translog model, however, reveals no significant differences between results, and the parametric models have the advantage of lower computational requirements and better interpretability. (author)

  11. Assessing Trust and Effectiveness in Virtual Teams: Latent Growth Curve and Latent Change Score Models

    Directory of Open Access Journals (Sweden)

    Michael D. Coovert

    2017-08-01

    Full Text Available Trust plays a central role in the effectiveness of work groups and teams. This is the case for both face-to-face and virtual teams. Yet little is known about the development of trust in virtual teams. We examined cognitive and affective trust and their relationship to team effectiveness as reflected through satisfaction with one’s team and task performance. Latent growth curve analysis reveals both trust types start at a significant level with individual differences in that initial level. Cognitive trust follows a linear growth pattern while affective trust is overall non-linear, but becomes linear once established. Latent change score models are utilized to examine change in trust and also its relationship with satisfaction with the team and team performance. In examining only change in trust and its relationship to satisfaction there appears to be a straightforward influence of trust on satisfaction and satisfaction on trust. However, when incorporated into a bivariate coupling latent change model the dynamics of the relationship are revealed. A similar pattern holds for trust and task performance; however, in the bivariate coupling change model a more parsimonious representation is preferred.

  12. Application of Vine Copulas to Credit Portfolio Risk Modeling

    Directory of Open Access Journals (Sweden)

    Marco Geidosch

    2016-06-01

    Full Text Available In this paper, we demonstrate the superiority of vine copulas over conventional copulas when modeling the dependence structure of a credit portfolio. We show statistical and economic implications of replacing conventional copulas by vine copulas for a subportfolio of the Euro Stoxx 50 and the S&P 500 companies, respectively. Our study includes D-vines and R-vines where the bivariate building blocks are chosen from the Gaussian, the t and the Clayton family. Our findings are: (i) the conventional Gauss copula is deficient in modeling the dependence structure of a credit portfolio and economic capital is seriously underestimated; (ii) D-vine structures offer a better statistical fit to the data than classical copulas, but underestimate economic capital compared to R-vines; (iii) when mixing different copula families in an R-vine structure, the best statistical fit to the data can be achieved, which corresponds to the most reliable estimate for economic capital.
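The deficiency of the Gauss copula noted in finding (i) stems from its lack of tail dependence: the Clayton copula has lower tail dependence coefficient 2^(−1/θ) > 0, while the Gaussian copula's is 0. The sketch below checks this by simulation, matching the two copulas on Kendall's tau; it is a generic illustration, not the paper's portfolio model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, theta, q = 200000, 2.0, 0.01

# Clayton pair via the Marshall-Olkin construction
v = rng.gamma(1.0 / theta, 1.0, n)
e = rng.exponential(size=(n, 2))
uc = (1.0 + e / v[:, None]) ** (-1.0 / theta)

# Gaussian pair matched on Kendall's tau: rho = sin(pi * tau / 2)
tau = theta / (theta + 2.0)
rho = np.sin(np.pi * tau / 2.0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)
ug = norm.cdf(z)

# Empirical P(U1 < q | U2 < q): bounded away from 0 only for the Clayton pair
clayton_tail = np.mean((uc[:, 0] < q) & (uc[:, 1] < q)) / q
gauss_tail = np.mean((ug[:, 0] < q) & (ug[:, 1] < q)) / q
```

For θ = 2 the Clayton conditional tail probability stays near 2^(−1/2) ≈ 0.71 as q shrinks, while the Gaussian one decays toward 0; it is exactly these joint extreme losses that drive the economic capital underestimation reported above.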

  13. Landslide susceptibility mapping in Mawat area, Kurdistan Region, NE Iraq: a comparison of different statistical models

    Science.gov (United States)

    Othman, A. A.; Gloaguen, R.; Andreani, L.; Rahnama, M.

    2015-03-01

    During the last decades, the expansion of settlements into areas prone to landslides in Iraq has increased the importance of accurate hazard assessment. Susceptibility mapping provides information about hazardous locations and thus helps to potentially prevent infrastructure damage due to mass wasting. The aim of this study is to evaluate and compare frequency ratio (FR), weight of evidence (WOE), logistic regression (LR) and probit regression (PR) approaches in combination with new geomorphological indices to determine the landslide susceptibility index (LSI). We tested these four methods in the Mawat area, Kurdistan Region, NE Iraq, where landslides occur frequently. For this purpose, we evaluated 16 geomorphological, geological and environmental predicting factors mainly derived from the advanced spaceborne thermal emission and reflection radiometer (ASTER) satellite. The available reference inventory includes 351 landslides representing a cumulative surface of 3.127 km². This reference inventory was mapped from QuickBird data by manual delineation and partly verified by field survey. The area under the curve (AUC) of the receiver operating characteristic (ROC) and the relative landslide density (R index) show that all models perform similarly and that focus should be put on the careful selection of proxies. The results indicate that the lithology and the slope aspects play major roles in landslide occurrences. Furthermore, this paper demonstrates that using the hypsometric integral as a prediction factor instead of slope curvature gives better results and increases the accuracy of the LSI.
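Of the four methods compared, the frequency ratio is the simplest to state: for each class of a predicting factor, FR is the share of landslide cells in that class divided by the share of all cells in that class, and the susceptibility index sums the FRs across factor maps. A toy sketch with invented raster values:

```python
import numpy as np

def frequency_ratio(class_ids, landslide_mask):
    """FR per class = (% of landslide cells in class) / (% of all cells in class).
    Summing FR values over the factor maps gives the susceptibility index."""
    total, total_ls = class_ids.size, landslide_mask.sum()
    fr = {}
    for c in np.unique(class_ids):
        in_c = class_ids == c
        fr[c] = (landslide_mask[in_c].sum() / total_ls) / (in_c.sum() / total)
    return fr

# Toy factor map: class 1 hosts most landslide cells, so its FR should exceed 1
class_ids = np.array([1, 1, 1, 2, 2, 2, 2, 2])
landslides = np.array([1, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
fr = frequency_ratio(class_ids, landslides)
```

FR > 1 marks a class over-represented among landslide cells; in this toy case class 1 gets FR = (2/3)/(3/8) ≈ 1.78 and class 2 gets FR ≈ 0.53.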

  14. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers’ decisions to provide fittings, based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou, using a probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; and (2) land costs are high, so that the comparative costs of providing fittings become relatively low.

  15. Non-perturbative models of intermittency in drift-wave turbulence: towards a probabilistic theory of anomalous transport

    International Nuclear Information System (INIS)

    Kim, Eun-jin; Diamond, P.H.; Malkov, M.

    2003-01-01

    Two examples of non-perturbative models of intermittency in drift-wave (DW) turbulence are presented. The first is a calculation of the probability distribution function (PDF) of ion heat flux due to structures in ion temperature gradient turbulence. The instanton calculus predicts the PDF to be a stretched exponential. The second is a derivation of a bi-variate Burgers equation for the evolution of the DW population density in the presence of radially extended streamer flows. The PDF of fluctuation intensity avalanches is determined. The relation of this to turbulence spreading, observed in simulations, is discussed. (author)

  16. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN model coupled with a nested grid WAVEWATCH III® (WWIII model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skills in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package’s ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10² km).

  17. How Much Math Do Students Need to Succeed in Business and Economics Statistics? An Ordered Probit Analysis

    Science.gov (United States)

    Green, Jeffrey J.; Stone, Courtenay C.; Zegeye, Abera; Charles, Thomas A.

    2009-01-01

    Because statistical analysis requires the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, many students find it difficult to learn business statistics. In this study, we use an ordered probit…

  18. An anatomic risk model to screen post endovascular aneurysm repair patients for aneurysm sac enlargement.

    Science.gov (United States)

    Png, Chien Yi M; Tadros, Rami O; Beckerman, William E; Han, Daniel K; Tardiff, Melissa L; Torres, Marielle R; Marin, Michael L; Faries, Peter L

    2017-09-01

    Follow-up computed tomography angiography (CTA) scans add considerable postimplantation costs to endovascular aneurysm repairs (EVARs) of abdominal aortic aneurysms (AAAs). By building a risk model, we hope to identify patients at low risk for aneurysm sac enlargement to minimize unnecessary CTAs. 895 consecutive patients who underwent EVAR for AAA were reviewed, of whom 556 met inclusion criteria. A Probit model was created for aneurysm sac enlargement, with preoperative aneurysm morphology, patient demographics, and operative details as variables. Our final model included 287 patients and had a sensitivity of 100%, a specificity of 68.9%, and an accuracy of 70.4%. Ninety-nine patients (35%) were assigned to the high-risk group, whereas 188 (65%) were assigned to the low-risk group. Notably, our model reported that age, pulmonary comorbidities, aortic neck diameter, iliac artery length, and aneurysms were independent predictors of post-EVAR sac enlargement. With the exception of age, all statistically significant variables were qualitatively supported by prior literature. With regard to secondary outcomes, the high-risk group had significantly higher proportions of AAA-related deaths (5.1% versus 1.1%, P = 0.037) and Type 1 endoleaks (9.1% versus 3.2%, P = 0.033). Our model is a decent predictor of patients at low risk for aneurysm sac enlargement and associated complications after EVAR of AAA. With additional validation and refinement, it could be applied in practice to cut down on the overall need for postimplantation CTA. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. THE DETERMINANTS OF INNOVATION IN THE ITALIAN FOOD INDUSTRY: THE ROLE OF R&D NETWORKING

    OpenAIRE

    D'Alessio, Massimiliano; Maietta, Ornella Wanda

    2008-01-01

    The objective of the paper is to identify the determinants of innovation in the Italian food industry and the role that R&D networking, through the cooperative nature of the firm, plays among these determinants. The data used are the 9th (2001-2003) wave of Capitalia surveys based on a representative sample of manufacturing firms with information on firm characteristics, employee education levels, innovation and R&D investments. The approach is a bivariate probit analysis where the two dependen...

  20. Price, tax and tobacco product substitution in Zambia.

    Science.gov (United States)

    Stoklosa, Michal; Goma, Fastone; Nargis, Nigar; Drope, Jeffrey; Chelwa, Grieve; Chisha, Zunda; Fong, Geoffrey T

    2018-03-24

    In Zambia, the number of cigarette users is growing, and the lack of strong tax policies is likely an important cause. When adjusted for inflation, levels of tobacco tax have not changed since 2007. Moreover, roll-your-own (RYO) tobacco, a less-costly alternative to factory-made (FM) cigarettes, is highly prevalent. We modelled the probability of FM and RYO cigarette smoking using individual-level data obtained from the 2012 and 2014 waves of the International Tobacco Control (ITC) Zambia Survey. We used two estimation methods: the standard estimation method involving separate random effects probit models and a method involving a system of equations (incorporating bivariate seemingly unrelated random effects probit) to estimate price elasticities of FM and RYO cigarettes and their cross-price elasticities. The estimated price elasticities of smoking prevalence are -0.20 and -0.03 for FM and RYO cigarettes, respectively. FM and RYO are substitutes; that is, when the price of one of the products goes up, some smokers switch to the other product. The effects are stronger for substitution from FM to RYO than vice versa. This study affirms that increasing cigarette tax with corresponding price increases could significantly reduce cigarette use in Zambia. Furthermore, reducing between-product price differences would reduce substitution from FM to RYO. Since RYO use is associated with lower socioeconomic status, efforts to decrease RYO use, including through tax/price approaches and cessation assistance, would decrease health inequalities in Zambian society and reduce the negative economic consequences of tobacco use experienced by the poor. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
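
    The bivariate (seemingly unrelated) probit used to model joint FM and RYO smoking works with four joint outcome probabilities built from the bivariate normal CDF. A sketch of those cell probabilities, offered as an illustration of the likelihood ingredients rather than the authors' estimation code:

    ```python
    from scipy.stats import multivariate_normal, norm

    def biprobit_cell_probs(xb1, xb2, rho):
        """Joint probabilities of the four (y1, y2) outcomes in a bivariate probit with
        linear indices xb1, xb2 and latent-error correlation rho."""
        p11 = multivariate_normal.cdf([xb1, xb2], mean=[0, 0],
                                      cov=[[1.0, rho], [rho, 1.0]])
        # Flipping the sign of one index flips the sign of the correlation.
        p10 = multivariate_normal.cdf([xb1, -xb2], mean=[0, 0],
                                      cov=[[1.0, -rho], [-rho, 1.0]])
        p01 = multivariate_normal.cdf([-xb1, xb2], mean=[0, 0],
                                      cov=[[1.0, -rho], [-rho, 1.0]])
        p00 = 1.0 - p11 - p10 - p01
        return p11, p10, p01, p00

    # Demo with arbitrary index values and correlation.
    p11, p10, p01, p00 = biprobit_cell_probs(0.3, -0.2, 0.4)
    ```

    The sample log-likelihood is the sum of the log cell probability matching each observation's (y1, y2) pair; with rho = 0 the cells factor into two independent univariate probits.
    
    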

  1. Parametric overdispersed frailty models for current status data.

    Science.gov (United States)

    Abrams, Steven; Aerts, Marc; Molenberghs, Geert; Hens, Niel

    2017-12-01

    Frailty models have a prominent place in survival analysis to model univariate and multivariate time-to-event data, often complicated by the presence of different types of censoring. In recent years, frailty modeling gained popularity in infectious disease epidemiology to quantify unobserved heterogeneity using Type I interval-censored serological data or current status data. In a multivariate setting, frailty models prove useful to assess the association between infection times related to multiple distinct infections acquired by the same individual. In addition to dependence among individual infection times, overdispersion can arise when the observed variability in the data exceeds the one implied by the model. In this article, we discuss parametric overdispersed frailty models for time-to-event data under Type I interval-censoring, building upon the work by Molenberghs et al. (2010) and Hens et al. (2009). The proposed methodology is illustrated using bivariate serological data on hepatitis A and B from Flanders, Belgium anno 1993-1994. Furthermore, the relationship between individual heterogeneity and overdispersion at a stratum-specific level is studied through simulations. Although it is important to account for overdispersion, one should be cautious when modeling both individual heterogeneity and overdispersion based on current status data as model selection is hampered by the loss of information due to censoring. © 2017, The International Biometric Society.

  2. [Offered income, salary expectations, and the economic activity of married women: an analytic model].

    Science.gov (United States)

    Lollivier, S

    1984-06-01

    This study uses data from tax declarations for 40,000 French households for 1975 to propose a model that permits quantification of the effects of certain significant factors on the economic activity of married women. The PROBIT model of analysis of variance was used to determine the specific effect of several variables, including age of the woman, number of children under 25 years of age in the household, the age of the youngest child, husband's income and socioprofessional status, wife's level and type of education, size of community of residence and region of residence. The principal factors influencing activity rates were found to be educational level, age, and number of children. Activity rates of mothers of 1 child were close to those of childless women, but activity rates dropped by about 30% for mothers of 2 children and even more for mothers of 3 or more. The place of residence and the husband's income were associated with lesser disparities. The reasons for variations in female labor force participation can be viewed as analogous to a balance. Underlying factors can increase or decrease the income the woman hopes to earn (offered income) as well as the minimum income for which she will work (required salary). A TOBIT model was constructed in which offered income was a function of age, education, geographic location, and number of children, and required salary was a function of variables related to the husband, including income and socioprofessional status. For most of the effects considered, the observed variation in activity rates resulted from variations in offered income. The husband's income influences only the desired salary. The offered income decreases and the required salary increases when the number of children is 2 or more, reducing the rate of activity. More educated women have slightly greater salary expectations, but command much higher salaries, resulting in an increased rate of professional activity.

  3. Long-term relationships of major macro-variables in a resource-related economic model of Australia

    International Nuclear Information System (INIS)

    Harvie, Charles; Hoa, T. van

    1993-01-01

    The paper reports the results of a simple cointegration analysis applied to bivariate causality models using data on resource output, oil prices, terms of trade, current account and output growth to investigate the long-term relationships among these major macroeconomic aggregates in a resource-related economic model of Australia. For the period 1960-1990, the empirical evidence indicates that these five macro-variables, as formulated in our model, are not random walks. In addition, resource production and oil prices are significantly cointegrated, and they are also significantly cointegrated with the current account, terms of trade and economic growth. These findings provide support to the long-term adjustments foundation of our resource-related model. (author)

  4. Where does “whichever occurs first” hold for preventive maintenance modelings?

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Liu, Hu-Chen; Nakagawa, Toshio

    2015-01-01

    The purpose of this paper is to observe where the classical assumption “whichever occurs first” holds for preventive maintenance (PM) modelings. We firstly take up a bivariate maintenance policy where “whichever occurs first” and the newly proposed “whichever occurs last” are respectively used. Modification of PM performance is introduced into modelings to avoid interruptions of job executions, that is, PMs are done only at the end of working cycles. From the points of performability and maintenance cost, we secondly compare the optimized “first” and “last” policies in detail and find two critical points of comparisons analytically. Further, by comparing the “first” and “last” policies with the standard maintenance, modified PM costs are obtained to observe whether it is easy to save PM cost for “whichever occurs first”. For a trivariate maintenance policy, we thirdly propose an entirely new assumption “whichever occurs middle” and give another model that considers both assumptions of “first” and “last”. We analyze maintenance probabilities for each model and then obtain their expected maintenance cost rates directly for further studies. - Highlights: • A bivariate maintenance policy based on “whichever occurs first” is improved. • Two comparisons of “whichever occurs first and last” are made. • Modified maintenance costs are obtained to observe which policy could save more costs. • New assumption “whichever occurs middle” for the trivariate maintenances is proposed. • One policy is modeled by considering both assumptions of “first” and “last”

  5. INTER-TEMPORAL ANALYSIS OF HOUSEHOLD CAR AND MOTORCYCLE OWNERSHIP BEHAVIORS

    Directory of Open Access Journals (Sweden)

    Nobuhiro SANKO

    2009-01-01

    Full Text Available This study investigates household car and motorcycle ownership in the Nagoya metropolitan area of Japan. Bivariate ordered probit models of household vehicle ownership were developed using the data from the case study area at three time points, 1981, 1991, and 2001. Accessibility, which is generally known to be correlated with vehicle ownership decisions, is incorporated as an input for the proposed vehicle ownership model to investigate the potential relationship between them. The mode choice models for the area were first estimated to quantify the accessibility indexes that were later integrated into the vehicle ownership models. Inter-temporal comparison and temporal transferability analysis were conducted. Some of the major findings suggest: (1) that age and gender differences have become less important in modal choices and car ownership as motorization proceeds; (2) that accessibility seems to have a significant correlation with vehicle ownership; (3) that car and motorcycle ownership may not be independent and may have a complementary relationship; and (4) that deep insights concerning model selection are obtained from the viewpoint of temporal transferability.
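
    The bivariate ordered probit used here builds on the univariate ordered-probit cell probabilities. A sketch of that building block, with hypothetical cut points and ownership levels 0, 1 and 2+:

    ```python
    import numpy as np
    from scipy.stats import norm

    def ordered_probit_probs(xb, cuts=(-0.5, 0.7)):
        """Ordered probit cell probabilities: the latent index y* = xb + N(0, 1) falls
        between consecutive cut points; cuts here are hypothetical, not estimated."""
        c = np.concatenate(([-np.inf], cuts, [np.inf]))
        return np.diff(norm.cdf(c - xb))

    # Probabilities of owning 0, 1, or 2+ vehicles at an arbitrary index value.
    probs = ordered_probit_probs(0.4)
    ```

    The bivariate version in the paper ties two such ordered outcomes (cars and motorcycles) together through correlated latent errors, analogous to the bivariate probit elsewhere in this listing.
    
    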

  6. Health insurance for the poor: impact on catastrophic and out-of-pocket health expenditures in Mexico.

    Science.gov (United States)

    Galárraga, Omar; Sosa-Rubí, Sandra G; Salinas-Rodríguez, Aarón; Sesma-Vázquez, Sergio

    2010-10-01

    The goal of Seguro Popular (SP) in Mexico was to improve the financial protection of the uninsured population against excessive health expenditures. This paper estimates the impact of SP on catastrophic health expenditures (CHE), as well as out-of-pocket (OOP) health expenditures, from two different sources. First, we use the SP Impact Evaluation Survey (2005-2006), and compare the instrumental variables (IV) results with the experimental benchmark. Then, we use the same IV methods with the National Health and Nutrition Survey (ENSANUT 2006). We estimate naïve models, assuming exogeneity, and contrast them with IV models that take advantage of the specific SP implementation mechanisms for identification. The IV models estimated included two-stage least squares (2SLS), bivariate probit, and two-stage residual inclusion (2SRI) models. Instrumental variables estimation produced estimates comparable to the experimental "gold standard." The IV estimates indicate a reduction of 54% in catastrophic expenditures at the national level. SP beneficiaries also had lower outpatient and medicine expenditures. The selection-corrected protective effect is found not only in the limited experimental dataset, but also at the national level.
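
    The 2SLS estimator listed among the IV models can be sketched in a few lines. The instrument, coefficients and data below are all hypothetical, not drawn from the SP evaluation; the point is only that instrumenting a confounded take-up variable recovers the causal effect where naive OLS does not:

    ```python
    import numpy as np

    # Hypothetical setup: instrument z shifts insurance take-up d but affects
    # spending y only through d; u is an unobserved confounder.
    rng = np.random.default_rng(2)
    n = 40000
    z = rng.binomial(1, 0.5, n).astype(float)
    u = rng.normal(size=n)
    d = (0.3 + 1.0 * z + u + rng.normal(size=n) > 0.8).astype(float)
    y = 2.0 - 1.0 * d + u + rng.normal(size=n)        # true causal effect of d: -1.0

    Z = np.column_stack([np.ones(n), z])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # first stage: project d on z
    Dh = np.column_stack([np.ones(n), d_hat])
    beta_iv = np.linalg.lstsq(Dh, y, rcond=None)[0]   # second stage

    D = np.column_stack([np.ones(n), d])
    beta_ols = np.linalg.lstsq(D, y, rcond=None)[0]   # naive OLS, biased upward by u
    ```

    In practice second-stage standard errors must be corrected for the generated regressor, which packaged 2SLS routines handle automatically.
    
    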

  7. Integrative modelling reveals mechanisms linking productivity and plant species richness.

    Science.gov (United States)

    Grace, James B; Anderson, T Michael; Seabloom, Eric W; Borer, Elizabeth T; Adler, Peter B; Harpole, W Stanley; Hautier, Yann; Hillebrand, Helmut; Lind, Eric M; Pärtel, Meelis; Bakker, Jonathan D; Buckley, Yvonne M; Crawley, Michael J; Damschen, Ellen I; Davies, Kendi F; Fay, Philip A; Firn, Jennifer; Gruner, Daniel S; Hector, Andy; Knops, Johannes M H; MacDougall, Andrew S; Melbourne, Brett A; Morgan, John W; Orrock, John L; Prober, Suzanne M; Smith, Melinda D

    2016-01-21

    How ecosystem productivity and species richness are interrelated is one of the most debated subjects in the history of ecology. Decades of intensive study have yet to discern the actual mechanisms behind observed global patterns. Here, by integrating the predictions from multiple theories into a single model and using data from 1,126 grassland plots spanning five continents, we detect the clear signals of numerous underlying mechanisms linking productivity and richness. We find that an integrative model has substantially higher explanatory power than traditional bivariate analyses. In addition, the specific results unveil several surprising findings that conflict with classical models. These include the isolation of a strong and consistent enhancement of productivity by richness, an effect in striking contrast with superficial data patterns. Also revealed is a consistent importance of competition across the full range of productivity values, in direct conflict with some (but not all) proposed models. The promotion of local richness by macroecological gradients in climatic favourability, generally seen as a competing hypothesis, is also found to be important in our analysis. The results demonstrate that an integrative modelling approach leads to a major advance in our ability to discern the underlying processes operating in ecological systems.

  8. Estimation of causal mediation effects for a dichotomous outcome in multiple-mediator models using the mediation formula.

    Science.gov (United States)

    Wang, Wei; Nelson, Suchitra; Albert, Jeffrey M

    2013-10-30

    Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a nonzero total mediation effect increases as the correlation coefficient between two mediators increases, whereas power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. Copyright © 2013 John Wiley & Sons, Ltd.
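
    The mediation formula the paper builds on can be illustrated by Monte Carlo for a single normal mediator and a probit outcome. All coefficients below are hypothetical, and the paper's multi-mediator latent-variable model is considerably richer than this sketch:

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical model: M = 0.5*A + N(0, 1); P(Y = 1 | A, M) = Phi(-0.2 + 0.3*A + 0.6*M).
    rng = np.random.default_rng(3)
    n = 200000

    def p_outcome(a_out, a_med):
        """P(Y(a_out, M(a_med)) = 1): set exposure to a_out in the outcome model while
        drawing the mediator as if the exposure were a_med."""
        m = 0.5 * a_med + rng.normal(size=n)
        return norm.cdf(-0.2 + 0.3 * a_out + 0.6 * m).mean()

    p11, p10, p00 = p_outcome(1, 1), p_outcome(1, 0), p_outcome(0, 0)
    total = p11 - p00
    nie = p11 - p10   # natural indirect effect, operating through the mediator
    nde = p10 - p00   # natural direct effect
    ```

    The decomposition total = NIE + NDE holds by construction; for this probit-normal model the terms also have closed forms, e.g. P(Y(1, M(1)) = 1) = Phi((a + b·mu)/sqrt(1 + b²)) with the obvious substitutions.
    
    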

  9. Estimation of Causal Mediation Effects for a Dichotomous Outcome in Multiple-Mediator Models using the Mediation Formula

    Science.gov (United States)

    Nelson, Suchitra; Albert, Jeffrey M.

    2013-01-01

    Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048

  10. Ethnic variations in immigrant poverty exit and female employment: the missing link.

    Science.gov (United States)

    Kaida, Lisa

    2015-04-01

    Despite widespread interest in poverty among recent immigrants and female immigrant employment, research on the link between the two is limited. This study evaluates the effect of recently arrived immigrant women's employment on the exit from family poverty and considers the implications for ethnic differences in poverty exit. It uses the bivariate probit model and the Fairlie decomposition technique to analyze data from the Longitudinal Survey of Immigrants to Canada (LSIC), a nationally representative survey of immigrants arriving in Canada, 2000-2001. Results show that the employment of recently arrived immigrant women makes a notable contribution to lifting families out of poverty. Moreover, the wide ethnic variations in the probability of exit from poverty between European and non-European groups are partially explained by the lower employment rates among non-European women. The results suggest that the equal earner/female breadwinner model applies to low-income recent immigrant families in general, but the male breadwinner model explains the low probability of poverty exit among select non-European groups whose female employment rates are notably low.

  11. Toxicological evaluation of the plant products using Brine Shrimp (Artemia salina L.) model

    Directory of Open Access Journals (Sweden)

    Mentor R. Hamidi

    2014-04-01

    Full Text Available Many natural products could serve as the starting point in the development of modern medicines because of their numerous biological and pharmacological activities. However, some of them are known to carry toxicological properties as well. In order to achieve a safe treatment with plant products, numerous research studies have recently been focused on both the pharmacology and toxicity of medicinal plants. Moreover, these studies have employed efforts towards alternative biological assays. The Brine Shrimp Lethality Assay is the most convenient system for monitoring the biological activities of various plant species. This method is very useful for the preliminary assessment of toxicity of plant extracts. Rapidness, simplicity and low requirements are several advantages of this assay. However, several conditions need to be fulfilled, especially standardized experimental conditions (temperature, pH of the medium, salinity, aeration and light). The toxicity of herbal extracts using this assay has been determined in a concentration range of 10, 100 and 1000 µg/ml of the examined herbal extract. Most toxicity studies that use the Brine Shrimp Lethality Assay determine the toxicity after 24 hours of exposure to the tested sample. The median lethal concentration (LC50) of the test samples is obtained from a plot of the percentage of dead shrimp against the logarithm of the sample concentration. LC50 values are estimated using a probit regression analysis and compared with either Meyer's or Clarkson's toxicity criteria. Furthermore, the positive correlation between Meyer's toxicity scale for Artemia salina and the Gosselin, Smith and Hodge toxicity scale for higher animal models confirmed that the Brine Shrimp Lethality Assay is an excellent predictive tool for the toxic potential of plant extracts in humans.
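
    The probit regression step for LC50 can be sketched with the classical graphical method: probit-transform the observed mortality fractions and regress them on log10 concentration. The mortality figures below are invented for illustration; a full analysis would use maximum likelihood on the raw counts:

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical dose-response data at the assay's three concentrations (µg/ml).
    conc = np.array([10.0, 100.0, 1000.0])
    dead_frac = np.array([0.10, 0.45, 0.90])

    # Classical probit analysis: regress probit(mortality) on log10(concentration).
    x = np.log10(conc)
    probits = norm.ppf(dead_frac)
    slope, intercept = np.polyfit(x, probits, 1)

    # LC50 is the concentration where the fitted probit equals 0 (50% mortality).
    lc50 = 10 ** (-intercept / slope)
    print(lc50)
    ```

    The fitted LC50 would then be read against Meyer's or Clarkson's criteria to classify the extract's toxicity.
    
    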

  12. CHAPTER 1

    African Journals Online (AJOL)

    Dr Olaleye

    The Probit regression showed a statistically significant (p<0.05) impact of contact and precautionary indices .... The parameters of the estimated model are denoted as β. ... where … is the constant term and … is the stochastic error term.

  13. extent of use of ict by fish farmers in isoko agricultural zone of delta ...

    African Journals Online (AJOL)

    Mr. TONY A

    Descriptive statistics and binary probit model were the tools of analyses. ... extension services and affordable credit will help in promoting the ... Depending on the extension approach, farmers should either pay totally or partially the extension ...

  14. Factors influencing cassava - pulp fermentation period for gari ...

    African Journals Online (AJOL)

    Factors influencing cassava - pulp fermentation period for gari processing among ... Result of probit model analysis at 5% significance level shows an R value ... Marital status (2.236**) and respondents' cultural influences (1.960**) were ...

  15. The Dynamics of Poverty and Vulnerability in Rural Ethiopia

    African Journals Online (AJOL)

    from the random effects probit model suggest that determinants of poverty status in rural Ethiopia ... 1 School of Agricultural Economics and Agribusiness, ... security policies, strategies and programs in the last two decades (FDRE,. 2004 ...

  16. Determinants of Fisher's Choice of Fishing Activity along the Volta ...

    African Journals Online (AJOL)

    Determinants of Fisher's Choice of Fishing Activity along the Volta Lake in Yeji ... The analysis was done using the Ordered Probit Model and descriptive statistics. ... economic growth, reduce poverty and ensure household food security in Yeji.

  17. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
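
    The tapering idea, restated in code: pairs farther apart than the taper range simply drop out of the composite likelihood. The sketch below uses a Gaussian process with an exponential correlation as a stand-in; the paper's actual bivariate max-stable densities differ, but the weighting logic is the same:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def tapered_pairwise_loglik(y, coords, taper_range, range_par=1.0):
        """Tapered pairwise log-likelihood: sum bivariate log-densities over all pairs
        within taper_range of each other; distant pairs get weight 0 and are skipped."""
        ll = 0.0
        n = len(y)
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(coords[i] - coords[j])
                if d > taper_range:
                    continue                      # pair excluded by the taper
                c = np.exp(-d / range_par)        # exponential correlation model
                ll += multivariate_normal.logpdf([y[i], y[j]],
                                                 cov=[[1.0, c], [c, 1.0]])
        return ll

    rng = np.random.default_rng(4)
    coords = rng.uniform(0, 5, size=(30, 2))
    y = rng.normal(size=30)
    ll_short = tapered_pairwise_loglik(y, coords, taper_range=1.0)
    ll_full = tapered_pairwise_loglik(y, coords, taper_range=100.0)
    ```

    Selecting the taper range by maximizing a Godambe-information criterion, as the paper does, trades a little statistical efficiency for a large reduction in the number of pair terms evaluated.
    
    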

  18. Risk Measurement and Risk Modelling Using Applications of Vine Copulas

    Directory of Open Access Journals (Sweden)

    David E. Allen

    2017-09-01

    Full Text Available This paper features an application of Regular Vine copulas, a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but is usually applied to pairs of securities. By contrast, Vine copulas provide greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which may be arranged and analysed in a tree structure to explore multiple dependencies. The paper features the use of Regular Vine copulas in an analysis of the co-dependencies of 10 major European Stock Markets, as represented by individual market indices and the composite STOXX 50 index. The sample runs from 2005 to the end of 2013 to permit an exploration of how correlations change in different economic circumstances using three different sample periods: pre-GFC (January 2005–July 2007), GFC (July 2007–September 2009), and post-GFC (September 2009–December 2013). The empirical results suggest that the dependencies change in a complex manner, and are subject to change in different economic circumstances. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. The practical application of Regular Vine metrics is demonstrated via an example of the calculation of the VaR of a portfolio made up of the indices.
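
    A Gaussian copula is the simplest bivariate building block that vine structures chain together. The sketch below samples dependent returns through such a copula and reads off a portfolio VaR; the margins are kept Gaussian purely so the answer is easy to check analytically, whereas practical risk applications would fit heavier-tailed margins and richer pair copulas:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    n = 100000
    rho = 0.6  # hypothetical dependence between two index returns

    # Gaussian copula: correlated normals pushed through their CDF land on (0,1)^2.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)

    # Map copula samples to (hypothetical) marginal return distributions.
    r1 = norm.ppf(u[:, 0], loc=0.0, scale=0.01)
    r2 = norm.ppf(u[:, 1], loc=0.0, scale=0.02)

    port = 0.5 * r1 + 0.5 * r2
    var_95 = -np.quantile(port, 0.05)   # 95% VaR of the equally weighted portfolio
    print(var_95)
    ```

    An R-vine generalizes this by arranging many such bivariate links in a tree structure, so each pair (or conditional pair) of indices can get its own copula family.
    
    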

  19. A generalized conditional heteroscedastic model for temperature downscaling

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale predictors from the Canadian Coupled General Circulation Model. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors were selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed for correlation coefficients between GCM predictors and observed temperature during 1980-2000, while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity and Brock-Dechert-Scheinkman (BDS) nonlinearity tests showed that the GCM predictors, temperature and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
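
    The DCC specification mentioned above has a simple core recursion: a quasi-correlation matrix is updated from lagged standardized residuals and then rescaled to unit diagonal. A sketch with hypothetical parameter values (a = 0.05, b = 0.90) and simulated residuals standing in for the Tmax/Tmin series:

    ```python
    import numpy as np

    def dcc_correlations(eps, a=0.05, b=0.90):
        """DCC recursion Q_t = (1-a-b)*Qbar + a*eps_{t-1}eps_{t-1}' + b*Q_{t-1};
        returns the time-varying correlation between series 0 and 1."""
        T = eps.shape[0]
        Qbar = np.cov(eps, rowvar=False)   # unconditional target matrix
        Q = Qbar.copy()
        rho = np.empty(T)
        for t in range(T):
            d = 1.0 / np.sqrt(np.diag(Q))
            rho[t] = (Q * np.outer(d, d))[0, 1]   # rescale Q to a correlation
            Q = (1 - a - b) * Qbar + a * np.outer(eps[t], eps[t]) + b * Q
        return rho

    # Hypothetical standardized residuals with true correlation 0.7.
    rng = np.random.default_rng(0)
    eps = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=2000)
    rho_t = dcc_correlations(eps)
    ```

    In a full MGARCH fit, a and b (and the univariate GARCH parameters producing eps) would be estimated by maximum likelihood rather than fixed.
    
    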

  20. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
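
    The tapering idea reduces the composite likelihood to nearby pairs only. A minimal sketch with a hard cut-off weight is shown below; the `pair_loglik` callable is a hypothetical stand-in for a bivariate max-stable log-density, and the weighting could equally be smooth rather than 0/1.

```python
import math

def taper_weight(d, taper_range):
    """Cut-off weight: include a pair only if its separation is within the taper range."""
    return 1.0 if d <= taper_range else 0.0

def tapered_pairwise_loglik(sites, pair_loglik, taper_range):
    """Sum of weighted bivariate log-likelihood contributions over all site pairs."""
    total = 0.0
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            d = math.dist(sites[i], sites[j])  # Euclidean distance between sites
            w = taper_weight(d, taper_range)
            if w > 0.0:
                total += w * pair_loglik(i, j)
    return total
```

    Choosing the taper range by maximizing a Godambe-information criterion, as the paper does, trades off the number of retained pairs against estimation efficiency.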

  1. Asymptotic properties of Pearson's rank-variate correlation coefficient under contaminated Gaussian model.

    Science.gov (United States)

    Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu

    2014-01-01

    This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive noise-free. As shown in our previous work, these scenarios, which are frequently encountered in radar and/or sonar applications, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish the asymptotic closed forms of the expectation and variance of PRVCC by means of the well-known delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC through an example of estimating the time delay in this particular impulsive noise environment.
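
    The advantage of rank-based coefficients under a contaminated Gaussian model can be reproduced with a small Monte Carlo experiment. This sketch uses Kendall's tau (one of the comparison coefficients in the paper) rather than PRVCC itself, and all parameter values (rho, contamination rate, scale) are illustrative.

```python
import math
import random

def sign(a):
    return (a > 0) - (a < 0)

def kendall_tau(x, y):
    """Kendall's tau-a (rank-based, robust to impulsive outliers)."""
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Contaminated Gaussian model: channel x is clean, channel y suffers
# impulsive noise (standard deviation inflated 10x) with probability eps.
rng = random.Random(7)
rho, eps, n = 0.7, 0.1, 400
x, y = [], []
for _ in range(n):
    z = rng.gauss(0.0, 1.0)
    scale = 10.0 if rng.random() < eps else 1.0
    x.append(z)
    y.append(rho * z + math.sqrt(1.0 - rho * rho) * scale * rng.gauss(0.0, 1.0))

r_pearson = pearson(x, y)                            # attenuated by impulses
r_rank = math.sin(math.pi * kendall_tau(x, y) / 2.0)  # Greiner's relation
```

    The rank-based estimate stays much closer to the true rho than the sample Pearson coefficient, which is the qualitative effect the paper quantifies for PRVCC.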

  2. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  3. Care-Seeking Patterns and Direct Economic Burden of Injuries in Bangladesh.

    Science.gov (United States)

    Alfonso, Natalia Y; Alonge, Olakunle; Hoque, Dewan Md Emdadul; Baset, Kamran Ul; Hyder, Adnan A; Bishai, David

    2017-04-29

    This study provides a comprehensive review of the care-seeking patterns and direct economic burden of injuries from the victims' perspective in rural Bangladesh, using a 2013 household survey covering 1.17 million people. Descriptive statistics and bivariate analyses were used to derive rates and test the association between variables. An analytic model was used to estimate total injury out-of-pocket (OOP) payments, and a multivariate probit regression model assessed the relationship between financial distress and injury type. Results show non-fatal injuries occur to 1 in 5 people in our sample per year; with an average household size of 4.5 in Bangladesh, this implies roughly one injury per household per year. Most non-fatally injured patients sought healthcare from drug sellers. Fewer than half of those with fatal injuries sought healthcare, and half of those who did were hospitalized. Average OOP payments varied significantly (range: $8-$830) by injury type and outcome (fatal vs. non-fatal). Total injury OOP expenditure was $355,795 and $5000 for non-fatal and fatal injuries, respectively, per 100,000 people. The majority of household heads with injuries reported financial distress. This study can inform injury prevention advocates on disparities in healthcare usage, OOP costs and financial distress. Reallocation of resources to the most at-risk populations can accelerate the reduction of preventable injuries and prevent injury-related catastrophic payments and impoverishment.
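
    The probit regression used in studies like this links a binary outcome (here, financial distress) to covariates through the normal CDF. A self-contained sketch on synthetic data follows, fitted by a crude grid search rather than a production optimizer; the coefficients and variable names are illustrative, not the study's estimates.

```python
import math
import random

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_loglik(b0, b1, xs, ys):
    """Log-likelihood of a univariate probit model P(y=1|x) = Phi(b0 + b1*x)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = min(max(Phi(b0 + b1 * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if y else 1.0 - p)
    return ll

# Synthetic data from a known probit: latent y* = -0.5 + 1.0*x + e, e ~ N(0,1)
rng = random.Random(3)
xs = [rng.gauss(0.0, 1.0) for _ in range(800)]
ys = [1 if rng.gauss(0.0, 1.0) < -0.5 + x else 0 for x in xs]

# Crude maximum-likelihood fit by grid search over (b0, b1)
grid = [(b0 / 10.0, b1 / 10.0) for b0 in range(-15, 6) for b1 in range(0, 21)]
b0_hat, b1_hat = max(grid, key=lambda p: probit_loglik(p[0], p[1], xs, ys))
```

    The recovered coefficients land near the true (-0.5, 1.0), illustrating why the probit likelihood is well suited to binary distress indicators.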

  4. Factors Influencing Smallholder Farmers' Climate Change Perceptions: A Study from Farmers in Ethiopia.

    Science.gov (United States)

    Habtemariam, Lemlem Teklegiorgis; Gandorfer, Markus; Kassa, Getachew Abate; Heissenhuber, Alois

    2016-08-01

    Factors influencing climate change perceptions have vital roles in designing strategies to enrich climate change understanding. Despite this, factors that influence smallholder farmers' climate change perceptions have not yet been adequately studied. As many of the smallholder farmers live in regions where climate change is predicted to have the most negative impact, their climate change perception is of particular interest. In this study, based on data collected from Ethiopian smallholder farmers, we assessed farmers' perceptions and anticipations of past and future climate change. Furthermore, the factors influencing farmers' climate change perceptions and the relation between farmers' perceptions and available public climate information were assessed. Our findings revealed that a majority of respondents perceive warming temperatures and decreasing rainfall trends that correspond with the local meteorological record. Farmers' perceptions about the past climate did not always reflect their anticipations about the future. A substantial number of farmers' anticipations of future climate were less consistent with climate model projections. The recursive bivariate probit models employed to explore factors affecting different categories of climate change perceptions illustrate statistical significance for explanatory variables including location, gender, age, education, soil fertility status, climate change information, and access to credit services. The findings contribute to the literature by providing evidence not just on farmers' past climate perceptions but also on future climate anticipations. The identified factors help policy makers to provide targeted extension and advisory services to enrich climate change understanding and support appropriate farm-level climate change adaptations.
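
    The recursive bivariate probit model used above requires the bivariate standard normal CDF, which is usually evaluated by numerical integration. A minimal sketch via reduction to a one-dimensional integral follows; the trapezoid quadrature scheme is an illustrative choice, not the estimator used by the authors.

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi2(a, b, r, n=4000):
    """P(Z1 <= a, Z2 <= b) for a standard bivariate normal with correlation r,
    via the identity P = integral over z <= a of phi(z) * Phi((b - r*z)/sqrt(1-r^2))."""
    lo = -8.0                      # effectively -infinity for the standard normal
    h = (a - lo) / n
    denom = math.sqrt(1.0 - r * r)
    s = 0.0
    for k in range(n + 1):
        z = lo + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoid weights
        s += w * phi(z) * Phi((b - r * z) / denom)
    return s * h

# In a bivariate probit, P(y1=1, y2=1 | x) = Phi2(x'b1, x'b2, rho)
```

    At a = b = 0 this reproduces the classical closed form 1/4 + arcsin(r)/(2*pi), a handy correctness check.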

  6. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality.

    Science.gov (United States)

    Erreygers, Guido; Kessels, Roselinde

    2017-06-23

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4.
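
    As a toy illustration of a level-based (rather than rank-based) bivariate index, one can normalize the covariance of the two level variables. This is a hypothetical stand-in for the family of linear indices the paper defines, not the authors' formula.

```python
def level_based_index(ses, health):
    """Illustrative level-based association index: the covariance of the
    socioeconomic and health variables, normalized by the product of their means.
    (A stand-in for the linear indices discussed above, not the paper's formula.)"""
    n = len(ses)
    ms, mh = sum(ses) / n, sum(health) / n
    cov = sum((s - ms) * (h - mh) for s, h in zip(ses, health)) / n
    return cov / (ms * mh)

# Aligned income and health levels give a positive index;
# reversing one of the two orderings flips the sign.
inc = [10.0, 20.0, 30.0, 40.0]
hlth = [1.0, 2.0, 3.0, 4.0]
```

    Because it uses levels on both sides, such an index reacts to how much richer or healthier units are, not merely to their ranks, which is the contrast with rank-dependent indices drawn in the abstract.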

  7. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows a fault to be corrected without interrupting production, improving operational safety and avoiding economic losses. The objective of this work is to select, from the full set of variables monitored in a nuclear power plant, a subset (not necessarily minimal) to serve as the input variables of an artificial neural network, in such a way that the largest possible number of variables can be monitored. This methodology was applied to the IEA-R1 research reactor at IPEN. The variables Power, Primary circuit flow rate, Control/safety rod position and Pressure difference across the reactor core (ΔP) were grouped because, by hypothesis, almost all of the monitored variables are related to these variables or to interactions of two or more of them: the power is related to the rise and fall of temperatures as well as to the amount of radiation due to uranium fission; the rods control the power and influence the radiation level and the temperatures; and the primary circuit flow rate transports energy by removing heat from the core. Labeling B = {Power, Primary circuit flow rate, Control/safety rod position, ΔP}, the multiple correlation coefficient between B and every other monitored variable was computed using the theory of canonical correlations, quantifying how well the set B can predict each variable. When B gave an unsatisfactory approximation for some variable, one or more variables highly correlated with it were added to the input set to improve the quality of prediction.
    An artificial neural network was then trained, and the results were satisfactory: the IEA-R1 data acquisition system monitors 64 variables, and with a set of 9 input variables resulting from the correlation analysis it was possible to monitor 51 variables using neural networks. (author)
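
    The central quantity in this record, the multiple correlation of a monitored variable on a predictor set, has a closed form in the two-predictor case built from pairwise Pearson correlations. A sketch with illustrative data (not reactor measurements):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def multiple_corr2(y, x1, x2):
    """Coefficient of multiple correlation of y on two predictors
    (closed form for the two-predictor case)."""
    r1, r2, r12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    r_sq = (r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * r12) / (1.0 - r12 * r12)
    return math.sqrt(max(r_sq, 0.0))

# A variable that is an exact linear combination of the predictors
# is perfectly predicted (R = 1)
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 4.0]
y = [a + b for a, b in zip(x1, x2)]
```

    Variables with a low multiple correlation on the base set are exactly the ones for which the work above adds extra, highly correlated inputs.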

  8. First-order dominance: stronger characterization and a bivariate checking algorithm

    DEFF Research Database (Denmark)

    Range, Troels Martin; Østerdal, Lars Peter Raahave

    2018-01-01

    How to determine whether one distribution first-order dominates another is a fundamental problem that has many applications in economics, finance, probability theory, and statistics. Nevertheless, little is known about how to efficiently check first-order dominance for finite multivariate distrib...

  9. Perceived Social Support and Academic Achievement: Cross-Lagged Panel and Bivariate Growth Curve Analyses

    Science.gov (United States)

    Mackinnon, Sean P.

    2012-01-01

    As students transition to post-secondary education, they experience considerable stress and declines in academic performance. Perceived social support is thought to improve academic achievement by reducing stress. Longitudinal designs with three or more waves are needed in this area because they permit stronger causal inferences and help…

  10. Information Decomposition in Bivariate Systems: Theory and Application to Cardiorespiratory Dynamics

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2015-01-01

    In the framework of information dynamics, the temporal evolution of coupled systems can be studied by decomposing the predictive information about an assigned target system into amounts quantifying the information stored inside the system and the information transferred to it. While information storage and transfer are computed through the known self-entropy (SE) and transfer entropy (TE), an alternative decomposition evidences the so-called cross entropy (CE) and conditional SE (cSE), quantifying the cross information and internal information of the target system, respectively. This study presents a thorough evaluation of SE, TE, CE and cSE as quantities related to the causal statistical structure of coupled dynamic processes. First, we investigate the theoretical properties of these measures, providing the conditions for their existence and assessing the meaning of the information-theoretic quantity that each of them reflects. Then, we present an approach for the exact computation of information dynamics based on the linear Gaussian approximation, and exploit this approach to characterize the behavior of SE, TE, CE and cSE in benchmark systems with known dynamics. Finally, we exploit these measures to study cardiorespiratory dynamics measured from healthy subjects during head-up tilt and paced breathing protocols. Our main result is that the combined evaluation of the measures of information dynamics allows us to infer the causal effects associated with the observed dynamics and to interpret the alteration of these effects under changing experimental conditions.
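
    Under the linear Gaussian approximation mentioned above, transfer entropy reduces to a log-ratio of linear-regression residual variances. The sketch below uses only order-1 past values and a simulated bivariate system with illustrative coupling coefficients; the paper's estimator is more general.

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def multiple_r2(y, x1, x2):
    """R^2 of y regressed on two predictors (closed form)."""
    r1, r2, r12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    return (r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * r12) / (1.0 - r12 * r12)

def gaussian_te(target, source):
    """TE(source -> target) with order-1 past under the linear Gaussian
    approximation: 0.5 * ln( var(t | t_past) / var(t | t_past, s_past) )."""
    t_now, t_past, s_past = target[1:], target[:-1], source[:-1]
    r = pearson(t_now, t_past)
    return 0.5 * math.log((1.0 - r * r) / (1.0 - multiple_r2(t_now, t_past, s_past)))

# x drives y with a one-step lag; y does not drive x
rng = random.Random(11)
x, y = [rng.gauss(0.0, 1.0)], [rng.gauss(0.0, 1.0)]
for _ in range(3999):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
    y.append(0.5 * y[-1] + 0.4 * x[-2] + rng.gauss(0.0, 1.0))

te_xy = gaussian_te(y, x)   # clearly positive: x helps predict y
te_yx = gaussian_te(x, y)   # near zero: y carries no extra information about x
```

    The asymmetry between te_xy and te_yx is what lets the combined measures identify the direction of causal coupling in the benchmark and cardiorespiratory analyses.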

  11. Bi-variate statistical attribute filtering : A tool for robust detection of faint objects

    NARCIS (Netherlands)

    Teeninga, Paul; Moschini, Ugo; Trager, Scott C.; Wilkinson, M.H.F.

    2013-01-01

    We present a new method for morphological connected attribute filtering for object detection in astronomical images. In this approach, a threshold is set on one attribute (power), based on its distribution due to noise, as a function of object area. The results show an order of magnitude higher

  12. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    International Nuclear Information System (INIS)

    Mullor, R.; Sanchez, A.; Martorell, S.; Martinez-Alzamora, N.

    2011-01-01

    Safety related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. An important number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization brings the decision maker insights concerning how uncertain the R+C results are and how this uncertainty does matter as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature depending on the particular characteristics of the variables in the output and their relations. In this context, the objective of this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, which will be implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of results of these applications and the conclusions obtained are presented.

  13. Anthropometry of Women of the U.S. Army--1977. Report Number 3. Bivariate Frequency Tables

    Science.gov (United States)

    1977-07-01


  14. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality

    OpenAIRE

    Erreygers, Guido; Kessels, Roselinde

    2017-01-01

    Abstract: We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to ineq...

  15. Primary testicular failure in Klinefelter's syndrome: the use of bivariate luteinizing hormone-testosterone reference charts

    DEFF Research Database (Denmark)

    Aksglaede, Lise; Andersson, Anna-Maria; Jørgensen, Niels

    2007-01-01

    The diagnosis of androgen deficiency is based on clinical features and confirmatory low serum testosterone levels. In early primary testicular failure, a rise in serum LH levels suggests inadequate androgen action for the individual's physiological requirements despite a serum testosterone level ...

  16. Bivariate and Multivariate Associations between Trait Listening Goals and Trait Communicator Preferences

    Science.gov (United States)

    Keaton, Shaughan A.; Keteyian, Robert V.; Bodie, Graham D.

    2014-01-01

    This article provides validity evidence for a measure of listening goals by showing theoretically consistent relationships with an existing communication preference questionnaire. Participants (N = 257) were administered trait measures for listening goals and communicator preferences. The four listening goals--relational, task-oriented,…

  17. Comparison between different uncertainty propagation methods in multivariate analysis: An application in the bivariate case

    Energy Technology Data Exchange (ETDEWEB)

    Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)

    2011-06-15

    Safety related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. An important number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization brings the decision maker insights concerning how uncertain the R+C results are and how this uncertainty does matter as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature depending on the particular characteristics of the variables in the output and their relations. In this context, the objective of this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, which will be implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of results of these applications and the conclusions obtained are presented.

  18. Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality

    Science.gov (United States)

    Kessels, Roselinde

    2017-01-01

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4. PMID:28644405

  19. Socioeconomic status and health : A new approach to the measurement of bivariate inequality

    NARCIS (Netherlands)

    Erreygers, G.; Kessels, R.

    2017-01-01

    We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health

  20. Mixtures of Gaussians for uncertainty description in bivariate latent heat flux proxies

    NARCIS (Netherlands)

    Wójcik, R.; Troch, P.A.A.; Stricker, J.N.M.; Torfs, P.J.J.F.

    2006-01-01

    This paper proposes a new probabilistic approach for describing uncertainty in the ensembles of latent heat flux proxies. The proxies are obtained from hourly Bowen ratio and satellite-derived measurements, respectively, at several locations in the southern Great Plains region in the United States.

  1. Bivariate analysis of basal serum anti-Mullerian hormone measurements and human blastocyst development after IVF

    LENUS (Irish Health Repository)

    Sills, E Scott

    2011-12-02

    Background: To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods: Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results: Mean (± SD) age for patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions: While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its roles in estimating in vitro embryo morphology and the potential to advance to the blastocyst stage have not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as the scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.
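
    The reported correlation confidence intervals are consistent with the standard Fisher z-transform interval, which can be sketched in a few lines using the record's values r = 0.49 and n = 79.

```python
import math

def fisher_ci(r, n, z=1.96):
    """Approximate 95% confidence interval for a Pearson correlation via the
    Fisher z-transform; the standard error on the z scale is 1/sqrt(n - 3)."""
    fz = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(fz - z * se), math.tanh(fz + z * se)

lo, hi = fisher_ci(0.49, 79)  # close to the reported 95% CI of 0.31 to 0.65
```

    Small discrepancies from the published interval are expected, since the authors may have used a slightly different method or rounding.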

  2. Introducing Catastrophe-QSAR. Application on Modeling Molecular Mechanisms of Pyridinone Derivative-Type HIV Non-Nucleoside Reverse Transcriptase Inhibitors

    Directory of Open Access Journals (Sweden)

    Marius Lazea

    2011-12-01

    The classical method of quantitative structure-activity relationships (QSAR) is enriched using non-linear models, as Thom's polynomials allow either uni- or bi-variate structural parameters. In this context, catastrophe QSAR algorithms are applied to the anti-HIV-1 activity of pyridinone derivatives. This requires calculation of the so-called relative statistical power and of its minimum principle in various QSAR models. A new index, known as the statistical relative power, is constructed as a Euclidean measure for the combined ratio of the Pearson correlation to the algebraic correlation, with the normalized Student's t and Fisher tests. First- and second-order inter-model paths are considered for mono-variate catastrophes, whereas for bi-variate catastrophes the direct minimum path is provided, allowing the QSAR models to be tested for predictive purposes. At this stage, the max-to-min hierarchies of the tested models allow the interaction mechanism to be identified using structural parameter succession and the typical catastrophes involved. Minimized differences between these catastrophe models in the common structurally influential domains that span both the trial and tested compounds identify the “optimal molecular structural domains” and the molecules with the best output with respect to the modeled activity, which in this case is human immunodeficiency virus type 1 (HIV-1) inhibition. The best molecules are characterized by hydrophobic interactions with the HIV-1 p66 subunit protein, and they concur with those identified in other 3D-QSAR analyses. Moreover, the importance of aromatic ring stacking interactions for increasing the binding affinity of the inhibitor-reverse transcriptase ligand-substrate complex is highlighted.

  3. Modelling of strongly coupled particle growth and aggregation

    International Nuclear Information System (INIS)

    Gruy, F; Touboul, E

    2013-01-01

    The mathematical modelling of the dynamics of particle suspensions is based on the population balance equation (PBE). The PBE is an integro-differential equation for the population density, which is a function of time t, space coordinates and internal parameters. Usually, the particle is characterized by a unique parameter, e.g. the matter volume v. The PBE consists of several terms: for instance, the growth rate and the aggregation rate. So, the growth rate is a function of v and t. In classical modelling, growth and aggregation are considered independently, i.e. they are not coupled. However, applications occur where growth and aggregation are coupled, i.e. the change of the particle volume with time depends on its initial value v0, which in turn is set by an aggregation event. As a consequence, the dynamics of the suspension does not obey the classical von Smoluchowski equation. This paper revisits the problem by proposing a new model based on a bivariate PBE (with two internal variables: v and v0) and by solving the PBE by means of a numerical method and Monte Carlo simulations. This is applied to a physicochemical system with a simple growth law and a constant aggregation kernel.
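
    The coupling described above (a growth rate depending on the volume v0 at the last aggregation event) is easy to mimic in a Monte Carlo simulation that tracks each particle as a pair (v, v0). The growth law and constant kernel below are illustrative stand-ins, not the paper's physicochemical system.

```python
import random

def mc_aggregation_step(particles, rng):
    """One aggregation event with a constant kernel: merge two particles chosen
    uniformly at random; the merged particle's v0 is reset to its birth volume,
    which is what couples subsequent growth to the aggregation history."""
    i, j = rng.sample(range(len(particles)), 2)
    v_new = particles[i][0] + particles[j][0]
    merged = (v_new, v_new)  # (v, v0)
    particles = [p for k, p in enumerate(particles) if k not in (i, j)]
    particles.append(merged)
    return particles

def grow(particles, dt, rate=0.1):
    """Illustrative growth law dv/dt = rate * v0, i.e. growth speed is frozen
    at each particle's volume at its last aggregation event."""
    return [(v + rate * v0 * dt, v0) for v, v0 in particles]

rng = random.Random(0)
parts = [(1.0, 1.0)] * 50          # 50 unit monomers
for _ in range(25):
    parts = grow(mc_aggregation_step(parts, rng), 0.1)
```

    Aggregation conserves total volume while growth adds mass, so the ensemble shrinks in number but gains volume, exactly the bookkeeping a bivariate PBE formalizes.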

  4. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  5. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  6. A geostatical model for USA uranium deposits

    International Nuclear Information System (INIS)

    Drew, M.W.

    1979-01-01

    Evidence exists which suggests that the frequency distributions of both grade and size of metal deposits may be well approximated by lognormal distribution functions. Using data on presently viable deposits and a simplified function which links production cost to deposit grade and size, a bivariate lognormal deposit grade/size distribution may be calibrated for a given geological environment. Exploration is introduced by assuming that the proportion discovered of the potential uranium reserve available at or below a given production cost can be represented by a fraction of the average deposit size and the limit exploration expenditure. As output, the model derives estimates of total reserves linked to maximum production costs and to exploration expenditure, where the latter may be expressed either as expenditure per lb of mineral discovered or as a given percentage of operating profit. Reserve/price functions have been derived for the USA based on USAEC data. Tentative conclusions which may be drawn from the results are: (1) Assuming that a similar proportion of profits continues to be allocated to exploration in the future, the USA should be able to meet its own national demand for uranium up to the end of the century (say 2 M tons U) at prices up to US$35/lb U₃O₈ (1.1.75 $ values). (2) If, instead of all exploration being funded from a fixed maximum proportion of mining company profits, consumers were to fund additional exploration separately, then it is possible that the total unit cost of uranium to the consumers would thereby be reduced. It should be stressed that these conclusions are tentative and are only as reliable as the input data and assumptions of the model. In particular, no account is taken of commercial or political forces which could artificially restrict supplies or raise prices. The model should be regarded as a first attempt and is offered as a basis for discussion leading to further development. (author)

  7. The effects of preoperative cardiology consultation prior to elective abdominal aortic aneurysm repair on patient morbidity.

    Science.gov (United States)

    Boniakowski, Anna E; Davis, Frank M; Phillips, Amanda R; Robinson, Adina B; Coleman, Dawn M; Henke, Peter K

    2017-08-01

    Objectives The relationship between preoperative medical consultations and postoperative complications has not been extensively studied. Thus, we investigated the impact of preoperative consultation on postoperative morbidity following elective abdominal aortic aneurysm repair. Methods A retrospective review was conducted on 469 patients (mean age 72 years, 20% female) who underwent elective abdominal aortic aneurysm repair from June 2007 to July 2014. Data elements included detailed medical history, preoperative cardiology consultation, and postoperative complications. Primary outcomes included 30-day morbidity, consult-specific morbidity, and mortality. A bivariate probit regression model accounting for the endogeneity of binary preoperative medical consult and patient variability was estimated with a maximum likelihood function. Results Eighty patients had preoperative medical consults (85% cardiology); thus, our analysis focuses on the effect of cardiac-related preoperative consults. Hyperlipidemia, increased aneurysm size, and increased revised cardiac risk index increased likelihood of referral to cardiology preoperatively. Surgery type (endovascular versus open repair) was not significant in development of postoperative complications when controlling for revised cardiac risk index (p = 0.295). After controlling for patient comorbidities, there was no difference in postoperative cardiac-related complications between patients who did and did not undergo cardiology consultation preoperatively (p = 0.386). Conclusions When controlling for patient disease severity using revised cardiac risk index risk stratification, preoperative cardiology consultation is not associated with postoperative cardiac morbidity.

  8. Awareness and Adoption of Soil and Water Conservation Technologies in a Developing Country: A Case of Nabajuzi Watershed in Central Uganda

    Science.gov (United States)

    Kagoya, Sarah; Paudel, Krishna P.; Daniel, Nadhomi L.

    2018-02-01

    Soil and water conservation technologies have been widely available in most parts of Uganda. However, not only has the adoption rate been low, but many farmers also seem not to be aware of these technologies. This study aims at identifying the factors that influence awareness and adoption of soil and water conservation technologies in Nabajuzi watershed in central Uganda. A bivariate probit model was used to examine farmers' awareness and adoption of soil and water conservation technologies in the watershed. We use data collected from interviews of 400 households located in the watershed to understand the factors affecting the awareness and adoption of these technologies in the study area. Findings indicate that the likelihood of being aware of and adopting the technologies is explained by the age of the household head, being a tenant, and the number of years of access to farmland. To increase awareness and adoption of technologies in Uganda, policymakers may expedite the process of land titling, as farmers may feel secure about landholding and thus adopt these technologies to increase profitability and productivity in the long run. Incentive payments to farmers residing in the vulnerable region to adopt these technologies may help to alleviate soil deterioration problems in the affected area.
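The bivariate probit at the core of this and several of the following studies models two binary outcomes (here, awareness and adoption) whose latent errors are correlated. A minimal sketch of its likelihood on simulated data, assuming only NumPy and SciPy; the coefficients and data are synthetic, not the study's:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_nll(b1, b2, rho, y1, y2, X):
    """Negative log-likelihood of a bivariate probit.

    Latent model: y1* = X @ b1 + e1, y2* = X @ b2 + e2, with (e1, e2)
    standard bivariate normal with correlation rho. Each observation
    contributes the bivariate normal probability of its (y1, y2) cell.
    """
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1          # map {0,1} -> {-1,+1}
    w1, w2 = q1 * (X @ b1), q2 * (X @ b2)
    ll = 0.0
    for s in (-1, 1):                        # sign of q1*q2 sets the corr.
        m = (q1 * q2) == s
        if m.any():
            p = multivariate_normal.cdf(np.column_stack([w1[m], w2[m]]),
                                        mean=[0, 0],
                                        cov=[[1, s * rho], [s * rho, 1]])
            ll += np.log(np.clip(p, 1e-300, 1)).sum()
    return -ll

# simulate awareness/adoption-style data with error correlation 0.5
rng = np.random.default_rng(1)
n = 600
X = np.column_stack([np.ones(n), rng.normal(size=n)])
b1, b2 = np.array([0.2, 1.0]), np.array([-0.3, 0.8])
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
y1 = (X @ b1 + e[:, 0] > 0).astype(int)
y2 = (X @ b2 + e[:, 1] > 0).astype(int)

# profile the likelihood over rho with slopes held at their true values
rhos = np.linspace(-0.9, 0.9, 19)
nlls = [bivariate_probit_nll(b1, b2, r, y1, y2, X) for r in rhos]
print(rhos[int(np.argmin(nlls))])   # minimiser lies near the true 0.5
```

In practice all parameters are estimated jointly by maximum likelihood; the one-dimensional profile over rho just keeps the sketch short.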

  9. Part-time sick leave as a treatment method for individuals with musculoskeletal disorders.

    Science.gov (United States)

    Andrén, Daniela; Svensson, Mikael

    2012-09-01

    There is increasing evidence that staying active is an important part of the recovery process for individuals on sick leave due to musculoskeletal disorders (MSDs). It has been suggested that using part-time sick leave rather than full-time sick leave will enhance the possibility of full recovery to the workforce, and several countries actively favor this policy. The aim of this paper is to examine whether it is beneficial for individuals on sick leave due to MSDs to be on part-time sick leave compared to full-time sick leave. A sample of 1,170 employees from the RFV-LS (register) database of the Social Insurance Agency of Sweden is used. The effect of being on part-time sick leave compared to full-time sick leave is estimated for the probability of returning to work with full recovery of lost work capacity. A two-stage recursive bivariate probit model is used to deal with the endogeneity problem. The results indicate that employees assigned to part-time sick leave recover to full work capacity with a higher probability than those assigned to full-time sick leave. The average treatment effect of part-time sick leave is 25 percentage points. Considering that part-time sick leave may also be less expensive than assigning individuals to full-time sick leave, this would imply efficiency improvements from assigning individuals, when possible, to part-time sick leave.
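Once such a model is estimated, the reported average treatment effect is the sample-average difference in predicted probit probabilities with the treatment indicator switched on versus off. A sketch with purely hypothetical coefficients (not the paper's estimates):

```python
import numpy as np
from scipy.stats import norm

def probit_ate(X, beta, delta):
    """Average treatment effect implied by a probit outcome equation.

    Outcome model: P(full recovery) = Phi(X @ beta + delta * part_time).
    The ATE averages, over the sample, the difference in predicted
    probabilities with the treatment set to 1 versus 0 for everyone.
    """
    p_treated = norm.cdf(X @ beta + delta)   # all on part-time sick leave
    p_control = norm.cdf(X @ beta)           # all on full-time sick leave
    return float(np.mean(p_treated - p_control))

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
beta = np.array([-0.2, 0.3, -0.1])    # hypothetical covariate coefficients
ate = probit_ate(X, beta, delta=0.7)  # hypothetical treatment coefficient
print(round(ate, 3))                  # a sizeable positive effect
```

Because the probit link is nonlinear, the ATE depends on the covariate distribution, which is why it is computed as a sample average rather than read off a single coefficient.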

  10. Economic analysis of the intangible impacts of informal care for people with Alzheimer's disease and other mental disorders.

    Science.gov (United States)

    Gervès, Chloé; Bellanger, Martine Marie; Ankri, Joël

    2013-01-01

    Valuation of the intangible impacts of informal care remains a great challenge for economic evaluation, especially in the framework of care recipients with cognitive impairment. Our main objective was to explore the influence of intangible impacts of caring on both informal caregivers' ability to estimate their willingness to pay (WTP) to be replaced and their WTP value. We mapped characteristics that influence ability or inability to estimate WTP by using a multiple correspondence analysis. We ran a bivariate probit model with sample selection to further analyze the caregivers' WTP value conditional on their ability to estimate their WTP. A distinction exists between the opportunity costs of the caring dimension and the intangible costs and benefits of caring. Informal caregivers' ability to estimate WTP is negatively influenced by intangible benefits from caring. Caregivers' WTP value is negatively associated with positive intangible impacts of informal care. Caregivers' WTP and their ability to estimate WTP are thus both influenced by the intangible burden and benefits of caring. These results call into question the relevance of a hypothetical generalized financial compensation system as the optimal way to motivate caregivers to continue providing care. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
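The sample-selection variant used here differs from a plain bivariate probit in that the outcome is observed only for selected caregivers (those able to state a WTP); non-selected cases contribute only the selection probability. A sketch of that log-likelihood on synthetic data with hypothetical linear indices:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def probit_selection_ll(y, s, xb, zg, rho):
    """Log-likelihood of a bivariate probit with sample selection.

    s: selection indicator (1 = caregiver able to state a WTP),
    y: binary outcome, meaningful only where s == 1,
    xb, zg: linear indices of the outcome and selection equations.
    Non-selected observations contribute Phi(-zg); selected ones the
    bivariate normal probability of their (outcome, selection) cell.
    """
    ll = np.where(s == 0, norm.logcdf(-zg), 0.0)
    q = 2 * y - 1                      # outcome sign for selected cases
    for i in np.flatnonzero(s == 1):
        p = multivariate_normal.cdf([q[i] * xb[i], zg[i]], mean=[0, 0],
                                    cov=[[1, q[i] * rho], [q[i] * rho, 1]])
        ll[i] = np.log(max(p, 1e-300))
    return float(ll.sum())

# synthetic data; the linear indices stand in for fitted regressions
rng = np.random.default_rng(3)
n = 50
xb, zg = rng.normal(size=n), rng.normal(size=n)
s = (zg + rng.normal(size=n) > 0).astype(int)
y = ((xb + rng.normal(size=n) > 0) & (s == 1)).astype(int)
ll_val = probit_selection_ll(y, s, xb, zg, rho=0.4)
print(ll_val)   # a finite negative log-likelihood
```

Maximising this function over the regression coefficients and rho corrects the outcome equation for non-random selection into the "able to estimate WTP" group.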

  11. Global Obesity Study on Drivers for Weight Reduction Strategies

    Directory of Open Access Journals (Sweden)

    Carola Grebitus

    2015-01-01

    Objective: To assess factors determining the reaction of individuals to the threats of overweight and obesity and to examine the interdependencies between weight-reducing strategies. Methods: Cross-country survey covering 19 countries and 13,155 interviews. Data were analysed using a bivariate probit model that allows simultaneously analysing two weight-reducing strategies. Results: Results show that weight-reducing strategies chosen are not independent from each other. Findings also reveal that different strategies are chosen by different population segments. Women are more likely to change their dietary patterns and less likely to become physically active after surpassing a weight threshold. In addition, the probability of a dietary change in case of overweight differs considerably between countries. The study also reveals that attitudes are an important factor for the strategy choice. Conclusions: It is vital for public health policies to understand determinants of citizens' engagement in weight reduction strategies once a certain threshold is reached. Thus, results can support the design of public health campaigns and programmes that aim to change community or national health behaviour trends taking into account, e.g., national differences.

  12. Understanding Employee Awareness of Health Care Quality Information: How Can Employers Benefit?

    Science.gov (United States)

    Abraham, Jean; Feldman, Roger; Carlin, Caroline

    2004-01-01

    Objective To analyze the factors associated with employee awareness of employer-disseminated quality information on providers. Data Sources Primary data were collected in 2002 on a stratified, random sample of 1,365 employees in 16 firms that are members of the Buyers Health Care Action Group (BHCAG) located in the Minneapolis–St. Paul region. An employer survey was also conducted to assess how employers communicated the quality information to employees. Study Design In 2001, BHCAG sponsored two programs for reporting provider quality. We specify employee awareness of the quality information to depend on factors that influence the benefits and costs of search. Factors influencing the benefits include age, sex, provider satisfaction, health status, job tenure, and Twin Cities tenure. Factors influencing search costs include employee income, education, and employer communication strategies. We estimate the model using bivariate probit analysis. Data Collection Employee data were collected by phone survey. Principal Findings Overall, the level of quality information awareness is low. However, employer communication strategies such as distributing booklets to all employees or making them available on request have a large effect on the probability of quality information awareness. Employee education and utilization of providers' services are also positively related to awareness. Conclusions This study is one of the first to investigate employee awareness of provider quality information. Given the direct implications for medical outcomes, one might anticipate higher rates of awareness regarding provider quality, relative to plan quality. However, we do not find empirical evidence to support this assertion. PMID:15533188

  13. Role of intrinsic search cues in the formation of consumer preferences and choice for pork chops.

    Science.gov (United States)

    Verbeke, Wim; De Smet, Stefaan; Vackier, Isabelle; Van Oeckel, Monique J; Warnants, Nathalie; Van Kenhove, Patrick

    2005-02-01

    This study investigates the role of drip, colour, marbling and fat cover as intrinsic search cues in the formation of pork chop preferences, and the individual determinants of those preferences. Data are collected from a sample of 443 pork consumers in Belgium using repeated selection of chops from randomised photobooks and questionnaires including socio-demographic, attitudinal and behavioural variables. Data analysis includes mixture regression analysis, bivariate descriptive statistics and the estimation of multivariate probit models. Consumers sampled in this study prefer pork chops without fat cover. Preference for fat cover is stronger among male, 35+ aged consumers with lower levels of awareness of the relation between food and health and who like pork for reasons other than taste and nutritional value (all p<0.05). Preference for colour is equally consistent within an individual, though fifty-fifty light-dark, with dark chops being more preferred by 35+ aged consumers (p<0.05). Preferences for marbling and drip are not consistent and not determined by joint socio-demographic, attitudinal and behavioural factors. Preferences for cue levels are not correlated, except for a weak relation between preferences for dark chops and for chops without drip (r = 0.116). Preferences are apparently formed by deductions with the use of single cues as key information, mainly based on fat cover or colour, and random choice on marbling and drip.

  14. Changes in the demand for private medical insurance following a shift in tax incentives.

    Science.gov (United States)

    Rodríguez, Marisol; Stoyanova, Alexandrina

    2008-02-01

    The 1998 Spanish reform of the Personal Income Tax eliminated the 15% deduction for private medical expenditures including payments on private health insurance (PHI) policies. To avoid an undesired increase in the demand for publicly funded health care, tax incentives to buy PHI were not completely removed but basically shifted from individual to group employer-paid policies. In a unique fiscal experiment, at the same time that the tax relief for individually purchased policies was abolished, the government provided for tax allowances on policies taken out through employment. Using a bivariate probit model on data from National Health Surveys, we estimate the impact of said reform on the demand for PHI and the changes occurred within it. Our findings indicate that the total probability of buying PHI was not significantly affected by the reform. Indeed, the fall in the demand for individual policies (by 10% between 1997 and 2001) was offset by an increase in the demand for group employer-paid ones. We also briefly discuss the welfare effects on the state budget, the industry and society at large.

  15. Women's autonomy and reproductive health care utilisation: empirical evidence from Tajikistan.

    Science.gov (United States)

    Kamiya, Yusuke

    2011-10-01

    Women's autonomy is widely considered to be a key to improving maternal health in developing countries, although there is no consistent empirical evidence to support this claim. This paper examines whether and how women's autonomy within the household affects the use of reproductive health care, using household survey data from Tajikistan. Estimation is performed with a bivariate probit model whereby a woman's use of health services and her level of autonomy are recursively and simultaneously determined. The data come from a sample of women aged 15-49 in the Tajikistan Living Standard Measurement Survey 2007. Women's autonomy, as measured by women's decision-making on household financial matters, increases the likelihood that a woman receives antenatal and delivery care, whilst it has a negative effect on the probability of attending four or more antenatal consultations. The hypothesis that women's autonomy and reproductive health care utilisation are independently determined is rejected for most of the estimation specifications, indicating the importance of taking into account the endogenous nature of women's autonomy when assessing its effect on health care use. The empirical results reconfirm the assertion that women's status within the household is closely linked to reproductive health care utilisation in developing countries. Policymakers therefore need not only to implement direct health interventions but also to focus on broader social policies which address women's empowerment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. Investigating the poverty-obesity paradox in Europe.

    Science.gov (United States)

    Salmasi, Luca; Celidoni, Martina

    2017-08-01

    This paper investigates the effect of income- and wealth-based poverty on the probability of being obese for the elderly in Europe by analysing data drawn from the Survey of Health, Ageing and Retirement (SHARE) and the English Longitudinal Study of Ageing (ELSA). We use early-life economic conditions and regional circumstances as instruments for poverty later in life to account for endogeneity issues. After controlling for a large set of covariates at the individual, household, regional and country level, the results show that poverty significantly increases the probability of being obese and the Body Mass Index (BMI), for men and women. Accounting for endogeneity with a bivariate probit model, poor individuals are 10 to 20 percentage points more likely to be obese than non-poor individuals. The effect on BMI ranges from 0.295 points (2.39 kg) to 0.395 points (2.75 kg). These results are robust to a series of checks and suggest that anti-poverty interventions might have positive side effects in terms of reducing food-related health inequalities. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Association between clean indoor air laws and voluntary smokefree rules in homes and cars.

    Science.gov (United States)

    Cheng, Kai-Wen; Okechukwu, Cassandra A; McMillen, Robert; Glantz, Stanton A

    2015-03-01

    This study examines the influence that smokefree workplaces, restaurants and bars have on the adoption of smokefree rules in homes and cars, and whether there is an association with adopting smokefree rules in homes and cars. Bivariate probit models were used to jointly estimate the likelihood of living in a smokefree home and having a smokefree car as a function of law coverage and other variables. Household data were obtained from the nationally representative Social Climate Survey of Tobacco Control 2001, 2002 and 2004-2009; clean indoor air law data were from the American Nonsmokers' Rights Foundation Tobacco Control Laws Database. 'Full coverage' and 'partial coverage' smokefree legislation is associated with an increased likelihood of having voluntary home and car smokefree rules compared with 'no coverage'. The association between 'full coverage' and smokefree rules in homes and cars is 5% and 4%, respectively, and the association between 'partial coverage' and smokefree rules in homes and cars is 3% and 4%, respectively. There is a positive association between the adoption of smokefree rules in homes and cars. Clean indoor air laws provide the additional benefit of encouraging voluntary adoption of smokefree rules in homes and cars. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  18. An empirical analysis on the incidence of part-time work among women with disabilities.

    Science.gov (United States)

    Pagan-Rodriguez, Ricardo

    2009-01-01

    To analyse the determinants of part-time employment and examine the impact of having a disability on the probability of working part-time. Our dataset allows us to take into account the heterogeneity within the disabled population and to identify the incidence of part-time work by, for example, type of disability, and compare the results obtained. Using data from the ad hoc module on disability of the Spanish Labour Force Survey 2002 (which contains detailed information on key characteristics of the disabled population), we used a bivariate probit model to estimate the probability of disabled women working part-time and of being employed. The results show that disabled women have a higher probability of working part-time as compared to non-disabled women, especially those with progressive illnesses, digestive and stomach disorders and chest or breathing problems. In addition, there is a positive relationship between longer disability durations and levels of part-time employment. Part-time employment can be used as a means to increase the levels of employment of disabled women, especially for those who face important barriers and difficulties as they try to enter the labour market (e.g., those with epilepsy, mental or emotional conditions and other progressive illnesses, or long-term disabilities).

  19. Identification of Caries Risk Determinants in Toddlers: Results of the GUSTO Birth Cohort Study.

    Science.gov (United States)

    Un Lam, C; Khin, L W; Kalhan, A C; Yee, R; Lee, Y S; Chong, M F-F; Kwek, K; Saw, S M; Godfrey, K; Chong, Y S; Hsu, C-Y

    2017-01-01

    The aim of this study was to identify risk determinants leading to early childhood caries (ECC) and visible plaque (VP) in toddlers. Data for mother-child pairs participating in the Growing Up in Singapore towards Healthy Outcomes (GUSTO) birth cohort were collected from pregnancy to toddlerhood. Oral examinations were performed in 543 children during their clinic visit at 24 months to detect ECC and VP. Following logistic regression, ECC and VP were jointly regressed as primary and secondary outcomes, respectively, using the bivariate probit model. The ECC prevalence was 17.8% at 2 years of age, with 7.3% of children having a VP score >1. ECC was associated with nighttime breastfeeding (3 weeks) and biological factors, including Indian ethnicity (lower ECC rate), higher maternal childbearing age and existing health conditions, maternal plasma folate, brushing frequency, lower parental perceived importance of baby teeth, and weaning onto solids. Interestingly, although a higher frequency of dental visits and toothbrushing were associated with lower plaque accumulation, they were associated with increased ECC risk, suggesting that these established caries-risk factors may be a consequence rather than the cause of ECC. In conclusion, Indian toddlers may be less susceptible to ECC compared to Chinese and Malay toddlers. The study also highlights a problem-driven utilization pattern of dental services (care sought for treatment) in Singapore, in contrast to the prevention-driven approach (care sought to prevent disease) in Western countries. © 2017 S. Karger AG, Basel.

  20. Does the causal effect of health on employment differ for immigrants and natives?

    DEFF Research Database (Denmark)

    Jakobsen, Vibeke; Larsen, Mona

    This paper examines whether a causal effect of health on employment exists and, if so, whether it differs for immigrants and natives and whether such a difference can be attributed to different labour market status. Measuring poor health through information about hospital diagnoses for a number of specific diseases, we estimate bivariate probit models using the general practitioner's referral behaviour as an instrument for receiving diagnoses. Using Danish administrative data, we find that poor health affects the employment probability negatively for both immigrants and native Danes. For men, the impact of health is largest for immigrants, while for women the effect is very similar. Differences in the distribution of lagged labour market status appear important only in explaining the results for women.

  1. Do objective neighbourhood characteristics relate to residents' preferences for certain sports locations? A cross-sectional study using a discrete choice modelling approach.

    Science.gov (United States)

    Deelen, Ineke; Jansen, Marijke; Dogterom, Nico J; Kamphuis, Carlijn B M; Ettema, Dick

    2017-12-11

    The number of sports facilities, sports clubs, or city parks in a residential neighbourhood may affect the likelihood that people participate in sports and their preferences for a certain sports location. This study aimed to assess whether objective physical and socio-spatial neighbourhood characteristics relate to sports participation and preferences for sports locations. Data from Dutch adults (N = 1201) on sports participation, their most-used sports location, and socio-demographic characteristics were collected using an online survey. Objective land-use data and the number of sports facilities were gathered for each participant using a 2000-m buffer around their home locations, whereas socio-spatial neighbourhood characteristics (i.e., density, socio-economic status, and safety) were determined at the neighbourhood level. A discrete choice-modelling framework (multinomial probit model) was used to model the associations between neighbourhood characteristics and sports participation and location. Higher proportions of green space, blue space, and the number of sports facilities were positively associated with sports participation in public space, at sports clubs, and at other sports facilities. Higher degrees of urbanization were negatively associated with sports participation at public spaces, sports clubs, and other sports facilities. Those with more green space, blue space or sports facilities in their residential neighbourhood were more likely to participate in sports, but these factors did not affect their preference for a certain sports location. Longitudinal study designs are necessary to assess causality: do active people choose to live in sports-facilitating neighbourhoods, or do neighbourhood characteristics affect sports participation?
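In a multinomial probit choice model such as the one used here, each sports location carries a normally distributed random utility and the chosen location is the one with the highest realised utility; the choice probabilities have no closed form and are typically simulated. A minimal Monte Carlo sketch with hypothetical utilities and error covariance:

```python
import numpy as np

def mnp_choice_probs(v, cov, ndraw=20000, seed=0):
    """Simulated choice probabilities for a multinomial probit model.

    v: systematic utilities of each alternative; cov: covariance of the
    normal utility errors. Each simulated individual picks the
    alternative with the highest total utility.
    """
    rng = np.random.default_rng(seed)
    e = rng.multivariate_normal(np.zeros(len(v)), cov, size=ndraw)
    chosen = np.argmax(v + e, axis=1)
    return np.bincount(chosen, minlength=len(v)) / ndraw

# three alternatives: public space, sports club, other sports facility
v = np.array([0.4, 0.1, -0.2])            # hypothetical utilities
cov = np.array([[1.0, 0.3, 0.0],          # hypothetical error covariance
                [0.3, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
p = mnp_choice_probs(v, cov)
print(p)   # probabilities sum to 1; the first alternative is most likely
```

Unlike a multinomial logit, the free error covariance lets alternatives (say, the two facility-based options) be closer substitutes for each other than for public space.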

  2. In vitro burn model illustrating heat conduction patterns using compressed thermal papers.

    Science.gov (United States)

    Lee, Jun Yong; Jung, Sung-No; Kwon, Ho

    2015-01-01

    To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. This was composed of tightly compressed thermal papers with compression frames. Heat flow through the model left a trace by changing the color of thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. Through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and our burn model using a heated brass comb, and comparing the burn wound and heat conduction trace, we validated our model. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant high correlations between the porcine burn wound and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.
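The depth estimate mentioned above follows from the standard scaling of Fourier's heat equation: the characteristic penetration depth of transient conduction grows as the square root of diffusivity times contact time. A sketch using a typical literature value for the thermal diffusivity of skin, not a value measured in the study:

```python
import math

def penetration_depth_mm(alpha_m2_s, t_s):
    """Characteristic thermal penetration depth for 1-D transient conduction.

    From the scaling of Fourier's heat equation, depth ~ sqrt(alpha * t),
    where alpha is the thermal diffusivity of the material.
    """
    return math.sqrt(alpha_m2_s * t_s) * 1000.0   # metres -> millimetres

ALPHA_SKIN = 1.1e-7   # m^2/s, an approximate literature value for skin
for t in (5, 10, 20, 30):   # contact times used in the experiments (s)
    print(t, round(penetration_depth_mm(ALPHA_SKIN, t), 2))
```

The square-root dependence is why doubling the contact time far less than doubles the conduction depth, consistent with comparing burn traces across the 5- to 30-second exposures.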

  3. PDEAR model prediction of Protea species in years 2070-2100

    Science.gov (United States)

    Guo, Danni; Guo, Renkuan; Midgley, Guy F.; Rebelo, A. G.

    2009-10-01

    Global warming and climate change are altering the environment and therefore changing the distribution and behaviour of plant species. Plant species often move and change their distributions as they find their original habitats no longer suitable to their needs. It is therefore important to establish a statistical model that captures the movement and patterns of endangered species in order to manage environmental protection effectively under the inevitable biodiversity changes that are taking place. In this paper, we focus on the population category of rare Proteas with an estimated population size of 1 to 10 per sample site, which is very small. We used the partial differential equation associated regression (PDEAR) model, which merges partial differential equation theory, (statistical) linear model theory and random fuzzy variable theory into an efficient small-sample-oriented model, for the spatial pattern-change analysis. The regression component in a PDEAR model is in nature a special random fuzzy multivariate regression model. We developed a bivariate model for investigating the average impacts of rainfall and temperature on Protea species with population sizes of 1 to 10 in the Cape Floristic Region, South Africa, from 1992 to 2002. Under the same average biodiversity structure assumptions, we explore the future spatial change patterns of Protea species in this population size class using predicted (average) future rainfall and temperature. The resulting spatial distributions and patterns will clearly help us explore the impacts of global climate change on endangered species.

  4. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show a higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
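The bivariate correlation coefficient used as the skill threshold here is the standard MJO verification metric: a joint anomaly correlation of the two RMM index components, accumulated over forecast-verification pairs at a given lead time. A sketch on synthetic RMM-like series:

```python
import numpy as np

def bivariate_correlation(f1, f2, o1, o2):
    """Bivariate anomaly correlation for two-component (RMM-like) indices.

    f1, f2: forecast components over verification times; o1, o2: the
    observed components. MJO prediction skill is conventionally said to
    be lost once this statistic drops below 0.5.
    """
    num = np.sum(f1 * o1 + f2 * o2)
    den = np.sqrt(np.sum(f1**2 + f2**2)) * np.sqrt(np.sum(o1**2 + o2**2))
    return num / den

# a perfect forecast scores 1; a phase-shifted one scores cos(shift)
t = np.linspace(0.0, 4.0 * np.pi, 200)
o1, o2 = np.cos(t), np.sin(t)
c_perfect = bivariate_correlation(o1, o2, o1, o2)
c_shifted = bivariate_correlation(np.cos(t + 1.0), np.sin(t + 1.0), o1, o2)
print(c_perfect, c_shifted)   # 1.0 and cos(1) ~ 0.54
```

The phase-shift example illustrates why phase errors dominate at long leads: a forecast with perfect amplitude but a one-radian phase error already sits near the 0.5 skill threshold.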

  5. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention

    OpenAIRE

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M.; Stuart, Elizabeth A.

    2016-01-01

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation...

  6. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  7. Evaluation of field development plans using 3-D reservoir modelling

    Energy Technology Data Exchange (ETDEWEB)

    Seifert, D.; Lewis, J.J.M. [Heriot-Watt Univ., Edinburgh (United Kingdom); Newbery, J.D.H. [Conoco, UK Ltd., Aberdeen (United Kingdom)] [and others]

    1997-08-01

    Three-dimensional reservoir modelling has become an accepted tool in reservoir description and is used for various purposes, such as reservoir performance prediction or integration and visualisation of data. In this case study, a small Northern North Sea turbiditic reservoir was to be developed with a line drive strategy utilising a series of horizontal producer and injector pairs, oriented north-south. This development plan was to be evaluated, and the expected outcome of the wells was to be assessed and risked. Detailed analyses of core, well log and analogue data have led to the development of two geological "end member" scenarios. Both scenarios were stochastically modelled using the Sequential Indicator Simulation method. The resulting equiprobable realisations were subjected to detailed statistical well placement optimisation techniques. Based upon bivariate statistical evaluation of more than 1000 numerical well trajectories for each of the two scenarios, it was found that the wells' inclinations and lengths had a great impact on well success, whereas the azimuth was found to have only a minor impact. After integration of the above results, the actual well paths were redesigned to meet external drilling constraints, resulting in substantial reductions in drilling time and costs.

  8. A comparison of dose-response characteristics of four NTCP models using outcomes of radiation-induced optic neuropathy and retinopathy

    International Nuclear Information System (INIS)

    Moiseenko, Vitali; Song, William Y; Mell, Loren K; Bhandare, Niranjan

    2011-01-01

    Biological models are used to relate the outcome of radiation therapy to dose distribution. As the use of biological models in treatment planning expands, uncertainties associated with the use of specific models for predicting outcomes should be understood and quantified. In particular, the question of to what extent model predictions are data-driven or dependent on the choice of the model has to be explored. Four dose-response models (logistic, log-logistic, Poisson-based and probit) were tested for their ability and consistency in describing dose-response data for radiation-induced optic neuropathy (RION) and retinopathy (RIRP). Dose to the optic nerves was specified as the minimum dose, Dmin, received by any segment of the organ in which damage was diagnosed by ophthalmologic evaluation. For retinopathy, the dose to the retina was specified as the highest isodose covering at least 1/3 of the retinal surface (D33%) that geometrically covered the observed retinal damage. Data on both complications were modeled separately for patients treated once daily and twice daily. Model parameters D50 and γ and the corresponding confidence intervals were obtained using the maximum-likelihood method. Model parameters were reasonably consistent for RION data for patients treated once daily, with D50 ranging from 94.2 to 104.7 Gy and γ from 0.88 to 1.41. Similar consistency was seen for RIRP data, which span a broad range of complication incidence, with D50 from 72.2 to 75.0 Gy and γ from 1.51 to 2.16 for patients treated twice daily, and 72.2-74.0 Gy and 0.84-1.20 for patients treated once daily. However, large variations were observed for RION in patients treated twice daily, with D50 from 96.3 to 125.2 Gy and γ from 0.80 to 1.56. Complication incidence in this dataset did not exceed 20% in any dose group. For the considered data sets, the log-logistic model tends to lead to larger D50 and lower γ compared to the other models for all datasets. Statements regarding normal tissue
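    A probit dose-response curve of the kind fitted above is fully determined by D50 and the normalized slope γ at D50. A small sketch, assuming the common parameterization NTCP = Φ(γ·√(2π)·(D − D50)/D50), in which the normalized gradient at D50 equals γ (the numeric values are illustrative, taken from the range reported for once-daily RION data, not a fit):

```python
from math import erf, sqrt, pi

def ntcp_probit(dose, d50, gamma50):
    """Probit NTCP curve: Phi(t) with t scaled so that the normalized
    slope D50 * dNTCP/dD at D = D50 equals gamma50, and NTCP = 0.5 at
    D = D50 by construction."""
    t = gamma50 * sqrt(2.0 * pi) * (dose - d50) / d50
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # Phi via the error function

p_at_d50 = ntcp_probit(100.0, 100.0, 1.2)  # 0.5 by construction
p_high = ntcp_probit(120.0, 100.0, 1.2)    # above 0.5 for D > D50
```

    The logistic, log-logistic and Poisson-based alternatives differ only in the link function used in place of Φ, which is why they can fit the mid-range of the data similarly while diverging in the tails.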

  9. Non-ignorable missingness item response theory models for choice effects in examinee-selected items.

    Science.gov (United States)

    Liu, Chen-Wei; Wang, Wen-Chung

    2017-11-01

    Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.

  10. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    Science.gov (United States)

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
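    The hurdle idea described above separates the probability of a zero cost from the distribution of strictly positive costs. A minimal frequentist sketch of that two-part structure (a simplified analogue of the full Bayesian specification, with a log-normal chosen here purely for illustration):

```python
import numpy as np

def fit_hurdle_lognormal(costs):
    """Two-part ('hurdle') cost model: a point mass at zero plus a
    log-normal for the strictly positive costs. Returns the zero
    probability, the log-scale mean and SD, and the implied mean cost
    (1 - p_zero) * E[cost | cost > 0]."""
    costs = np.asarray(costs, dtype=float)
    p_zero = np.mean(costs == 0.0)
    log_pos = np.log(costs[costs > 0.0])
    mu, sigma = log_pos.mean(), log_pos.std(ddof=1)
    mean_cost = (1.0 - p_zero) * np.exp(mu + 0.5 * sigma**2)
    return p_zero, mu, sigma, mean_cost

# Toy data: half the individuals incur zero cost.
costs = np.array([0.0, 0.0, np.e, np.e])
p_zero, mu, sigma, mean_cost = fit_hurdle_lognormal(costs)
```

    In the Bayesian version each component gets priors and the effectiveness measure is modelled conditionally on the observed costs, but the decomposition of the cost distribution is the same.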

  11. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...

  12. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    Science.gov (United States)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
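    The double-well benchmark above has an exact stationary solution against which a kernel-density scheme can be verified: for overdamped dynamics dX = −V′(X) dt + √(2D) dW, the stationary Fokker-Planck density is proportional to exp(−V(x)/D). A short sketch of that reference solution (the potential and diffusivity below are illustrative choices, not necessarily the parameters used in the paper):

```python
import numpy as np

def stationary_pdf(potential, x, diffusivity):
    """Exact stationary Fokker-Planck density p(x) ~ exp(-V(x)/D) for
    dX = -V'(X) dt + sqrt(2 D) dW, normalized on the uniform grid x."""
    p = np.exp(-potential(x) / diffusivity)
    return p / (p.sum() * (x[1] - x[0]))   # Riemann-sum normalization

# Double-well potential V(x) = x^4/4 - x^2/2, minima at x = +/- 1.
x = np.linspace(-3.0, 3.0, 2001)
p = stationary_pdf(lambda x: x**4 / 4.0 - x**2 / 2.0, x, 0.2)
```

    For low diffusivity the density concentrates sharply around the two wells, which is the regime in which the paper reports good agreement between the dynamic LSQKD prediction and the stationary solution.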

  13. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  14. Models and role models.

    Science.gov (United States)

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in the present-day society triggered us to expand our knowledge in the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study separate processes which together may lead to dental caries. Also products and novel agents could be evaluated that interfere with either of the processes. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models but also role models are of the utmost importance in bringing and guiding research and researchers. 2015 S. Karger AG, Basel

  15. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  16. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  17. Modeling Compound Flood Hazards in Coastal Embayments

    Science.gov (United States)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), preferred marginal scenario and reproduced time series of ensembles based on Monte Carlo sampling of bivariate hazard domain. 
The comparison between the resulting extreme water dynamics under the compound hazard scenarios explained above provides insight into the
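    The second and third steps above, describing the dependence between freshwater inflow and oceanic variables with a copula and drawing compound scenarios by Monte Carlo sampling, can be sketched with a Gaussian copula. The correlation and all marginal parameters below are placeholders, not values from the study; the Log-Pearson III discharge marginal follows the U.S. convention mentioned in the abstract, and a GEV is a common choice for extreme sea level:

```python
import numpy as np
from scipy import stats

# Step 2: Gaussian copula -- correlated standard normals mapped to
# uniform margins via the normal CDF.
rng = np.random.default_rng(0)
rho = 0.6                                  # assumed dependence strength
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)                      # uniform margins in (0, 1)

# Step 3: push the uniforms through illustrative marginals to obtain
# compound (discharge, sea level) scenarios for hydrodynamic forcing.
discharge = stats.pearson3(skew=0.5, loc=500.0, scale=150.0).ppf(u[:, 0])
sea_level = stats.genextreme(c=-0.1, loc=1.0, scale=0.3).ppf(u[:, 1])
```

    Scenarios with high joint probability density in the (discharge, sea level) plane would then be selected as compound design events for the hydrodynamic model.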

  18. On Diagnostic Checking of Vector ARMA-GARCH Models with Gaussian and Student-t Innovations

    Directory of Open Access Journals (Sweden)

    Yongning Wang

    2013-04-01

    Full Text Available This paper focuses on the diagnostic checking of vector ARMA (VARMA) models with multivariate GARCH errors. For a fitted VARMA-GARCH model with Gaussian or Student-t innovations, we derive the asymptotic distributions of autocorrelation matrices of the cross-product vector of standardized residuals. This is different from the traditional approach that employs only the squared series of standardized residuals. We then study two portmanteau statistics, called Q1(M) and Q2(M), for model checking. A residual-based bootstrap method is provided and demonstrated as an effective way to approximate the diagnostic checking statistics. Simulations are used to compare the performance of the proposed statistics with other methods available in the literature. In addition, we also investigate the effect of GARCH shocks on checking a fitted VARMA model. Empirical sizes and powers of the proposed statistics are investigated and the results suggest a procedure of using Q1(M) and Q2(M) jointly in diagnostic checking. The bivariate time series of FTSE 100 and DAX index returns is used to illustrate the performance of the proposed portmanteau statistics. The results show that it is important to consider the cross-product series of standardized residuals and GARCH effects in model checking.
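    The idea of testing cross-products of standardized residuals can be illustrated with a classical Ljung-Box portmanteau statistic applied to the product of two residual series. This is a simplified univariate stand-in for the paper's Q1(M)/Q2(M) statistics, not their exact construction, and the white-noise series below merely play the role of residuals from a fitted bivariate model:

```python
import numpy as np
from scipy import stats

def ljung_box(x, m):
    """Ljung-Box portmanteau statistic Q = n(n+2) * sum_k r_k^2/(n-k)
    for lags k = 1..m, with a chi-square(m) p-value."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    denom = np.sum(x**2)
    r = np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))
    return q, stats.chi2.sf(q, df=m)

rng = np.random.default_rng(1)
e1, e2 = rng.standard_normal(500), rng.standard_normal(500)
q, pval = ljung_box(e1 * e2, 10)   # white noise: expect a large p-value
```

    A small p-value on the cross-product series would flag dependence between the two components that a test on each squared series alone can miss, which is the paper's central point.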

  19. Landslide susceptibility assessment using Spatial Analysis and GIS modeling in Cluj-Napoca Metropolitan Area, Romania

    Directory of Open Access Journals (Sweden)

    Bogdan Eugen Dolean

    2017-06-01

    Full Text Available In Romania, landslides, together with the multitude of geomorphological processes linked to them, are among the most common hazards; when they manifest in vulnerable areas with important human activities, they can induce many negative effects. From this perspective, identifying the areas affected by landslides, based on GIS spatial analysis models and statistical methods, is a subject frequently discussed in the national and international literature. This research focused on the methods and practices of GIS spatial analysis, with the goal of creating a complex model and a viable methodology for assessing the probability of occurrence of landslides, applicable to any territory. The study was based on the identification and analysis, in a bivariate systemic manner, of the numerous factors involved in the production of landslides, such as topography, morphology, hydrography, geology, lithology, weather and land use. The area in which the analysis was conducted, the Metropolitan Area of Cluj-Napoca, was chosen due to the accelerated urbanization of recent years, coupled with a massive increase in the number of inhabitants, making it a space of socioeconomic importance and a real challenge for spatial planning. Applying the model to this area generated relatively good results, with a predictive power of over 80% measured in the landslide sample areas used for validation of the results, which attests to the viability of the model and to the fact that it can be used in different areas with related morphometric and environmental characteristics.

  20. Model of Cholera Forecasting Using Artificial Neural Network in Chabahar City, Iran

    Directory of Open Access Journals (Sweden)

    Zahra Pezeshki

    2016-02-01

    Full Text Available Background: Cholera, as an endemic disease, remains a health issue in Iran despite the decrease in incidence. Since forecasting epidemic diseases enables appropriate preventive actions against disease spread, different forecasting methods, including artificial neural networks, have been developed to study the parameters involved in the incidence and spread of epidemic diseases such as cholera. Objectives: In this study, cholera in the rural area of Chabahar, Iran was investigated to achieve a proper forecasting model. Materials and Methods: Data on cholera were gathered from 465 villages, of which 104 reported cholera during the ten-year study period. Logistic regression modeling and bivariate correlation were used to determine risk factors and arrive at a possible predictive model. A one-hidden-layer perceptron neural network with the backpropagation training algorithm and the sigmoid activation function was trained and tested on the two groups of infected and non-infected villages after preprocessing. To determine the validity of the prediction, the ROC diagram was used. The study variables included climate conditions and geographical parameters. Results: After determining the significant variables of cholera incidence, the described artificial neural network model was capable of forecasting cholera events among villages of the test group with accuracy of up to 80%. The highest accuracy was achieved when the model was trained with variables that were significant in the statistical analysis, indicating that the two methods confirm each other's results. Conclusions: Application of artificial neural networks assists in forecasting cholera and adopting protective measures. For a more accurate prediction, comprehensive information is required, including data on hygienic, social and demographic parameters.
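    The architecture described above, one hidden layer, sigmoid activation, trained by backpropagation and validated with ROC analysis, can be sketched with scikit-learn. The synthetic features below stand in for the climate and geographical predictors; the study's actual variables and data are not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for 465 villages with 4 predictors; the binary
# label imitates infected/non-infected status driven by two features.
rng = np.random.default_rng(0)
X = rng.standard_normal((465, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(465) > 1.0).astype(int)

# One hidden layer, sigmoid ('logistic') activation, trained by
# gradient-based backpropagation; ROC AUC summarizes validity.
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
clf.fit(X[:350], y[:350])                       # train split
auc = roc_auc_score(y[350:], clf.predict_proba(X[350:])[:, 1])
```

    An AUC well above 0.5 on the held-out villages plays the role of the ROC-based validation reported in the study.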