WorldWideScience

Sample records for regression discontinuity estimates

  1. Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025

    Science.gov (United States)

    Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.

    2012-01-01

    This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…

  2. Using a Regression Discontinuity Design to Estimate the Impact of Placement Decisions in Developmental Math

    Science.gov (United States)

    Melguizo, Tatiana; Bos, Johannes M.; Ngo, Federick; Mills, Nicholas; Prather, George

    2016-01-01

    This study evaluates the effectiveness of math placement policies for entering community college students on these students' academic success in math. We estimate the impact of placement decisions by using a discrete-time survival model within a regression discontinuity framework. The primary conclusion that emerges is that initial placement in a…

  3. Testing discontinuities in nonparametric regression

    KAUST Repository

Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-19

In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller [Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100]…
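
    The difference-based idea can be illustrated with a toy sketch (not the authors' exact statistic): compare one-sided local averages at each interior point and flag the location where the left-right gap is largest. The window size `k` and the data below are invented for illustration.

```python
# Minimal sketch of difference-based jump detection: a large gap between
# the local mean just right of a point and the local mean just left of it
# suggests a jump discontinuity in the regression mean function.
def one_sided_gap(y, i, k):
    """Mean of the k points at and right of i minus mean of the k points left of i."""
    left = y[i - k:i]
    right = y[i:i + k]
    return sum(right) / k - sum(left) / k

def detect_jump(y, k=5):
    """Return the index with the largest absolute one-sided gap."""
    gaps = {i: one_sided_gap(y, i, k) for i in range(k, len(y) - k)}
    return max(gaps, key=lambda i: abs(gaps[i]))

# Noiseless example: the mean jumps from 0 to 1 at index 50.
y = [0.0] * 50 + [1.0] * 50
print(detect_jump(y))  # 50
```

    In practice the gap statistic would be compared against a noise-dependent threshold rather than simply maximised.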

  5. A Bayesian Nonparametric Causal Model for Regression Discontinuity Designs

    Science.gov (United States)

    Karabatsos, George; Walker, Stephen G.

    2013-01-01

The regression discontinuity (RD) design (Thistlethwaite & Campbell, 1960; Cook, 2008) provides a framework to identify and estimate causal effects from a non-randomized design. Each subject of an RD design is assigned to the treatment (versus assignment to a non-treatment) whenever her/his observed value of the assignment variable equals or…

  6. Engineering estimates versus impact evaluation of energy efficiency projects: Regression discontinuity evidence from a case study

    International Nuclear Information System (INIS)

    Lang, Corey; Siler, Matthew

    2013-01-01

Energy efficiency upgrades have been gaining widespread attention globally as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load

  7. Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design

    Directory of Open Access Journals (Sweden)

    William C. Smith

    2014-08-01

The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague randomized controlled trials (RCTs) makes them a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education Sciences to meet the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
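
    As a hedged illustration of the sharp RD steps described above, the sketch below fits a separate least-squares line on each side of the cutoff within a bandwidth and takes the gap between the two intercepts at the cutoff as the effect estimate. The simulated scores, cutoff, bandwidth, and 10-point treatment jump are all invented.

```python
import random

def ols_line(xs, ys):
    """Slope and intercept by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def sharp_rd(score, outcome, cutoff, bandwidth):
    """Sharp RD estimate: jump between the two fitted lines at the cutoff."""
    left = [(s - cutoff, y) for s, y in zip(score, outcome)
            if cutoff - bandwidth <= s < cutoff]
    right = [(s - cutoff, y) for s, y in zip(score, outcome)
             if cutoff <= s <= cutoff + bandwidth]
    _, a_left = ols_line(*zip(*left))
    _, a_right = ols_line(*zip(*right))
    return a_right - a_left

random.seed(0)
score = [random.uniform(0, 100) for _ in range(2000)]
# True model: outcome rises with the score, plus a 10-point treatment
# jump for scores at or above the cutoff of 50.
outcome = [0.5 * s + (10 if s >= 50 else 0) + random.gauss(0, 2)
           for s in score]
print(sharp_rd(score, outcome, cutoff=50, bandwidth=15))  # close to 10
```

    Real applications would also check for manipulation of the assignment variable and report bandwidth-sensitivity of the estimate.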

  8. Strengthening the Regression Discontinuity Design Using Additional Design Elements: A Within-Study Comparison

    Science.gov (United States)

    Wing, Coady; Cook, Thomas D.

    2013-01-01

    The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…

  9. Regression Discontinuity and Randomized Controlled Trial Estimates: An Application to The Mycotic Ulcer Treatment Trials.

    Science.gov (United States)

    Oldenburg, Catherine E; Venkatesh Prajna, N; Krishnan, Tiruvengada; Rajaraman, Revathi; Srinivasan, Muthiah; Ray, Kathryn J; O'Brien, Kieran S; Glymour, M Maria; Porco, Travis C; Acharya, Nisha R; Rose-Nussbaumer, Jennifer; Lietman, Thomas M

    2018-08-01

We compare results from regression discontinuity (RD) analysis to primary results of a randomized controlled trial (RCT) utilizing data from two contemporaneous RCTs for treatment of fungal corneal ulcers. Patients were enrolled in the Mycotic Ulcer Treatment Trials I and II (MUTT I & MUTT II) based on baseline visual acuity: patients with acuity ≤ 20/400 (logMAR 1.3) enrolled in MUTT I, and >20/400 in MUTT II. MUTT I investigated the effect of topical natamycin versus voriconazole on best spectacle-corrected visual acuity. MUTT II investigated the effect of topical voriconazole plus placebo versus topical voriconazole plus oral voriconazole. We compared the RD estimate (natamycin arm of MUTT I [N = 162] versus placebo arm of MUTT II [N = 54]) to the RCT estimate from MUTT I (topical natamycin [N = 162] versus topical voriconazole [N = 161]). In the RD, patients receiving natamycin had a mean improvement of 4 lines of visual acuity at 3 months (logMAR -0.39, 95% CI: -0.61, -0.17) compared to topical voriconazole plus placebo, versus 2 lines in the RCT (logMAR -0.18, 95% CI: -0.30, -0.05) compared to topical voriconazole. The RD and RCT estimates were similar, although the RD design overestimated effects compared to the RCT.

  10. Regression Discontinuity Designs Based on Population Thresholds

    DEFF Research Database (Denmark)

    Eggers, Andrew C.; Freier, Ronny; Grembi, Veronica

    In many countries, important features of municipal government (such as the electoral system, mayors' salaries, and the number of councillors) depend on whether the municipality is above or below arbitrary population thresholds. Several papers have used a regression discontinuity design (RDD...

  11. Regression Discontinuity in Prospective Evaluations: The Case of the FFVP Evaluation

    Science.gov (United States)

    Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan

    2015-01-01

    While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…

  12. How Can Comparison Groups Strengthen Regression Discontinuity Designs?

    Science.gov (United States)

    Wing, Coady; Cook, Thomas D.

    2011-01-01

    In this paper, the authors examine some of the ways that different types of non-equivalent comparison groups can be used to strengthen causal inferences based on regression discontinuity design (RDD). First, they consider a design that incorporates pre-test data on assignment scores and outcomes that were collected either before the treatment…

  13. Grades, Gender, and Encouragement: A Regression Discontinuity Analysis

    Science.gov (United States)

    Owen, Ann L.

    2010-01-01

    The author employs a regression discontinuity design to provide direct evidence on the effects of grades earned in economics principles classes on the decision to major in economics and finds a differential effect for male and female students. Specifically, for female students, receiving an A for a final grade in the first economics class is…

  14. Health Care Facility Choice and User Fee Abolition: Regression Discontinuity in a Multinomial Choice Setting

    OpenAIRE

    Steven F. Koch; Jeffrey S. Racine

    2013-01-01

    We apply parametric and nonparametric regression discontinuity methodology within a multinomial choice setting to examine the impact of public health care user fee abolition on health facility choice using data from South Africa. The nonparametric model is found to outperform the parametric model both in- and out-of-sample, while also delivering more plausible estimates of the impact of user fee abolition (i.e. the 'treatment effect'). In the parametric framework, treatment effects were relat...

  15. The price sensitivity of Medicare beneficiaries: a regression discontinuity approach.

    Science.gov (United States)

    Buchmueller, Thomas C; Grazier, Kyle; Hirth, Richard A; Okeke, Edward N

    2013-01-01

    We use 4 years of data from the retiree health benefits program of the University of Michigan to estimate the effect of price on the health plan choices of Medicare beneficiaries. During the period of our analysis, changes in the University's premium contribution rules led to substantial price changes. A key feature of this 'natural experiment' is that individuals who had retired before a certain date were exempted from having to pay any premium contributions. This 'grandfathering' creates quasi-experimental variation that is ideal for estimating the effect of price. Using regression discontinuity methods, we compare the plan choices of individuals who retired just after the grandfathering cutoff date and were therefore exposed to significant price changes to the choices of a 'control group' of individuals who retired just before that date and therefore did not experience the price changes. The results indicate a statistically significant effect of price, with a $10 increase in monthly premium contributions leading to a 2 to 3 percentage point decrease in a plan's market share. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Evaluating effects of developmental education for college students using a regression discontinuity design.

    Science.gov (United States)

    Moss, Brian G; Yeaton, William H

    2013-10-01

    Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five, sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail, for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable for different ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.

  17. A gradient estimate for solutions to parabolic equations with discontinuous coefficients

    Directory of Open Access Journals (Sweden)

    Jishan Fan

    2013-04-01

Li-Vogelius and Li-Nirenberg gave gradient estimates for solutions of strongly elliptic equations and systems of divergence form with piecewise smooth coefficients, respectively. The discontinuities of the coefficients are assumed to be given by manifolds of codimension 1, which we call manifolds of discontinuities. Their gradient estimates are independent of the distances between manifolds of discontinuities. In this paper, we give a parabolic version of their results; that is, a gradient estimate for parabolic equations of divergence form with piecewise smooth coefficients. The coefficients are assumed to be independent of time, and their discontinuities are as in the elliptic case. As an application of this estimate, we also give a pointwise gradient estimate for the fundamental solution of a parabolic operator with piecewise smooth coefficients. Both gradient estimates are independent of the distances between manifolds of discontinuities.

  18. Financial Aid and First-Year Collegiate GPA: A Regression Discontinuity Approach

    Science.gov (United States)

    Curs, Bradley R.; Harper, Casandra E.

    2012-01-01

    Using a regression discontinuity design, we investigate whether a merit-based financial aid program has a causal effect on the first-year grade point average of first-time out-of-state freshmen at the University of Oregon. Our results indicate that merit-based financial aid has a positive and significant effect on first-year collegiate grade point…

  19. Evaluating an Organizational-Level Occupational Health Intervention in a Combined Regression Discontinuity and Randomized Control Design.

    Science.gov (United States)

Sørensen, Ole H.

    2016-10-01

Organizational-level occupational health interventions have great potential to improve employees' health and well-being. However, they often compare unfavourably to individual-level interventions. This calls for improving methods for designing, implementing and evaluating organizational interventions. This paper presents and discusses the regression discontinuity design because, like the randomized controlled trial, it is a strong summative experimental design, but it typically fits organizational-level interventions better. The paper explores advantages and disadvantages of a regression discontinuity design with an embedded randomized control trial. It provides an example from an intervention study focusing on reducing sickness absence in 196 preschools. The paper demonstrates that such a design fits the organizational context, because it allows management to focus on organizations or workgroups with the most salient problems. In addition, organizations may accept an embedded randomized design because the organizations or groups with the most salient needs receive obligatory treatment as part of the regression discontinuity design. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Estimation of the Continuous and Discontinuous Leverage Effects.

    Science.gov (United States)

    Aït-Sahalia, Yacine; Fan, Jianqing; Laeven, Roger J A; Wang, Christina Dan; Yang, Xiye

    2017-01-01

    This paper examines the leverage effect, or the generally negative covariation between asset returns and their changes in volatility, under a general setup that allows the log-price and volatility processes to be Itô semimartingales. We decompose the leverage effect into continuous and discontinuous parts and develop statistical methods to estimate them. We establish the asymptotic properties of these estimators. We also extend our methods and results (for the continuous leverage) to the situation where there is market microstructure noise in the observed returns. We show in Monte Carlo simulations that our estimators have good finite sample performance. When applying our methods to real data, our empirical results provide convincing evidence of the presence of the two leverage effects, especially the discontinuous one.
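
    A deliberately naive sketch of the leverage effect described above (negative covariation between returns and volatility changes), on an invented simulated series; the paper's estimators for Itô semimartingales with microstructure noise are far more refined.

```python
import random
random.seed(1)

# Simulate a toy return/volatility pair in which volatility moves
# against the return: negative return shocks raise volatility.
vol = [0.2]
ret = []
for _ in range(5000):
    z = random.gauss(0, 1)
    ret.append(vol[-1] * z)
    vol.append(max(0.05, vol[-1] - 0.02 * z + random.gauss(0, 0.005)))

# Sample covariance between returns and contemporaneous volatility changes.
dvol = [b - a for a, b in zip(vol, vol[1:])]
n = len(ret)
mr, mv = sum(ret) / n, sum(dvol) / n
cov = sum((r - mr) * (d - mv) for r, d in zip(ret, dvol)) / n
print(cov < 0)  # negative covariation = leverage effect
```

    Decomposing this covariation into continuous and jump parts, as the paper does, requires high-frequency asymptotics well beyond this sketch.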

  2. Institutions and deforestation in the Brazilian amazon: a geographic regression discontinuity analysis

    OpenAIRE

    Bogetvedt, Ingvild Engen; Hauge, Mari Johnsrud

    2017-01-01

    This study explores the impact of institutional quality at the municipal level on deforestation in the Legal Amazon. We add to this insufficiently understood topic by implementing a geographic regression discontinuity design. By taking advantage of high-resolution spatial data on deforestation combined with an objective measure of corruption used as a proxy for institutional quality, we analyse 138 Brazilian municipalities in the period of 2002-2004. Our empirical findings show...

  3. Identification of small-scale discontinuities based on dip-oriented gradient energy entropy coherence estimation

    Science.gov (United States)

    Peng, Da; Yin, Cheng

    2017-09-01

Locating small-scale discontinuities is one of the most challenging geophysical tasks; these subtle geological features are significant since they are often associated with subsurface petroleum traps. Subtle faults, fractures, unconformities, reef textures, channel boundaries, thin-bed boundaries and other structural and stratigraphic discontinuities have subtle geological edges which may produce lateral variation in seismic expression. Among the different geophysical techniques available, 3D seismic discontinuity attributes are particularly useful for highlighting discontinuities in the seismic data. Traditional seismic discontinuity attributes are sensitive to noise and are not well suited to detecting small-scale discontinuities. Thus, we present a dip-oriented gradient energy entropy (DOGEE) coherence estimation method to detect subtle faults and structural features. The DOGEE method uses the gradient structure tensor (GST) algorithm to obtain local dip information and constructs a gradient correlation matrix to calculate gradient energy entropy. The proposed method is robust to noise and improves the clarity of fault edges, making it effective for the characterisation and interpretation of small-scale discontinuities.

  4. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

The statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
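
    For reference, the ordinary ridge regression (ORR) estimator that MUR builds on is β̂(k) = (X′X + kI)⁻¹X′y. Below is a minimal two-predictor, no-intercept sketch with invented, nearly collinear data; k = 0 recovers OLS.

```python
# Ordinary ridge regression for two predictors, solving the 2x2 system
# (X'X + kI) beta = X'y in closed form.
def ridge_2d(x1, x2, y, k):
    s11 = sum(a * a for a in x1) + k          # X'X plus ridge constant
    s22 = sum(b * b for b in x2) + k
    s12 = sum(a * b for a, b in zip(x1, x2))
    g1 = sum(a * c for a, c in zip(x1, y))    # X'y
    g2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * g1 - s12 * g2) / det, (s11 * g2 - s12 * g1) / det)

# Nearly collinear predictors: OLS coefficients are unstable, and the
# ridge constant shrinks them toward more plausible values.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 2.1, 2.9, 4.1]   # almost equal to x1
y = [2.1, 4.0, 6.1, 8.0]    # roughly x1 + x2
print(ridge_2d(x1, x2, y, k=0.0))  # OLS
print(ridge_2d(x1, x2, y, k=0.5))  # ridge
```

    The ridge solution always has a smaller coefficient norm than OLS on the same data, which is the shrinkage property the MMSE comparisons in the paper exploit.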

  5. Reducing Bias and Increasing Precision by Adding Either a Pretest Measure of the Study Outcome or a Nonequivalent Comparison Group to the Basic Regression Discontinuity Design: An Example from Education

    Science.gov (United States)

    Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin

    2015-01-01

Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of the RD design using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). A within-study comparison examines the validity of a quasi-experiment by comparing its…

  6. Block volume estimation from the discontinuity spacing measurements of mesozoic limestone quarries, Karaburun Peninsula, Turkey.

    Science.gov (United States)

    Elci, Hakan; Turk, Necdet

    2014-01-01

Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in Karaburun Peninsula were used to estimate the average block volumes that could be produced from them, using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to give block volumes of the same order as the volumetric joint count (Jv) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (Vb), the mean volumetric joint count (Jvb) and the mean block shape factor of the blocks were determined and compared with the mean in situ block volume (Vin) and volumetric joint count (Jvi) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing rock mass discontinuity spacing measurements.
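
    For context, a hedged sketch of the standard spacing-based quantities (not the authors' BQD method): the volumetric joint count Jv sums joints per metre over the joint sets, and for three roughly orthogonal sets the mean block volume is approximately the product of the mean spacings. The spacings below are invented.

```python
# Standard rock-mass quantities from mean discontinuity spacings (metres).
def volumetric_joint_count(spacings):
    """Jv: total number of joints per cubic metre, summed over joint sets."""
    return sum(1.0 / s for s in spacings)

def block_volume_orthogonal(spacings):
    """Mean block volume (cubic metres) for three mutually orthogonal sets."""
    return spacings[0] * spacings[1] * spacings[2]

s = [0.5, 1.0, 2.0]  # hypothetical mean spacings of three joint sets
print(volumetric_joint_count(s))   # 3.5
print(block_volume_orthogonal(s))  # 1.0
```

    Non-orthogonal sets require an angle correction, which is where chart-based methods like the one the paper proposes come in.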

  8. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    International Nuclear Information System (INIS)

    Lim, Hyung Jin; Sohn, Hoon

    2010-01-01

For the application of Lamb waves to structural health monitoring (SHM), understanding their physical characteristics and the interaction between Lamb waves and defects of the host structure is an important issue. In this study, reflected, transmitted and mode-converted Lamb waves at a discontinuity of a plate structure were simulated and the amplitude ratios were calculated theoretically using the modal decomposition method. The predicted results were verified by comparison with finite element method (FEM) simulations of attached PZTs and with experimental results. The comparison shows that the theoretical prediction is close to both the FEM and the experimental results. Moreover, a quantitative estimation method using the amplitude ratio of Lamb waves at a discontinuity was suggested.

  9. Skipping Class in College and Exam Performance: Evidence from a Regression Discontinuity Classroom Experiment

    Science.gov (United States)

    Dobkin, Carlos; Gil, Ricard; Marion, Justin

    2010-01-01

    In this paper we estimate the effect of class attendance on exam performance by implementing a policy in three large economics classes that required students scoring below the median on the midterm exam to attend class. This policy generated a large discontinuity in the rate of post-midterm attendance at the median of the midterm score. We…

  11. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  12. Estimating the real-world effects of expanding antiretroviral treatment eligibility: Evidence from a regression discontinuity analysis in Zambia.

    Directory of Open Access Journals (Sweden)

    Aaloke Mody

    2018-06-01

Although randomized trials have established the clinical efficacy of treating all persons living with HIV (PLWHs), expanding treatment eligibility in the real world may have additional behavioral effects (e.g., changes in retention) or lead to unintended consequences (e.g., crowding out sicker patients owing to increased patient volume). Using a regression discontinuity design, we sought to assess the effects of a previous change to Zambia's HIV treatment guidelines increasing the threshold for treatment eligibility from 350 to 500 cells/μL to anticipate effects of current global efforts to treat all PLWHs. We analyzed antiretroviral therapy (ART)-naïve adults who newly enrolled in HIV care in a network of 64 clinics operated by the Zambian Ministry of Health and supported by the Centre for Infectious Disease Research in Zambia (CIDRZ). Patients were restricted to those enrolling in a narrow window around the April 1, 2014 change to Zambian HIV treatment guidelines that raised the CD4 threshold for treatment from 350 to 500 cells/μL (i.e., August 1, 2013, to November 1, 2014). Clinical and sociodemographic data were obtained from an electronic medical record system used in routine care. We used a regression discontinuity design to estimate the effects of this change in treatment eligibility on ART initiation within 3 months of enrollment, retention in care at 6 months (defined as clinic attendance between 3 and 9 months after enrollment), and a composite of both ART initiation by 3 months and retention in care at 6 months in all new enrollees. We also performed an instrumental variable (IV) analysis to quantify the effect of actually initiating ART because of this guideline change on retention. Overall, 34,857 ART-naïve patients (39.1% male, median age 34 years [IQR 28-41], median CD4 268 cells/μL [IQR 134-430]) newly enrolled in HIV care during this period; 23,036 were analyzed after excluding patients around the threshold to allow for clinic

  13. -Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Lee HyunYoung

    2010-01-01

We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.

  14. Independent contrasts and PGLS regression estimators are equivalent.

    Science.gov (United States)

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
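
    The GLS slope formula underlying this equivalence, b = (x′V⁻¹x)⁻¹x′V⁻¹y for a regression through the origin, can be sketched for the simplest case of a known diagonal covariance; a Brownian-motion phylogenetic covariance would enter the same formula through its inverse. The data and variances below are invented.

```python
# GLS slope through the origin with a diagonal error covariance
# (weighted least squares, the simplest GLS case).
def gls_slope_origin(x, y, var):
    """b = (x' V^-1 x)^-1 x' V^-1 y for diagonal V with entries var."""
    num = sum(xi * yi / vi for xi, yi, vi in zip(x, y, var))
    den = sum(xi * xi / vi for xi, vi in zip(x, var))
    return num / den

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]  # exactly y = 2x, so any valid weighting recovers 2
print(gls_slope_origin(x, y, [1.0, 0.5, 2.0]))  # 2.0
```

    With a full (non-diagonal) V, the same formula holds with matrix inverses, which is the setting in which it coincides with the PIC slope.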

  15. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    Science.gov (United States)

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  16. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
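
    As background to the comparison above, here is a minimal sketch of the ordinary ridge estimator (X'X + kI)⁻¹X'y on simulated near-collinear data. The rule used below to pick k from the eigenvalues (shrink until the condition number of X'X + kI falls to 100) is an assumed illustration, not the article's exact criterion, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # near-exact linear relationship
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

def ridge(X, y, k):
    """Ordinary ridge estimator; k = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

ols = ridge(X, y, 0.0)

# Assumed condition-number rule: choose k so that
# cond(X'X + kI) = (lmax + k) / (lmin + k) = 100.
eigs = np.linalg.eigvalsh(X.T @ X)
k = max(0.0, (eigs.max() - 100.0 * eigs.min()) / (100.0 - 1.0))
br = ridge(X, y, k)

print(ols, br, k)
```

    With near-collinear columns the OLS coefficients are unstable (large variance), while the ridge coefficients are shrunk toward a stable compromise.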

  17. Evaluating disease management programme effectiveness: an introduction to the regression discontinuity design.

    Science.gov (United States)

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2006-04-01

    Although disease management (DM) has been in existence for over a decade, there is still much uncertainty as to its effectiveness in improving health status and reducing medical cost. The main reason is that most programme evaluations typically follow weak observational study designs that are subject to bias, most notably selection bias and regression to the mean. The regression discontinuity (RD) design may be the best alternative to randomized studies for evaluating DM programme effectiveness. The most crucial element of the RD design is its use of a 'cut-off' score on a pre-test measure to determine assignment to intervention or control. A valuable feature of this technique is that the pre-test measure does not have to be the same as the outcome measure, thus maximizing the programme's ability to use research-based practice guidelines, survey instruments and other tools to identify those individuals in greatest need of the programme intervention. Similarly, the cut-off score can be based on clinical understanding of the disease process, empirically derived, or resource-based. In the RD design, programme effectiveness is determined by a change in the pre-post relationship at the cut-off point. While the RD design is uniquely suitable for DM programme evaluation, its success will depend, in large part, on fundamental changes being made in the way DM programmes identify and assign individuals to the programme intervention.
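
    The cut-off logic described above can be sketched with a sharp RD on simulated data (all numbers invented): patients scoring below the pre-test cut-off receive the intervention, and the programme effect is the jump in the pre-post relationship at the cut-off. A common slope on both sides is assumed for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
pretest = rng.uniform(0, 100, n)            # assignment (pre-test) variable
cutoff = 50.0
treated = (pretest < cutoff).astype(float)  # below cut-off -> intervention
effect = -8.0                               # true programme effect (invented)
outcome = 0.5 * pretest + effect * treated + rng.normal(scale=2.0, size=n)

# One linear model with a treatment dummy and the centred pre-test score:
# outcome = a + tau * treated + b * (pretest - cutoff).
# tau is the discontinuity (jump) at the cut-off.
Z = np.column_stack([np.ones(n), treated, pretest - cutoff])
coef, *_ = np.linalg.lstsq(Z, outcome, rcond=None)
tau = coef[1]
print(tau)  # close to the true effect of -8
```

    With real data one would typically allow different slopes on each side of the cut-off and restrict the fit to a bandwidth around it.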

  18. Principal component regression for crop yield estimation

    CERN Document Server

    Suryanarayana, T M V

    2016-01-01

    This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as a dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to the students and researchers, starting their works on climate and agriculture, mainly focussing on estimation models. The flow of chapters takes the readers in a smooth path, in understanding climate and weather and impact of climate change, and gradually proceeds towards downscaling techniques and then finally towards development of ...

  19. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...

  20. Hierarchical Matching and Regression with Application to Photometric Redshift Estimation

    Science.gov (United States)

    Murtagh, Fionn

    2017-06-01

    This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data is to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or `photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.

  1. L2-Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Hyun Young Lee

    2010-01-01

    We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L²) error estimates of the discontinuous Galerkin approximations in both the spatial and temporal directions.

  2. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  3. Early discontinuation

    DEFF Research Database (Denmark)

    Hansen, Dorte Gilså; Felde, Lina; Gichangi, Anthony

    2007-01-01

    prevalence and rate of early discontinuation of different drugs consisting of, in this study, lipid-lowering drugs, antihypertensive drugs, antidepressants, antidiabetics and drugs against osteoporosis. Material and methods: This was a register study based on prescription data covering a 4-year period ... and consisting of 470,000 citizens. For each practice and group of drug, a 1-year prevalence for 2002 and the rate of early discontinuation among new users in 2002-2003 were estimated. Early discontinuation was defined as no prescriptions during the second half-year following the first prescription. ... There was a positive association between the prevalence of prescribing for the specific drugs studied (antidepressants, antidiabetics, drugs against osteoporosis and lipid-lowering drugs) and early discontinuation (r = 0.29-0.44), but not for anti-hypertensive drugs. The analysis of the association between prevalence...

  4. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population parameter based on a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  5. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  6. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  7. Regression Equations for Birth Weight Estimation using ...

    African Journals Online (AJOL)

    In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...

  8. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the prediction performance of multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
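
    The penalized algorithm in the paper is involved; as background, here is a minimal classical reduced-rank regression sketch (no sparsity penalty, identity error covariance assumed): the least-squares fit is projected onto its top-r right singular vectors. Dimensions and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q, r = 200, 6, 5, 2
X = rng.normal(size=(n, p))
A = rng.normal(size=(p, r))
B = rng.normal(size=(r, q))
Y = X @ A @ B + 0.1 * rng.normal(size=(n, q))   # true coefficient has rank 2

C_ols = np.linalg.lstsq(X, Y, rcond=None)[0]    # unconstrained least squares

# Classical reduced-rank solution: project the fitted values onto the
# span of their top-r right singular vectors (identity error covariance).
U, s, Vt = np.linalg.svd(X @ C_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]                           # rank-r projection in Y-space
C_rr = C_ols @ P                                # rank-r coefficient matrix

print(np.linalg.matrix_rank(C_rr))
```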

  9. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2014-01-01

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  10. Regression and Sparse Regression Methods for Viscosity Estimation of Acid Milk From it’s Sls Features

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Skytte, Jacob Lercke; Nielsen, Otto Højager Attermann

    2012-01-01

    Statistical solutions find widespread use in food and medicine quality control. We investigate the effect of different regression and sparse regression methods on a viscosity estimation problem using the spectro-temporal features from a new Sub-Surface Laser Scattering (SLS) vision system. From ... with the sparse LAR, lasso and Elastic Net (EN) sparse regression methods. Due to the inconsistent measurement conditions, Locally Weighted Scatterplot Smoothing (Loess) has been employed to alleviate the undesired variation in the estimated viscosity. The experimental results of applying the different methods show...

  11. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
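
    A minimal sketch of the core idea, with invented numbers: the probability that the pixel rain rate exceeds a threshold is modelled as a logistic function of the area-averaged rain rate, fit here by iteratively reweighted least squares. The paper's partial-likelihood treatment of dependence in the data is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
area_rain = rng.gamma(2.0, 2.0, size=n)   # area-averaged rain rate, invented
# Simulate whether each pixel's rain rate exceeds a fixed threshold:
# exceedance probability rises with the area average via a logistic link.
logit = -3.0 + 0.8 * area_rain
p_true = 1.0 / (1.0 + np.exp(-logit))
exceed = (rng.uniform(size=n) < p_true).astype(float)

# Fit logistic regression by iteratively reweighted least squares (Newton).
X = np.column_stack([np.ones(n), area_rain])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
    W = mu * (1.0 - mu)                           # IRLS weights
    beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (exceed - mu))

print(beta)  # roughly recovers the generating coefficients (-3.0, 0.8)
```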

  12. How do urban households in China respond to increasing block pricing in electricity? Evidence from a fuzzy regression discontinuity approach

    International Nuclear Information System (INIS)

    Zhang, Zibin; Cai, Wenxin; Feng, Xiangzhao

    2017-01-01

    China has been the largest electricity-consuming country since it passed the United States in 2011. Residential electricity consumption in China grew by 381.35% (12.85% per annum) between 2000 and 2013. In order to deal with this rapid growth in residential electricity consumption, an increasing block pricing policy was introduced for residential electricity consumers in China on July 1st, 2012. Using difference-in-differences models with a fuzzy regression discontinuity design, we estimate the causal effect of price on electricity consumption for urban households during the introduction of the increasing block pricing policy in Guangdong province of China. We find that consumers do not respond to a smaller (approximately 8%) increase in marginal price. However, consumers do respond to a larger increase in marginal price. An approximately 40% increase in marginal price induces an approximately 35% decrease in electricity use (284 kW h per month). Our results suggest that although increasing block pricing can affect the behavior of households with higher electricity use, there is only limited potential for overall energy conservation. - Highlights: • Estimate electricity consumption changes in response to the IBP in China. • Employ quasi-experimental approach and micro household level data in China. • Households do not respond to a smaller increase in marginal price. • 40% increase in marginal price induces a 35% decrease in electricity use.

  13. Robust median estimator in logistic regression

    Czech Academy of Sciences Publication Activity Database

    Hobza, T.; Pardo, L.; Vajda, Igor

    2008-01-01

    Roč. 138, č. 12 (2008), s. 3822-3840 ISSN 0378-3758 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MPO FI - IM3/136; GA MŠk(CZ) MTM 2006-06872 Institutional research plan: CEZ:AV0Z10750506 Keywords : Logistic regression * Median * Robustness * Consistency and asymptotic normality * Morgenthaler * Bianco and Yohai * Croux and Hasellbroeck Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.679, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/vajda-robust%20median%20estimator%20in%20logistic%20regression.pdf

  14. Estimating nonlinear selection gradients using quadratic regression coefficients: double or nothing?

    Science.gov (United States)

    Stinchcombe, John R; Agrawal, Aneil F; Hohenlohe, Paul A; Arnold, Stevan J; Blows, Mark W

    2008-09-01

    The use of regression analysis has been instrumental in allowing evolutionary biologists to estimate the strength and mode of natural selection. Although directional and correlational selection gradients are equal to their corresponding regression coefficients, quadratic regression coefficients must be doubled to estimate stabilizing/disruptive selection gradients. Based on a sample of 33 papers published in Evolution between 2002 and 2007, at least 78% of papers have not doubled quadratic regression coefficients, leading to an appreciable underestimate of the strength of stabilizing and disruptive selection. Proper treatment of quadratic regression coefficients is necessary for estimation of fitness surfaces and contour plots, canonical analysis of the gamma matrix, and modeling the evolution of populations on an adaptive landscape.
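
    The doubling rule can be verified in a small simulation: writing relative fitness as w = a + beta*z + (1/2)*gamma*z**2, the fitted quadratic regression coefficient estimates gamma/2 and must therefore be doubled. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
z = rng.normal(size=n)                 # standardised trait value
gamma = -0.4                           # true stabilising selection gradient
# Relative fitness: w = 1 + beta*z + (1/2)*gamma*z^2 + noise, with beta = 0.
w = 1.0 + 0.5 * gamma * z**2 + rng.normal(scale=0.05, size=n)

# Quadratic regression of fitness on the trait: w = a + b*z + c*z^2.
# The stabilising/disruptive gradient is 2c, not c.
X = np.column_stack([np.ones(n), z, z**2])
a, b, c = np.linalg.lstsq(X, w, rcond=None)[0]
print(2 * c)  # estimate of gamma; reporting c alone halves the gradient
```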

  15. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
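
    The synthetic comparison can be reproduced in miniature for a correctly specified Gaussian, using the closed-form CRPS of a normal forecast (Gneiting and Raftery's expression). A coarse grid search stands in for a proper optimizer; the data and grid are invented, and with the correct distributional assumption both estimates land close together, as the study reports.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
obs = rng.normal(loc=2.0, scale=1.5, size=300)   # synthetic observations

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def mean_crps(mu, sigma):
    return sum(crps_normal(mu, sigma, y) for y in obs) / len(obs)

# Maximum likelihood for a Gaussian: sample mean and standard deviation.
mu_ml, sigma_ml = obs.mean(), obs.std()

# Minimum-CRPS estimate by a coarse grid search around the MLE.
grid = [(m, s)
        for m in np.linspace(mu_ml - 0.5, mu_ml + 0.5, 21)
        for s in np.linspace(0.8 * sigma_ml, 1.2 * sigma_ml, 21)]
mu_c, sigma_c = min(grid, key=lambda p: mean_crps(p[0], p[1]))

print(mu_ml, sigma_ml, mu_c, sigma_c)
```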

  16. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea in their infants by using a bayesian log-binomial regression model in Openbugs software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was associated significantly with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point estimation and interval estimation of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age, based on model 2) between the bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimations of the PRs from the three bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression model, showing good consistency in estimating the PR. Therefore, the bayesian log-binomial regression model can effectively estimate the PR with less non-convergence and has more advantages in application compared with the conventional log-binomial regression model.

  17. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well-known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and that it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.

  18. Is There Evidence for Systematic Upcoding of ASA Physical Status Coincident with Payer Incentives? A Regression Discontinuity Analysis of the National Anesthesia Clinical Outcomes Registry.

    Science.gov (United States)

    Schonberger, Robert B; Dutton, Richard P; Dai, Feng

    2016-01-01

    Modifications in physician billing patterns have been shown to occur in response to payer incentives, but the phenomenon remains largely unexplored in billing for anesthesia services. Within the field of anesthesiology, Medicare's policy not to provide additional reimbursement for higher ASA physical status scores contrasts with the practices of most private payers, and this pattern of reimbursement introduces a change in billing incentives once patients attain Medicare eligibility. We hypothesized that, coincident with the onset of widespread Medicare eligibility at age 65 years, a discontinuity in reported ASA physical status scores would be observed after controlling for the underlying trend of increasing ASA physical status scores with age. This phenomenon would manifest as a pattern of upcoding of ASA physical status scores for patients younger than 65 years that would become less common in patients age 65 years and older. Using data on age, sex, ASA physical status scores, and type of surgery from the National Anesthesia Clinical Outcomes Registry, we used a quasi-experimental regression discontinuity design to analyze whether there was evidence for a discontinuity in reported ASA physical status scores occurring at age 65 years for the nondeferrable anesthesia services accompanying hip, femur, or lower leg fracture repair. A total of 49,850 records were analyzed. In models designed to detect regression discontinuity at 65 years of age, neither the binary variable "age ≥ 65" nor the interaction term of age × age ≥ 65 was a statistically significant predictor of the outcome of ASA physical status score. The statistical inference was unchanged when ASA physical status scores were reclassified as a binary outcome (I-II vs III-V) and when different bandwidths around age 65 years were used. To test the validity of our study design for detecting regression discontinuity, simulations of the occurrence of deliberate upcoding of ASA physical status scores

  19. Dynamic travel time estimation using regression trees.

    Science.gov (United States)

    2008-10-01

    This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...
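
    As a sketch of the approach (the report's actual features and tree configuration are not reproduced; the data are hypothetical), a tiny CART-style regression tree recovers a jump in travel time once detector occupancy passes a congestion point:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 600
occupancy = rng.uniform(0, 1, n)               # detector occupancy, invented
# Travel time jumps once the road becomes congested (hypothetical data).
travel_time = np.where(occupancy < 0.6, 10.0, 25.0) \
    + rng.normal(scale=1.0, size=n)

def build_tree(x, y, depth=0, max_depth=3, min_leaf=20):
    """Greedy CART-style split minimising the children's squared error."""
    if depth == max_depth or len(y) < 2 * min_leaf:
        return float(y.mean())                 # leaf: predict the mean
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = None
    for i in range(min_leaf, len(ys) - min_leaf):
        sse = ys[:i].var() * i + ys[i:].var() * (len(ys) - i)
        if best is None or sse < best[0]:
            best = (sse, 0.5 * (xs[i - 1] + xs[i]))
    _, thr = best
    left = build_tree(x[x <= thr], y[x <= thr], depth + 1, max_depth, min_leaf)
    right = build_tree(x[x > thr], y[x > thr], depth + 1, max_depth, min_leaf)
    return (thr, left, right)

def predict(tree, v):
    while isinstance(tree, tuple):
        thr, left, right = tree
        tree = left if v <= thr else right
    return tree

tree = build_tree(occupancy, travel_time)
print(predict(tree, 0.3), predict(tree, 0.9))  # free-flow vs congested
```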

  20. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    Science.gov (United States)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity may exist among the predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach in handling multicollinearity. The effect of high leverage points is then investigated on the performance of the logistic ridge regression estimator through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.

  1. A SAS-macro for estimation of the cumulative incidence using Poisson regression

    DEFF Research Database (Denmark)

    Waltoft, Berit Lindum

    2009-01-01

    the hazard rates, and the hazard rates are often estimated by the Cox regression. This procedure may not be suitable for large studies due to limited computer resources. Instead one uses Poisson regression, which approximates the Cox regression. Rosthøj et al. presented a SAS-macro for the estimation ... of the cumulative incidences based on the Cox regression. I present the functional form of the probabilities and variances when using piecewise constant hazard rates, and a SAS-macro for the estimation using Poisson regression. The use of the macro is demonstrated through examples and compared to the macro presented...
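
    With piecewise constant hazards the cumulative incidence has a simple closed form, which is what makes the Poisson-regression route tractable: within each interval, the cause-specific increment is the fraction h1/(h1+h2) of the drop in overall survival. A sketch with invented hazards for two competing causes:

```python
import math

# Piecewise constant cause-specific hazards (per year) on yearly intervals;
# all values are invented for illustration.
cuts = [0.0, 1.0, 2.0, 3.0]          # interval boundaries (years)
h1 = [0.02, 0.03, 0.05]              # hazard of the event of interest
h2 = [0.01, 0.01, 0.02]              # hazard of the competing event

def cuminc(cuts, h1, h2):
    """Cumulative incidence of cause 1 under competing risks."""
    S, ci = 1.0, 0.0                 # overall survival, cumulative incidence
    for j in range(len(h1)):
        width = cuts[j + 1] - cuts[j]
        tot = h1[j] + h2[j]
        S_next = S * math.exp(-tot * width)
        # Within the interval, a fraction h1/tot of the events are cause 1.
        ci += (h1[j] / tot) * (S - S_next)
        S = S_next
    return ci

print(cuminc(cuts, h1, h2))
```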

  2. A flexible fuzzy regression algorithm for forecasting oil consumption estimation

    International Nuclear Information System (INIS)

    Azadeh, A.; Khakestani, M.; Saberi, M.

    2009-01-01

    Oil consumption plays a vital role in the socio-economic development of most countries. This study presents a flexible fuzzy regression algorithm for forecasting oil consumption based on standard economic indicators. The standard indicators are annual population, cost of crude oil import, gross domestic production (GDP) and annual oil production in the last period. The proposed algorithm uses analysis of variance (ANOVA) to select either fuzzy regression or conventional regression for future demand estimation. The significance of the proposed algorithm is threefold. First, it is flexible and identifies the best model based on the results of ANOVA and mean absolute percentage error (MAPE), whereas previous studies consider the best-fitted fuzzy regression model based on MAPE or other relative error results. Second, the proposed model may identify conventional regression as the best model for future oil consumption forecasting because of its dynamic structure, whereas previous studies assume that fuzzy regression always provides the best solutions and estimation. Third, it utilizes the most standard independent variables for the regression models. To show the applicability and superiority of the proposed flexible fuzzy regression algorithm, data for oil consumption in Canada, the United States, Japan and Australia from 1990 to 2005 are used. The results show that the flexible algorithm provides an accurate solution for the oil consumption estimation problem. The algorithm may be used by policy makers to accurately foresee the behavior of oil consumption in various regions.

  3. Tightness of M-estimators for multiple linear regression in time series

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large argument: Particular cases are the Huber-skip and quantile regression. Tightness requires...

  4. First-line antiretroviral drug discontinuations in children.

    Directory of Open Access Journals (Sweden)

    Melony Fortuin-de Smidt

    There are a limited number of paediatric antiretroviral drug options. Characterising the long-term safety and durability of different antiretrovirals in children is important to optimise management of HIV-infected children and to determine the estimated need for alternative drugs in paediatric regimens. We describe first-line antiretroviral therapy (ART) durability and reasons for discontinuations in children at two South African ART programmes, where lopinavir/ritonavir has been recommended for children <3 years old since 2004, and abacavir replaced stavudine as the preferred nucleoside reverse transcriptase inhibitor in 2010. We included children (<16 years at ART initiation) who initiated ≥3 antiretrovirals between 2004 and 2014 with ≥1 follow-up visit on ART. We estimated the incidence of first antiretroviral discontinuation using Kaplan-Meier analysis. We determined the reasons for antiretroviral discontinuations using competing risks analysis. We used Cox regression to identify factors associated with treatment-limiting toxicity. We included 3579 children with a median follow-up duration of 41 months (IQR 14-72). At ART initiation, median age was 44 months (IQR 13-89) and median CD4 percentage was 15% (IQR 9-21%). At three and five years on ART, 72% and 26% of children respectively remained on their initial regimen. By five years on ART, the most common reasons for discontinuations were toxicity (32%), treatment failure (18%), treatment simplification (5%), drug interactions (3%), and other or unspecified reasons (18%). The incidences of treatment-limiting toxicity were 50.6 (95% CI 46.2-55.4), 1.6 (0.5-4.8), 2.0 (1.2-3.3), and 1.3 (0.6-2.8) per 1000 patient-years for stavudine, abacavir, efavirenz and lopinavir/ritonavir respectively. While stavudine was associated with a high risk of treatment-limiting toxicity, abacavir, lopinavir/ritonavir and efavirenz were well tolerated. This supports the World Health Organization recommendation to replace stavudine with...

  5. Analysis of an a posteriori error estimator for the transport equation with SN and discontinuous Galerkin discretizations

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.

    2011-01-01

    We present an error estimator for the SN neutron transport equation discretized with an arbitrary high-order discontinuous Galerkin method. As a starting point, the estimator is obtained for conforming Cartesian meshes with a uniform polynomial order for the trial space, and is then adapted to deal with non-conforming meshes and a variable polynomial order. Some numerical tests illustrate the properties of the estimator and its limitations. Finally, a simple shielding benchmark is analyzed in order to show the relevance of the estimator in an adaptive process.

  6. Factors predicting successful discontinuation of continuous renal replacement therapy.

    Science.gov (United States)

    Katayama, S; Uchino, S; Uji, M; Ohnuma, T; Namba, Y; Kawarazaki, H; Toki, N; Takeda, K; Yasuda, H; Izawa, J; Tokuhira, N; Nagata, I

    2016-07-01

    This multicentre, retrospective observational study was conducted from January 2010 to December 2010 to determine the optimal time for discontinuing continuous renal replacement therapy (CRRT) by evaluating factors predictive of successful discontinuation in patients with acute kidney injury. Analysis was performed for patients after CRRT was discontinued because of renal function recovery. Patients were divided into two groups according to the success or failure of CRRT discontinuation. In multivariate logistic regression analysis, urine output at discontinuation, creatinine level and CRRT duration were found to be significant variables (area under the receiver operating characteristic curve for urine output, 0.814). In conclusion, we found that higher urine output, lower creatinine and shorter CRRT duration were significant factors to predict successful discontinuation of CRRT.
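The reported discriminative ability of urine output (area under the ROC curve, 0.814) can be reproduced in miniature: the AUC equals the Mann-Whitney probability that a randomly chosen successful discontinuation has higher urine output than a randomly chosen failure. The groups and values below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical urine output (mL/h) at CRRT discontinuation; illustrative only.
success = np.array([45.0, 60.0, 80.0, 55.0, 70.0, 90.0])  # discontinuation held
failure = np.array([10.0, 25.0, 50.0, 15.0])              # CRRT restarted

# AUC = P(success score > failure score) + 0.5 * P(tie)  (Mann-Whitney form)
diff = success[:, None] - failure[None, :]
auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
print(round(auc, 3))  # → 0.958
```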

  7. On the estimation and testing of predictive panel regressions

    NARCIS (Netherlands)

    Karabiyik, H.; Westerlund, Joakim; Narayan, Paresh

    2016-01-01

    Hjalmarsson (2010) considers an OLS-based estimator of predictive panel regressions that is argued to be mixed normal under very general conditions. In a recent paper, Westerlund et al. (2016) show that while consistent, the estimator is generally not mixed normal, which invalidates standard normal

  8. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements taken on the left side of each subject were included in the study: hand length, hand breadth, foot length and foot breadth. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
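The comparison the study makes can be sketched on simulated data: a single multiplication factor forces the fitted line through the origin, so its errors grow away from the sample mean, while least squares fits both slope and intercept. The hand lengths, statures, and coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hand lengths (cm) and statures (cm); coefficients illustrative.
n = 200
hand = rng.normal(18.5, 1.0, n)
stature = 120.0 + 2.6 * hand + rng.normal(0.0, 3.0, n)

# Multiplication factor method: stature ~ MF * hand, MF = mean(stature)/mean(hand)
mf = stature.mean() / hand.mean()
err_mf = np.abs(mf * hand - stature)

# Regression method: stature ~ a + b * hand, fitted by least squares
b, a = np.polyfit(hand, stature, 1)
err_reg = np.abs(a + b * hand - stature)

print(err_mf.mean() > err_reg.mean())  # → True: regression has the smaller error
```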

  9. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    Science.gov (United States)

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  10. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-02-01

    Full Text Available The aim of this paper is the estimation of binary logistic regression parameters that maximize the log-likelihood function while improving association indicators. In this paper the parameter estimation steps are explained, and then measures of association are introduced and their calculation analyzed. Moreover, a new set of related indicators based on membership degree level is proposed. Association measures express the number of success responses against failures in a given number of independent Bernoulli trials. During parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters throughout the iterative procedure. Proposing a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure is the innovation of this study.
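The iterative maximum-likelihood procedure the abstract refers to is, in its standard formulation, Newton-Raphson (equivalently, iteratively reweighted least squares). A minimal sketch on synthetic data; the true coefficients are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Bernoulli data with illustrative coefficients (intercept -0.5, slope 1.2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

# Newton-Raphson (IRLS) iterations maximizing the Bernoulli log-likelihood
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted success probabilities
    W = mu * (1.0 - mu)                    # Bernoulli variance weights
    step = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(np.round(beta, 2))  # close to the true (-0.5, 1.2)
```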

  11. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
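A minimal Gaussian GEE with an exchangeable working correlation can be sketched directly: alternate a moment estimate of the common within-cluster correlation with a generalized-least-squares step for the regression parameters. The data and true parameters below are synthetic, and this sketch deliberately omits the small-sample bias corrections (GEEBc, GEEBr) the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic longitudinal data: 20 subjects, 4 visits each; true beta = (1.0, 0.5).
# A shared subject effect induces exchangeable within-subject correlation.
n_sub, n_rep = 20, 4
clusters = []
for i in range(n_sub):
    x = rng.normal(size=n_rep)
    y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0) + rng.normal(0.0, 0.5, n_rep)
    clusters.append((np.column_stack([np.ones(n_rep), x]), y))

# Initialize beta by ordinary least squares on the pooled data
X_all = np.vstack([X for X, _ in clusters])
y_all = np.concatenate([y for _, y in clusters])
beta, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)

# GEE iterations: moment estimate of the exchangeable correlation alpha,
# then a GLS update of beta using the working covariance.
for _ in range(20):
    sigma2 = (y_all - X_all @ beta).var()
    r = [(y - X @ beta) / np.sqrt(sigma2) for X, y in clusters]
    prods = [ri[j] * ri[k] for ri in r for j in range(n_rep) for k in range(j + 1, n_rep)]
    alpha = np.mean(prods)
    R = np.full((n_rep, n_rep), alpha) + (1.0 - alpha) * np.eye(n_rep)
    Ri = np.linalg.inv(R)
    A = sum(X.T @ Ri @ X for X, _ in clusters)
    b = sum(X.T @ Ri @ y for X, y in clusters)
    beta = np.linalg.solve(A, b)

print(np.round(beta, 2), round(alpha, 2))
```

For real analyses, established implementations such as statsmodels' GEE (Python) or geepack (R) should be preferred over a hand-rolled sketch like this.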

  12. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, consideration was given to the fact that the input data for the required regression model should be easily extractable from sketches or the scope definition of the project. Eleven regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r² for the developed models ranges from 0.92 to 0.98, which indicates that the predicted values from the forecast models fit well with the real-life data. The values of the mean absolute percentage error (MAPE) of the developed regression models range from 13% to 31%; the results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
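The model form and the two fit measures the study reports (r² and MAPE) can be illustrated on hypothetical project data; the cost coefficients below are invented, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical road projects: length (km) and width (m) -> total cost (US$).
n = 131
length = rng.uniform(0.5, 10.0, n)
width = rng.uniform(4.0, 12.0, n)
cost = 20_000 + 45_000 * length + 8_000 * width + rng.normal(0.0, 15_000, n)

# Fit cost = b0 + b1*length + b2*width by ordinary least squares
X = np.column_stack([np.ones(n), length, width])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

# Goodness-of-fit measures used in the study: r^2 and MAPE
r2 = 1.0 - np.sum((cost - pred) ** 2) / np.sum((cost - cost.mean()) ** 2)
mape = 100.0 * np.mean(np.abs((cost - pred) / cost))
print(round(r2, 3), round(mape, 1))
```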

  13. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations, facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.
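The gain from higher-order multivariable polynomial terms can be sketched as follows: expand two inputs into all monomials up to a chosen order and fit by least squares. The data and coefficients are synthetic, chosen only to make the surface genuinely non-linear.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical physiological features (x1, x2) and an affective rating y
n = 300
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = 0.5 + x1 - 0.8 * x2 + 0.6 * x1 * x2 + 0.9 * x2 ** 2 + rng.normal(0.0, 0.05, n)

def design(order):
    """All monomials x1^(t-i) * x2^i with t <= order, as a design matrix."""
    cols = [x1 ** (t - i) * x2 ** i for t in range(order + 1) for i in range(t + 1)]
    return np.column_stack(cols)

def fit_corr(order):
    X = design(order)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

r_linear, r_quadratic = fit_corr(1), fit_corr(2)
print(round(r_linear, 3), round(r_quadratic, 3))  # the quadratic fit correlates better
```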

  14. Estimating monotonic rates from biological data using local linear regression.

    Science.gov (United States)

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
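In the spirit of LoLinR (a Python sketch, not the R package's actual algorithm), one can fit ordinary least squares on every sufficiently long window of a noisy time series and keep the window whose linear fit scores best, here by R². The trace below is hypothetical, with a non-linear settling phase followed by a truly linear decline of slope -0.5.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trace: exponential settling plus a linear decline of -0.5 units/min
t = np.linspace(0.0, 30.0, 121)
y = 2.0 * np.exp(-t / 3.0) - 0.5 * t + 10.0 + rng.normal(0.0, 0.05, t.size)

# Fit OLS on every window of at least `min_pts` consecutive points and keep
# the window whose linear fit has the highest R^2.
min_pts = 40
best_r2, best_slope = -np.inf, None
for i in range(t.size - min_pts + 1):
    for j in range(i + min_pts, t.size + 1):
        slope, intercept = np.polyfit(t[i:j], y[i:j], 1)
        resid = y[i:j] - (intercept + slope * t[i:j])
        r2 = 1.0 - resid.var() / y[i:j].var()
        if r2 > best_r2:
            best_r2, best_slope = r2, slope

print(round(best_slope, 2))  # close to the true rate of -0.5
```

The chosen window excludes the curved settling phase automatically, which is exactly the ad hoc manual truncation the package is meant to replace.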

  15. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
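Modified Poisson regression is a Poisson GLM with a log link fitted to the binary outcome, paired with a sandwich (robust) variance estimate. A numpy-only sketch on synthetic independent data (the clustered GEE extension the article studies is not shown here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Binary outcome with true relative risk exp(0.4) per unit of x; synthetic data
n = 2000
x = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.2 * np.exp(0.4 * x))   # log link: P(Y=1) = exp(b0 + b1*x)
X = np.column_stack([np.ones(n), x])

# Poisson regression (log link) fitted to the binary outcome by Newton-Raphson
beta = np.array([np.log(y.mean()), 0.0])
for _ in range(50):
    mu = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# Robust (sandwich) variance corrects the misspecified Poisson variance
mu = np.exp(X @ beta)
A = np.linalg.inv(X.T @ (mu[:, None] * X))
B = X.T @ (((y - mu) ** 2)[:, None] * X)
robust_se = np.sqrt(np.diag(A @ B @ A))
print(round(np.exp(beta[1]), 2), round(robust_se[1], 3))  # RR estimate near 1.49
```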

  16. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  17. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  18. Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions.

    Science.gov (United States)

    Rativa, Diego; Fernandes, Bruno J T; Roque, Alexandre

    2018-01-01

    Height and weight are measurements used to track nutritional disease, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or are unable to communicate, and such factors may prevent accurate direct measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian process regression, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all cases, the predictions are insensitive to ethnicity and to gender if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care.

  19. Predictors of discontinuation of antipsychotic medication and subsequent outcomes in the European First Episode Schizophrenia Trial (EUFEST).

    Science.gov (United States)

    Landolt, Karin; Rössler, Wulf; Ajdacic-Gross, Vladeta; Derks, Eske M; Libiger, Jan; Kahn, René S; Fleischhacker, W Wolfgang

    2016-04-01

    This study had two aims: to describe patients suffering from first-episode schizophrenia who had stopped taking any antipsychotic medication, and to gain information on the predictors of successful discontinuation. We investigated data from the European First Episode Schizophrenia Trial (EUFEST). From the 325 patients included, 15.7% discontinued all antipsychotic medication. In a first analysis, clinical and sociodemographical predictors of discontinuing any antipsychotic medication were identified, using Cox regression. In the second analysis, logistic regression was used to determine variables associated with those patients who had stopped taking antipsychotic medication and had a favourable outcome, i.e., successful discontinuation. A good outcome was defined as a) having had no relapse within the whole observation period (80.6%), and b) having had no relapse and symptomatic remission at 12-month-follow-up (37.2%). Cox regression revealed that a higher proportion of patients from Western European countries and Israel stopped antipsychotic medication than from Central and Eastern European countries, that relapse was associated with discontinuation, and that discontinuers had lower compliance and higher quality of life. Predictors of successful discontinuation differed with the outcome definition used. Using definition b), successful discontinuers had a better baseline prognosis and better baseline social integration. Using definition a), successful discontinuers more often were from Western European countries. Region and clinical factors were associated with discontinuation. Prognosis and social integration played an important role in predicting successful discontinuation. As this study had several limitations, for example the observational design regarding discontinuation, further studies are needed to identify predictors of successful discontinuation. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. The Collinearity Free and Bias Reduced Regression Estimation Project: The Theory of Normalization Ridge Regression. Report No. 2.

    Science.gov (United States)

    Bulcock, J. W.; And Others

    Multicollinearity refers to the presence of highly intercorrelated independent variables in structural equation models, that is, models estimated by using techniques such as least squares regression and maximum likelihood. There is a problem of multicollinearity in both the natural and social sciences where theory formulation and estimation is in…
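The variance-inflation problem that ridge-type estimators address can be demonstrated numerically: with near-duplicate predictors, ordinary least squares coefficients vary wildly across noise draws, while a small ridge penalty stabilizes them. This is a generic ridge sketch on synthetic data, not the report's normalization ridge regression.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two highly collinear predictors; design and coefficients are illustrative
n = 100
z = rng.normal(size=n)
x1 = z + rng.normal(0.0, 0.02, n)
x2 = z + rng.normal(0.0, 0.02, n)
X = np.column_stack([x1, x2])

def coef_variance(k):
    """Monte Carlo variance of the ridge estimator (k = 0 gives OLS)."""
    betas = []
    for rep in range(200):
        y = x1 + x2 + np.random.default_rng(rep).normal(0.0, 1.0, n)
        b = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
        betas.append(b)
    return np.var(np.array(betas), axis=0).sum()

var_ols, var_ridge = coef_variance(0.0), coef_variance(1.0)
print(var_ridge < var_ols)  # → True: ridge trades a little bias for less variance
```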

  1. Simultaneous Estimation of Regression Functions for Marine Corps Technical Training Specialties.

    Science.gov (United States)

    Dunbar, Stephen B.; And Others

    This paper considers the application of Bayesian techniques for simultaneous estimation to the specification of regression weights for selection tests used in various technical training courses in the Marine Corps. Results of a method for m-group regression developed by Molenaar and Lewis (1979) suggest that common weights for training courses…

  2. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Mü ller, Ursula U.; Keilegom, Ingrid Van; Chatterjee, Nilanjan

    2012-01-01

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.

  3. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

    Wei, Jiawei

    2012-12-04

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.

  4. Inverse estimation of multiple muscle activations based on linear logistic regression.

    Science.gov (United States)

    Sekiya, Masashi; Tsuji, Toshiaki

    2017-07-01

    This study deals with a technique to estimate muscle activity from movement data using a statistical model. Linear regression (LR) models and artificial neural networks (ANN) are well-known statistical models for such use. Although an ANN has high estimation capability, in clinical applications a lack of data often leads to performance deterioration. On the other hand, the LR model has limited generalization performance. We therefore propose a muscle activity estimation method that improves generalization performance through the use of a linear logistic regression model. The proposed method was compared with the LR model and an ANN in a verification experiment with 7 participants. As a result, the proposed method showed better generalization performance than the conventional methods in various tasks.

  5. Optimized support vector regression for drilling rate of penetration estimation

    Science.gov (United States)

    Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa

    2015-12-01

    In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while fulfilling requirements of personal safety, environmental protection, adequate information on penetrated formations, and productivity. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called 'optimized support vector regression' is employed to formulate the relationship between the input variables and ROP. The algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization improved the support vector regression performance by selecting proper values for its parameters. In order to evaluate the ability of the optimization algorithms to enhance SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG), which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved further improvement in the prediction accuracy of SVR compared to both the GA and HPG. Moreover, a predictive model derived from a back-propagation neural network (BPNN), the traditional approach for estimating ROP, was selected for comparison with CSSVR. The comparative results revealed the superiority of CSSVR. This study indicates that CSSVR is a viable option for precise estimation of ROP.
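As a simplified stand-in for the SVR tuning problem (kernel ridge regression is used here instead of SVR to keep the sketch dependency-free, and exhaustive grid search instead of GA or cuckoo search), hyperparameters can be chosen by validation error. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical drilling data: two operating parameters -> rate of penetration
n = 200
Xd = rng.uniform(-1.0, 1.0, (n, 2))
rop = np.sin(2.0 * Xd[:, 0]) + 0.5 * Xd[:, 1] ** 2 + rng.normal(0.0, 0.05, n)
train, valid = slice(0, 150), slice(150, None)

def rbf(A, B, gamma):
    """Radial basis function kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def valid_rmse(gamma, lam):
    """Kernel ridge fit on the training split, RMSE on the validation split."""
    K = rbf(Xd[train], Xd[train], gamma)
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), rop[train])
    pred = rbf(Xd[valid], Xd[train], gamma) @ alpha
    return np.sqrt(np.mean((pred - rop[valid]) ** 2))

# Exhaustive grid search over the two hyperparameters
grid = [(g, l) for g in (0.1, 1.0, 10.0) for l in (1e-3, 1e-1, 1.0)]
best_rmse, best_params = min((valid_rmse(g, l), (g, l)) for g, l in grid)
print(best_params, round(best_rmse, 3))
```

Metaheuristics such as GA or CS replace the fixed grid with an adaptive search over the same objective, which matters when the grid would otherwise be prohibitively fine.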

  6. Effect of air quality alerts on human health: a regression discontinuity analysis in Toronto, Canada.

    Science.gov (United States)

    Chen, Hong; Li, Qiongsi; Kaufman, Jay S; Wang, Jun; Copes, Ray; Su, Yushan; Benmarhnia, Tarik

    2018-01-01

    Ambient air pollution is a major health risk globally. To reduce adverse health effects on days when air pollution is high, government agencies worldwide have implemented air quality alert programmes. Despite their widespread use, little is known about whether these programmes produce any observable public-health benefits. We assessed the effectiveness of such programmes using a quasi-experimental approach. We assembled a population-based cohort comprising all individuals who resided in the city of Toronto (Ontario, Canada) from 2003 to 2012 (about 2·6 million people). We ascertained seven health outcomes known to be affected by short-term elevation of air pollution, using provincial health administrative databases. These health outcomes were cardiovascular-related mortality, respiratory-related mortality, and hospital admissions or emergency-department visits for acute myocardial infarction, heart failure, stroke, asthma, and chronic obstructive pulmonary disease (COPD). We applied a regression discontinuity design to assess the effectiveness of an intervention (ie, the air quality alert programme). To quantify the effect of the air quality alert programme, we estimated for each outcome both the absolute rate difference and the rate ratio attributable to programme eligibility (by intention-to-treat analysis) and the alerts themselves (by two-stage regression approach), respectively. Between Jan 1, 2003, and Dec 31, 2012, on average between three and 27 daily cardiovascular or respiratory events were reported in Toronto (depending on the outcome). Alert announcements reduced asthma-related emergency-department visits by 4·73 cases per 1 000 000 people per day (95% CI 0·55-9·38), or in relative terms by 25% (95% CI 1-47). Programme eligibility also led to 2·05 (95% CI 0·07-4·00) fewer daily emergency-department visits for asthma. We did not detect a significant reduction in any other health outcome as a result of alert announcements or programme

  7. Treatment eligibility and retention in clinical HIV care: A regression discontinuity study in South Africa.

    Directory of Open Access Journals (Sweden)

    Jacob Bor

    2017-11-01

    Full Text Available Loss to follow-up is high among HIV patients not yet receiving antiretroviral therapy (ART). Clinical trials have demonstrated the clinical efficacy of early ART; however, these trials may miss an important real-world consequence of providing ART at diagnosis: its impact on retention in care. We examined the effect of immediate (versus deferred) ART on retention in care using a regression discontinuity design. The analysis included all patients (N = 11,306) entering clinical HIV care with a first CD4 count between 12 August 2011 and 31 December 2012 in a public-sector HIV care and treatment program in rural South Africa. Patients were assigned to immediate versus deferred ART eligibility, as determined by a CD4 count < 350 cells/μl, per South African national guidelines. Patients referred to pre-ART care were instructed to return every 6 months for CD4 monitoring. Patients initiated on ART were instructed to return at 6 and 12 months post-initiation and annually thereafter for CD4 and viral load monitoring. We assessed retention in HIV care at 12 months, as measured by the presence of a clinic visit, lab test, or ART initiation 6 to 18 months after the initial CD4 test. Differences in retention between patients presenting with CD4 counts just above versus just below the 350-cells/μl threshold were estimated using local linear regression models with a data-driven bandwidth and with the algorithm for selecting the bandwidth chosen ex ante. Among patients with CD4 counts close to the 350-cells/μl threshold, having an ART-eligible CD4 count (<350 cells/μl) was associated with higher 12-month retention than not having an ART-eligible CD4 count (50% versus 32%), an intention-to-treat risk difference of 18 percentage points (95% CI 11 to 23; p < 0.001). The decision to start ART was determined by CD4 count for one in four patients (25%) presenting close to the eligibility threshold (95% CI 20% to 31%; p < 0.001). In this subpopulation, having an ART-eligible CD
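The regression discontinuity estimate can be sketched on synthetic data mimicking the design: local linear fits on each side of the 350-cells/μl cutoff, with the treatment effect read off as the gap between the two fitted values at the threshold. The fixed bandwidth and retention probabilities below are illustrative, not the study's data-driven choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic version of the design: CD4 count is the running variable and ART
# eligibility (CD4 < 350) raises 12-month retention; all numbers illustrative.
n = 5000
cd4 = rng.uniform(100.0, 600.0, n)
eligible = cd4 < 350.0
p_retained = 0.32 + 0.18 * eligible - 0.0002 * (cd4 - 350.0)
retained = rng.binomial(1, p_retained)

# Local linear regression on each side of the cutoff within a fixed bandwidth
c, h = 350.0, 100.0

def value_at_cutoff(mask):
    slope, intercept = np.polyfit(cd4[mask] - c, retained[mask], 1)
    return intercept                 # fitted retention at CD4 = 350

below = eligible & (cd4 > c - h)
above = ~eligible & (cd4 < c + h)
effect = value_at_cutoff(below) - value_at_cutoff(above)
print(round(effect, 2))  # close to the simulated 18-percentage-point discontinuity
```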

  8. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
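The eigensystem analysis and principal components regression mentioned above can be sketched with an SVD: inspect how the variance concentrates in one direction under collinearity, then regress on the retained components only. The data are synthetic stand-ins for collinear flight-test regressors.

```python
import numpy as np

rng = np.random.default_rng(10)

# Three nearly identical regressors, mimicking collinear measured data
n = 200
z = rng.normal(size=n)
X = np.column_stack([z + rng.normal(0.0, 0.05, n) for _ in range(3)])
y = X.sum(axis=1) + rng.normal(0.0, 0.5, n)

# Eigensystem analysis: singular values of centered X expose the collinearity
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.round(s ** 2 / (s ** 2).sum(), 3))  # nearly all variance in one component

# Principal components regression: keep only the dominant components
k = int(np.sum(s > 0.1 * s[0]))
beta_pc = Vt[:k].T @ ((U[:, :k].T @ (y - y.mean())) / s[:k])
pred = y.mean() + Xc @ beta_pc
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

Dropping the near-null components sacrifices nothing of the fit here, which is why PCR limits the damage collinearity does to least squares.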

  9. Generalized allometric regression to estimate biomass of Populus in short-rotation coppice

    Energy Technology Data Exchange (ETDEWEB)

    Ben Brahim, Mohammed; Gavaland, Andre; Cabanettes, Alain [INRA Centre de Toulouse, Castanet-Tolosane Cedex (France). Unite Agroforesterie et Foret Paysanne

    2000-07-01

    Data from four different stands were combined to establish a single generalized allometric equation to estimate the above-ground biomass of individual Populus trees grown on short-rotation coppice. The generalized model was fitted using diameter at breast height, the mean diameter, and the mean height of each site as predictor variables and then compared with the stand-specific regressions using an F-test. Results showed that this single regression estimates tree biomass well at each stand and does not introduce bias with increasing diameter.
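
    A generalized allometric fit of this kind is typically estimated as a log-linear regression. The sketch below is hedged: variable names, units, and coefficients are illustrative, not the paper's fitted equation:

```python
import numpy as np

# Log-linear allometric model: ln(biomass) regressed on ln(DBH) plus
# stand-level mean diameter and mean height (all simulated here).
rng = np.random.default_rng(11)
n = 200
dbh = rng.uniform(2, 12, n)                 # individual diameter at breast height (cm)
d_mean = rng.uniform(4, 8, n)               # stand mean diameter
h_mean = rng.uniform(3, 7, n)               # stand mean height
biomass = 0.05 * dbh**2.4 * (h_mean / d_mean)**0.3 * np.exp(rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(dbh), np.log(d_mean), np.log(h_mean)])
beta, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
print(np.round(beta[1], 2))   # estimated DBH exponent
```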

  10. Airline loyalty (programs) across borders : A geographic discontinuity approach

    NARCIS (Netherlands)

    de Jong, Gerben; Behrens, Christiaan; van Ommeren, Jos

    2018-01-01

    We analyze brand loyalty advantages of national airlines in their domestic countries using geocoded data from a major international frequent flier program. We employ a geographic discontinuity design that estimates discontinuities in program activity at the national borders of the program's

  11. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.

  12. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    Science.gov (United States)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-06-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. We show that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than that of fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a 3-fold improvement over FFT at a moderate spatial resolution.
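
    A minimal sketch of AR spectral estimation, the general technique named above, via the Yule-Walker equations. It is applied to a synthetic tone in noise rather than a real Brillouin spectrum; the model order and frequencies are illustrative:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from the sample autocovariance (Yule-Walker)."""
    x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def ar_spectrum(a, freqs):
    """AR power spectrum up to a constant: 1 / |1 - sum_k a_k e^{-i 2 pi f k}|^2."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return 1.0 / denom

# Synthetic resonance at normalized frequency 0.2, buried in white noise.
rng = np.random.default_rng(2)
f0, n = 0.2, 4000
t = np.arange(n)
x = 2.0 * np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi)) + rng.normal(0, 1.0, n)
a = yule_walker(x, order=8)
freqs = np.linspace(0.01, 0.49, 481)
f_hat = freqs[np.argmax(ar_spectrum(a, freqs))]
print(round(f_hat, 2))
```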

  14. The impact of medical insurance for the poor in Georgia: a regression discontinuity approach.

    Science.gov (United States)

    Bauhoff, Sebastian; Hotchkiss, David R; Smith, Owen

    2011-11-01

    Improving access to health care and financial protection of the poor is a key concern for policymakers in low- and middle-income countries, but there have been few rigorous program evaluations. The Medical Insurance Program for the Poor in the republic of Georgia provides a free and extensive benefit package and operates through a publicly funded voucher program, enabling beneficiaries to choose their own private insurance company. Eligibility is determined by a proxy means test administered to applicant households. The objective of this study is to evaluate the program's impact on key outcomes including utilization, financial risk protection, and health behavior and management. A dedicated survey of approximately 3500 households around the thresholds was designed to minimize unobserved heterogeneity by sampling clusters with both beneficiary and non-beneficiary households. The research design exploits the sharp discontinuities at two regional eligibility thresholds to estimate local average treatment effects. Results suggest that the program did not affect utilization of health services but decreased mean out-of-pocket expenditures for some groups and reduced the risk of high inpatient expenditures. There are no systematic impacts on health behavior, management of chronic illnesses, and patient satisfaction. Copyright © 2010 John Wiley & Sons, Ltd.

  15. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    Science.gov (United States)

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data, such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. We propose regression estimators based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regress outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

  16. The comparison between several robust ridge regression estimators in the presence of multicollinearity and multiple outliers

    Science.gov (United States)

    Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said

    2014-09-01

    In the presence of multicollinearity and multiple outliers, statistical inference for a linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which were reported to be less sensitive to the presence of outliers. In addition, the ridge regression technique has been employed to tackle the multicollinearity problem. To mitigate both problems, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach is examined when multicollinearity and multiple outliers occur simultaneously in multiple linear regression. This study looks at the performance of several well-known robust estimators, M, MM, RIDGE, and the robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM), and Ridge MM (RMM), in such a situation. Results of the study show that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), RMM and RIDGE are more or less similar in terms of superiority over the other estimators, regardless of the number of observations, the level of collinearity, and the percentage of outliers used. However, when outliers occur in only a single direction (the y-direction), the WRMM estimator is the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
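
    A minimal sketch of the robust ridge idea: iteratively reweighted least squares with Huber weights on the residuals plus an L2 (ridge) penalty on the coefficients. This illustrates the estimator class only; it is not the paper's exact WRM/WRMM/RMM implementations, and all data and tuning constants are made up:

```python
import numpy as np

def huber_ridge(X, y, k=1.0, c=1.345, iters=50):
    """Robust ridge sketch: IRLS with Huber weights and ridge penalty k."""
    beta = np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12                      # robust scale (MAD)
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)   # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + k * np.eye(X.shape[1]), X.T @ W @ y)
    return beta

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # collinear predictors
X = np.column_stack([x1, x2])
y = X @ np.array([2.0, 1.0]) + rng.normal(scale=0.3, size=n)
y[:30] += 15.0                                 # 10% outliers in the y-direction

beta = huber_ridge(X, y, k=0.5)
print(np.round(beta, 1))
```

Under near-collinearity only the sum of the two coefficients is well identified; the Huber weights keep the outliers from biasing it.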

  17. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
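
    The regression Kriging workflow is a two-step procedure: regress the target on auxiliary covariates, then spatially interpolate the regression residuals and add them back. The sketch below substitutes inverse-distance weighting for the Kriging of residuals (a simplification, since proper Kriging requires a fitted variogram), and all coordinates, covariates, and coefficients are simulated:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 300
xy = rng.uniform(0, 100, (n, 2))            # station coordinates (km)
elev = rng.uniform(200, 1500, n)            # auxiliary covariate: elevation (m)
spatial = 50 * np.sin(xy[:, 0] / 30)        # smooth spatially structured residual
precip = 600 - 0.2 * elev + spatial + rng.normal(0, 5, n)

# Step 1: linear regression of precipitation on the covariate.
X = np.column_stack([np.ones(n), elev])
beta, *_ = np.linalg.lstsq(X, precip, rcond=None)
resid = precip - X @ beta

# Step 2: interpolate the residuals at the prediction point and add them
# back to the regression prediction (IDW stands in for Kriging here).
def predict(pt, pt_elev, power=4):
    d = np.linalg.norm(xy - pt, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return beta[0] + beta[1] * pt_elev + (w @ resid) / w.sum()

est = predict(np.array([50.0, 50.0]), pt_elev=800.0)
print(round(est))
```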

  18. Discontinuation of nicotine replacement therapy among smoking-cessation attempters.

    Science.gov (United States)

    Burns, Emily K; Levinson, Arnold H

    2008-03-01

    Nicotine replacement therapy (NRT) doubles successful quitting, but more than half of NRT users do not comply with optimal treatment regimens. From the 2005 Colorado state tobacco survey, quit attempters who utilized NRT (N=366) were analyzed in spring 2007. Descriptive and regression analyses were used to examine reasons for discontinuing NRT, length of time on NRT, and quit intentions. The reasons for discontinuing NRT were resuming smoking (34%), side effects (17%), NRT not helping with quitting (14%), quitting smoking (10%), and cost (5%). Poverty, age, and non-Latino minority status were associated with reasons for discontinuation other than quitting smoking. Having side effects was associated with a short duration of NRT use and 95% lower odds of intending to quit in the next month. In the first population-level study examining reasons for discontinuing NRT, general-population smokers who initiate NRT use when attempting to quit are highly likely to discontinue NRT prematurely. Age and culturally-appropriate medication management interventions may increase NRT compliance and improve cessation outcomes.

  19. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra

    2017-11-23

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.
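
    Iteratively reweighted least squares, one of the solvers listed above, can be sketched on a one-pixel toy version of the background problem. This is an illustration of the general IRLS-for-$\ell_1$ technique, not the paper's implementation; the data are synthetic:

```python
import numpy as np

def l1_regression(X, y, iters=100, eps=1e-8):
    """Least absolute deviations via IRLS: reweight each observation by
    the inverse of its current absolute residual and re-solve."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

# Toy background estimation: one static background pixel observed over
# time, occasionally occluded by bright foreground. The l1 fit recovers
# the background level where least squares is pulled toward the outliers.
rng = np.random.default_rng(4)
frames = np.full(100, 50.0) + rng.normal(0, 1, 100)  # true background ~50
frames[:20] = 200.0                                   # foreground occlusions
X = np.ones((100, 1))
bg_l1 = l1_regression(X, frames)[0]
bg_ls = frames.mean()
print(round(bg_l1), round(bg_ls))
```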

  1. On the robust nonparametric regression estimation for a functional regressor

    OpenAIRE

    Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias

    2009-01-01


  2. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects

    Directory of Open Access Journals (Sweden)

    Yoonseok Shin

    2015-01-01

    Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong bounds on generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimation, although it has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimation at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, it was compared with a neural network (NN) model, which has been proven to perform well in cost estimation domains. The BRT model showed results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information, such as the importance plot and structure model, which can support estimators in comprehending the decision-making process. Consequently, the boosting approach has potential applicability to preliminary cost estimation in a building construction project.
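
    The mechanics of boosted regression trees can be shown with a from-scratch miniature: gradient boosting for squared loss, where each round fits a depth-1 tree (a stump) to the current residuals. This is a didactic sketch on synthetic data, not the paper's BRT configuration:

```python
import numpy as np

def fit_stump(x, target):
    """Best depth-1 split on a single feature for squared error,
    searched over a fixed grid of quantile candidates."""
    best = None
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = target[x <= s], target[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda q: np.where(q <= s, lv, rv)

def boost(x, y, rounds=100, lr=0.2):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    base = y.mean()
    pred = np.full_like(y, base)
    stumps = []
    for _ in range(rounds):
        h = fit_stump(x, y - pred)
        pred = pred + lr * h(x)
        stumps.append(h)
    return lambda q: base + lr * sum(h(q) for h in stumps)

# Nonlinear toy "cost curve": boosting captures the step that a single
# linear fit would miss.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)
y = np.where(x < 5, 100.0, 180.0) + rng.normal(0, 5, 400)
model = boost(x, y)
err = np.abs(model(x) - y).mean()
print(round(err, 1))
```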

  3. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
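
    A minimal sketch of the regression-based norms (RBN) idea: regress a test score on demographic predictors plus estimated IQ in a normative sample, then express a new examinee's observed score as a standardized residual. The normative data, predictor names, and coefficients below are all simulated, not from the paper:

```python
import numpy as np

# Simulated normative sample.
rng = np.random.default_rng(6)
n = 500
age = rng.uniform(20, 80, n)
education = rng.uniform(8, 20, n)
est_iq = rng.normal(100, 15, n)
score = 60 - 0.2 * age + 0.8 * education + 0.1 * est_iq + rng.normal(0, 3, n)

# Fit the normative regression and keep the residual standard deviation.
X = np.column_stack([np.ones(n), age, education, est_iq])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid_sd = np.std(score - X @ beta)

def rbn_z(age, education, est_iq, observed):
    """Standardized deviation of an observed score from its RBN expectation."""
    expected = beta @ np.array([1.0, age, education, est_iq])
    return (observed - expected) / resid_sd

# Hypothetical examinee scoring well below demographic/IQ expectations.
z = rbn_z(age=65, education=12, est_iq=110, observed=50)
print(round(z, 1))
```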

  4. Estimation of Panel Data Regression Models with Two-Sided Censoring or Truncation

    DEFF Research Database (Denmark)

    Alan, Sule; Honore, Bo E.; Hu, Luojia

    2014-01-01

    This paper constructs estimators for panel data regression models with individual-specific heterogeneity and two-sided censoring and truncation. Following Powell (1986), the estimation strategy is based on moment conditions constructed from re-censored or re-truncated residuals. While these moment...

  5. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimate is quite reasonable when the leave-one-out cross validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with 3-month and 5-year T-bill rates and discover the existence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
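
    The mean-function estimator in question is of the standard Nadaraya-Watson kernel form, sketched here on a stationary toy sample (the null recurrent asymptotics of the paper are beyond a sketch; data and bandwidth are illustrative):

```python
import numpy as np

def nw_regression(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel estimator of the mean function:
    locally weighted average with a Gaussian kernel and bandwidth h."""
    d = (x_eval[:, None] - x_train[None, :]) / h
    K = np.exp(-0.5 * d**2)
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, 1000)
y = np.sin(x) + rng.normal(0, 0.2, 1000)   # nonlinear mean function
grid = np.array([-1.0, 0.0, 1.0])
m_hat = nw_regression(x, y, grid, h=0.2)
print(np.round(m_hat, 1))
```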

  6. Performance of a New Restricted Biased Estimator in Logistic Regression

    Directory of Open Access Journals (Sweden)

    Yasin ASAR

    2017-12-01

    It is known that the variance of the maximum likelihood estimator (MLE) inflates when the explanatory variables are correlated. This situation is called the multicollinearity problem. As a result, the estimations of the model may not be trustworthy. Therefore, this paper introduces a new restricted estimator (RLTE) that may be applied to get rid of the multicollinearity when the parameters lie in some linear subspace in logistic regression. The mean squared errors (MSE) and the matrix mean squared errors (MMSE) of the estimators considered in this paper are given. A Monte Carlo experiment is designed to evaluate the performances of the proposed estimator, the restricted MLE (RMLE), MLE, and the Liu-type estimator (LTE). The criterion of performance is chosen to be MSE. Moreover, a real data example is presented. According to the results, the proposed estimator has better performance than MLE, RMLE, and LTE.

  7. On the Choice of Difference Sequence in a Unified Framework for Variance Estimation in Nonparametric Regression

    KAUST Repository

    Dai, Wenlin; Tong, Tiejun; Zhu, Lixing

    2017-01-01

    Difference-based methods do not require estimating the mean function in nonparametric regression and are therefore popular in practice. In this paper, we propose a unified framework for variance estimation that combines the linear regression method with the higher-order difference estimators systematically. The unified framework has greatly enriched the existing literature on variance estimation and includes most existing estimators as special cases. More importantly, the unified framework provides a smart way to solve the challenging difference-sequence selection problem that has remained a long-standing controversial issue in nonparametric regression for several decades. Using both theory and simulations, we recommend using the ordinary difference sequence in the unified framework, no matter whether the sample size is small or the signal-to-noise ratio is large. Finally, to cater for the demands of the application, we have developed a unified R package, named VarED, that integrates the existing difference-based estimators and the unified estimators in nonparametric regression and have made it freely available in the R statistical program http://cran.r-project.org/web/packages/.
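
    In its simplest form, the ordinary (first-order) difference sequence gives the classical Rice estimator, which cancels a smooth mean function without ever fitting it:

```python
import numpy as np

def rice_variance(y):
    """First-order difference-based variance estimator:
    sigma^2_hat = sum (y_{i+1} - y_i)^2 / (2 (n - 1))."""
    d = np.diff(y)
    return (d @ d) / (2 * (len(y) - 1))

# Equally spaced design with a smooth mean; differencing removes the mean
# up to a negligible O(1/n) term, leaving twice the noise variance.
rng = np.random.default_rng(8)
n = 2000
x = np.linspace(0, 1, n)
y = np.sin(4 * np.pi * x) + rng.normal(0, 0.5, n)   # true variance 0.25
print(round(rice_variance(y), 2))
```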

  8. On the Choice of Difference Sequence in a Unified Framework for Variance Estimation in Nonparametric Regression

    KAUST Repository

    Dai, Wenlin

    2017-09-01

    Difference-based methods do not require estimating the mean function in nonparametric regression and are therefore popular in practice. In this paper, we propose a unified framework for variance estimation that combines the linear regression method with the higher-order difference estimators systematically. The unified framework has greatly enriched the existing literature on variance estimation that includes most existing estimators as special cases. More importantly, the unified framework has also provided a smart way to solve the challenging difference sequence selection problem that remains a long-standing controversial issue in nonparametric regression for several decades. Using both theory and simulations, we recommend to use the ordinary difference sequence in the unified framework, no matter if the sample size is small or if the signal-to-noise ratio is large. Finally, to cater for the demands of the application, we have developed a unified R package, named VarED, that integrates the existing difference-based estimators and the unified estimators in nonparametric regression and have made it freely available in the R statistical program http://cran.r-project.org/web/packages/.

  9. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    Science.gov (United States)

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees in the full fruit-bearing period were studied. The canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec 3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built with two multivariate regression methods: support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NVI656, and GRVI517, together with the two previously established vegetation indices NDVI670 and NDVI705, are in accordance with LAI. In the RF regression model, the calibration-set coefficient of determination (C-R2) of 0.920 and the validation-set coefficient of determination (V-R2) of 0.889 were higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error (C-RMSE) of 0.249 and the validation-set root mean square error (V-RMSE) of 0.236 were lower than those of the SVM regression model by 0.054 and 0.058, respectively. The relative percent deviations of the calibration and validation sets (C-RPD and V-RPD) reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes of the measured-versus-predicted trend lines for the calibration and validation sets (C-S and V-S) are close to 1. The estimation results of the RF regression model are better than those of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.

  10. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns the computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
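
    The classical Gauss-Newton iteration linearizes the model around the current estimate and solves the resulting linear least-squares problem at each step. A minimal sketch on a hypothetical exponential decay model (the model, data, and starting point are illustrative):

```python
import numpy as np

def gauss_newton(f, jac, theta0, x, y, iters=20):
    """Gauss-Newton: at each step solve min ||r - J step||_2 for the update,
    where r are the current residuals and J is the model Jacobian."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = y - f(x, theta)
        J = jac(x, theta)
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        theta = theta + step
    return theta

# Hypothetical model y = a * exp(-b x) with Jacobian in (a, b).
f = lambda x, th: th[0] * np.exp(-th[1] * x)
jac = lambda x, th: np.column_stack([np.exp(-th[1] * x),
                                     -th[0] * x * np.exp(-th[1] * x)])

rng = np.random.default_rng(9)
x = np.linspace(0, 4, 100)
y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.02, 100)
theta = gauss_newton(f, jac, theta0=[1.0, 1.0], x=x, y=y)
print(np.round(theta, 1))
```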

  11. Regression tools for CO2 inversions: application of a shrinkage estimator to process attribution

    International Nuclear Information System (INIS)

    Shaby, Benjamin A.; Field, Christopher B.

    2006-01-01

    In this study we perform an atmospheric inversion based on a shrinkage estimator. This method is used to estimate surface fluxes of CO2, first partitioned according to constituent geographic regions, and then according to constituent processes that are responsible for the total flux. Our approach differs from previous approaches in two important ways. The first is that the technique of linear Bayesian inversion is recast as a regression problem. Seen as such, standard regression tools are employed to analyse and reduce errors in the resultant estimates. A shrinkage estimator, which combines standard ridge regression with the linear 'Bayesian inversion' model, is introduced. This method introduces additional bias into the model with the aim of reducing variance such that errors are decreased overall. Compared with standard linear Bayesian inversion, the ridge technique seems to reduce both flux estimation errors and prediction errors. The second divergence from previous studies is that instead of dividing the world into geographically distinct regions and estimating the CO2 flux in each region, the flux space is divided conceptually into processes that contribute to the total global flux. Formulating the problem in this manner adds to the interpretability of the resultant estimates and attempts to shed light on the problem of attributing sources and sinks to their underlying mechanisms.

  12. Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods

    Directory of Open Access Journals (Sweden)

    Dick Apronti

    2016-12-01

    Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involves reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume. In verifying the model, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels. The logistic regression model was verified by estimating traffic volume thresholds and determining the percentage of roads that were accurately classified as belonging to the given thresholds. For the five thresholds, the percentage of roads classified correctly ranged from 79% to 88%. In conclusion, the verification of the models indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads. The models were recommended for use in traffic volume estimation for low volume roads in pavement management and environmental impact assessment studies.

  13. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

    Science.gov (United States)

    Huang, C.; Townshend, J.R.G.

    2003-01-01

    A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.

  14. Relative Age in School and Suicide among Young Individuals in Japan: A Regression Discontinuity Approach.

    Directory of Open Access Journals (Sweden)

    Tetsuya Matsubayashi

    Evidence collected in many parts of the world suggests that, compared to older students, students who are relatively younger at school entry tend to have worse academic performance and lower levels of income. This study examined how relative age in a grade affects suicide rates of adolescents and young adults between 15 and 25 years of age using data from Japan. We examined individual death records in the Vital Statistics of Japan from 1989 to 2010. In contrast to other countries, late entry to primary school is not allowed in Japan. We took advantage of the school entry cutoff date to implement a regression discontinuity (RD) design, assuming that the timing of births around the school entry cutoff date was randomly determined and therefore that individuals who were born just before and after the cutoff date have similar baseline characteristics. We found that those who were born right before the school cutoff date, and are thus youngest in their cohort, have higher mortality rates by suicide, compared to their peers who were born right after the cutoff date and are thus older. We also found that those with relative age disadvantage tend to follow a different career path than those with relative age advantage, which may explain their higher suicide mortality rates. Relative age effects have broader consequences than was previously supposed. This study suggests that policy intervention that alleviates the relative age effect can be important.

  15. Asymptotic normality of kernel estimator of $\\psi$-regression function for functional ergodic data

    OpenAIRE

    Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader

    2016-01-01

    In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.

  16. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance that is robust to microstructure noise, as well as a test for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
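
The regression idea in this abstract can be sketched as follows: under i.i.d. noise, E[RV] = IV + 2nω², where n is the number of returns used, so regressing realized variances from coarser subsamples on n gives a noise-robust integrated-variance estimate as the intercept. The price process, noise level, and grid of subsampling steps below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ticks = 23400                                   # one trading day at 1-s resolution
sigma = 0.01                                      # integrated variance = sigma**2
efficient = np.cumsum(rng.normal(0.0, sigma / np.sqrt(n_ticks), n_ticks))
observed = efficient + rng.normal(0.0, 5e-4, n_ticks)   # i.i.d. microstructure noise

def realized_variance(prices, step):
    returns = np.diff(prices[::step])
    return float(np.sum(returns**2))

# Regress RV on the number of returns used: the intercept estimates the
# integrated variance, the slope estimates twice the noise variance.
steps = np.arange(1, 31)
n_returns = np.array([len(observed[::s]) - 1 for s in steps], dtype=float)
rv = np.array([realized_variance(observed, s) for s in steps])
noise_slope, iv_estimate = np.polyfit(n_returns, rv, 1)
```

At the highest sampling frequency the noise bias dominates (here roughly a hundredfold), which is exactly what the intercept of the regression removes.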

  17. Saving Teens: Using a Policy Discontinuity to Estimate the Effects of Medicaid Eligibility

    OpenAIRE

    Bruce D. Meyer; Laura R. Wherry

    2012-01-01

    This paper uses a policy discontinuity to identify the immediate and long-term effects of public health insurance coverage during childhood. Our identification strategy exploits a unique feature of several early Medicaid expansions that extended eligibility only to children born after September 30, 1983. This feature resulted in a large discontinuity in the lifetime years of Medicaid eligibility of children at this birthdate cutoff. Those with family incomes at or just below the poverty line ...

  18. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e., the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. Determining a predictor-specific degree of smoothing increased the accuracy.

  19. Identifying the factors underlying discontinuation of triptans.

    Science.gov (United States)

    Wells, Rebecca E; Markowitz, Shira Y; Baron, Eric P; Hentz, Joseph G; Kalidas, Kavita; Mathew, Paul G; Halker, Rashmi; Dodick, David W; Schwedt, Todd J

    2014-02-01

    To identify factors associated with triptan discontinuation among migraine patients. It is unclear why many migraine patients who are prescribed triptans discontinue this treatment. This study investigated correlates of triptan discontinuation, with a focus on potentially modifiable factors to improve compliance. This multicenter cross-sectional survey (n = 276) was performed at US tertiary care headache clinics. Headache fellows who were members of the American Headache Society Headache Fellows Research Consortium recruited episodic and chronic migraine patients who were current triptan users (use within prior 3 months and for ≥1 year) or past triptan users (no use within 6 months; prior use within 2 years). Univariate analyses were first completed to compare current triptan users to past users for: migraine characteristics, other migraine treatments, triptan education, triptan efficacy, triptan side effects, type of prescribing provider, Migraine Disability Assessment (MIDAS) scores, and Beck Depression Inventory (BDI) scores. Then, a multivariable logistic regression model was selected from all possible combinations of predictor variables to determine the factors that best correlated with triptan discontinuation. Compared with those still using triptans (n = 207), those who had discontinued use (n = 69) had higher rates of medication overuse (30 vs. 18%, P = .04) and were more likely to have ever used opioids for migraine treatment (57 vs. 38%, P = .006), as well as higher MIDAS (mean 63 vs. 37, P = .001) and BDI scores (mean 10.4 vs. 7.4, P = .009). Compared with discontinued users, current triptan users were more likely to have had their triptan prescribed by a specialist (neurologist, headache specialist, or pain specialist) (74 vs. 54%, P = .002) and were more likely to report headache resolution (53 vs. 14%, P < .001). In the multivariable model, discontinuation was associated with MIDAS >24 (2.6, [1.5, 4.6]), BDI >4 (2.5, [1.4, 4.5]), and a history of ever using opioids for migraine therapy (2.2, [1

  20. Treatment eligibility and retention in clinical HIV care: A regression discontinuity study in South Africa.

    Science.gov (United States)

    Bor, Jacob; Fox, Matthew P; Rosen, Sydney; Venkataramani, Atheendar; Tanser, Frank; Pillay, Deenan; Bärnighausen, Till

    2017-11-01

    Loss to follow-up is high among HIV patients not yet receiving antiretroviral therapy (ART). Clinical trials have demonstrated the clinical efficacy of early ART; however, these trials may miss an important real-world consequence of providing ART at diagnosis: its impact on retention in care. We examined the effect of immediate (versus deferred) ART on retention in care using a regression discontinuity design. The analysis included all patients (N = 11,306) entering clinical HIV care with a first CD4 count between 12 August 2011 and 31 December 2012 in a public-sector HIV care and treatment program in rural South Africa. Patients were assigned to immediate versus deferred ART eligibility as determined by a CD4-count threshold of 350 cells/μl; effects were estimated in local linear regression models with a data-driven bandwidth, with the algorithm for selecting the bandwidth chosen ex ante. Among patients with CD4 counts close to the 350-cells/μl threshold, having an ART-eligible CD4 count (<350 cells/μl) was associated with higher 12-month retention than not having an ART-eligible CD4 count (50% versus 32%), an intention-to-treat risk difference of 18 percentage points (95% CI 11 to 23; p < 0.001). The decision to start ART was determined by CD4 count for one in four patients (25%) presenting close to the eligibility threshold (95% CI 20% to 31%; p < 0.001). In this subpopulation, having an ART-eligible CD4 count was associated with higher 12-month retention than not having an ART-eligible CD4 count (91% versus 21%), a complier causal risk difference of 70 percentage points (95% CI 42 to 98; p < 0.001). The major limitations of the study are the potential for limited generalizability, the potential for outcome misclassification, and the absence of data on longer-term health outcomes. Patients who were eligible for immediate ART had dramatically higher retention in HIV care than patients who just missed the CD4-count eligibility cutoff. The clinical and population health benefits of offering immediate ART regardless
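
The complier ("fuzzy" RD) arithmetic in this abstract can be reproduced directly: the intention-to-treat jump in retention divided by the first-stage share of patients whose treatment was actually decided by the threshold. The numbers are taken from the abstract; the small gap to the reported 70 percentage points reflects rounding in the published figures:

```python
# Intention-to-treat effect: jump in 12-month retention at the CD4 eligibility cutoff.
itt_retention = 0.50 - 0.32   # 50% vs 32% retained -> 18-point ITT risk difference

# First stage: share of threshold patients whose ART start was decided by the CD4 count.
first_stage = 0.25

# Wald (complier) estimate: ITT effect scaled up by the first stage.
complier_effect = itt_retention / first_stage
```

This back-of-envelope ratio gives 0.72, i.e. about 70 percentage points among compliers, consistent with the reported complier causal risk difference.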

  1. Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2007-01-01

    We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations, in which the ordering present in the class labels, namely the shift between patches, is explicitly taken into account. Finally, we demonstrate the applicability of our approach by super-resolving synthetically generated images with global subpixel shifts and enhancing real video frames by accounting for both local integer and subpixel shifts.

  2. The importance of the chosen technique to estimate diffuse solar radiation by means of regression

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Talha; Altyn Yavuz, Arzu [Department of Statistics. Science and Literature Faculty. Eskisehir Osmangazi University (Turkey)], email: mtarslan@ogu.edu.tr, email: aaltin@ogu.edu.tr; Acikkalp, Emin [Department of Mechanical and Manufacturing Engineering. Engineering Faculty. Bilecik University (Turkey)], email: acikkalp@gmail.com

    2011-07-01

    The Ordinary Least Squares (OLS) method is one of the most frequently used for estimation of diffuse solar radiation. The data set must satisfy certain assumptions for the OLS method to work; the most important is that the error terms of the regression equation fitted by OLS follow a normal distribution. Utilizing an alternative robust estimator to obtain parameter estimates is highly effective in solving problems where normality fails due to the presence of outliers or some other factor. The purpose of this study is to investigate the value of the chosen technique for the estimation of diffuse radiation. The study describes alternative robust methods frequently used in applications and compares them with the OLS method. Comparing the data set analysis of OLS with that of the M-regression (Huber, Andrews, and Tukey) techniques, the study found that robust regression techniques are preferable to OLS because they produce smoother explanation values.
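
The contrast the study draws between OLS and Huber-type M-regression can be sketched with iteratively reweighted least squares on synthetic data containing a few gross outliers. The tuning constant c = 1.345 is the conventional choice for the Huber function, not necessarily the authors' setting:

```python
import numpy as np

def huber_fit(x, y, c=1.345, iters=50):
    """Huber M-regression via iteratively reweighted least squares (IRLS)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS starting values
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust MAD scale
        if s == 0:
            break
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, 100)
y[:5] += 30.0                                  # a few gross outliers

ols_slope = np.polyfit(x, y, 1)[0]             # badly tilted by the outliers
huber_slope = huber_fit(x, y)[1]               # close to the true slope of 2
```

The outliers receive weights near zero, so the M-estimate stays close to the clean-data fit, which is the behaviour the abstract describes as "smoother explanation values".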

  3. Outlier Detection in Regression Using an Iterated One-Step Approximation to the Huber-Skip Estimator

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    2013-01-01

    In regression we can delete outliers based upon a preliminary estimator and re-estimate the parameters by least squares based upon the retained observations. We study the properties of an iteratively defined sequence of estimators based on this idea. We relate the sequence to the Huber-skip estimator and show that the normalized estimation errors are tight and are close to a linear function of the kernel, thus providing a stochastic expansion of the estimators, which is the same as for the Huber-skip. This implies that the iterated estimator is a close approximation of the Huber-skip.
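
The iteration studied here — drop observations flagged as outliers by the current fit, re-estimate by least squares on the rest, repeat — can be sketched as follows. The cutoff and the MAD-based scale are illustrative choices, not the paper's exact specification:

```python
import numpy as np

def huber_skip(x, y, cut=2.5, iters=10):
    """Iterated one-step Huber-skip: delete observations flagged by the
    current fit, re-estimate by OLS on the retained ones, repeat."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # preliminary OLS estimator
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745        # robust residual scale
        keep = np.abs(r) <= cut * scale              # retained observations
        beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return beta

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 200)
y = 0.5 + 1.5 * x + rng.normal(0.0, 1.0, 200)
y[::20] += 20.0                                 # 10 contaminated observations
intercept, slope = huber_skip(x, y)
```

Unlike the reweighting in Huber M-regression, the skip version deletes flagged points outright; the iteration typically settles after a few rounds.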

  4. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES: in the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU), and the sample PSUs are selected with probability proportional to size; Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables, income (y) and household size (x), and the jackknife and bootstrap were used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances under both the jackknife and the bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. Under the jackknife with systematic sampling, the variance of the regression estimator exceeded that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling: with the jackknife and bootstrap, it gives the minimum variance even at the smallest sample size (467). Two-phase sampling performed poorly, and the multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
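
The ratio and regression estimators compared in the study have simple closed forms when the population mean of the auxiliary variable (household size) is known. A sketch on synthetic income data (all distributions invented; no stratification or multi-stage structure):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
hh_size = rng.integers(1, 10, N).astype(float)                    # auxiliary variable x
income = 5000.0 + 2000.0 * hh_size + rng.normal(0.0, 3000.0, N)   # study variable y
X_bar = hh_size.mean()                      # population mean of x, assumed known

idx = rng.choice(N, size=500, replace=False)                      # simple random sample
x, y = hh_size[idx], income[idx]

mean_srs = y.mean()                                   # plain sample mean
mean_ratio = X_bar * y.mean() / x.mean()              # ratio estimator
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)            # sample regression coefficient
mean_reg = y.mean() + b * (X_bar - x.mean())          # regression estimator
```

Both auxiliary-information estimators correct the sample mean using the known X̄; the regression estimator is the more flexible of the two because it does not force the fit through the origin.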

  5. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. 
The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with other state-of-the-art anomaly detection methods, and is easy to implement.

  6. Prevalence and associated factors of contraceptive discontinuation and switching among Bangladeshi married women of reproductive age

    Directory of Open Access Journals (Sweden)

    Mahumud RA

    2015-01-01

    Rashidul Alam Mahumud,1 Md Golam Hossain,2 Abdur Razzaque Sarkar,1 Md Nurul Islam,2 Md Ripter Hossain,2 Aik Saw,3 Jahangir AM Khan1,4 1Health Economics and Financing Research Group, Center for Equity and Health Systems, International Centre for Diarrhoeal Disease Research, Bangladesh, Dhaka, Bangladesh; 2Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh; 3Department of Orthopaedic Surgery, National Orthopaedic Centre of Excellence for Research and Learning, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia; 4Adjunct Faculty, Health Economics Unit, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden Introduction: Contraceptive discontinuation is a worldwide phenomenon that may be connected with low incentive to avoid pregnancy. Contraceptive discontinuation contributes substantially to unplanned pregnancy and unwanted births. Objectives: The objective of this study was to observe the prevalence of discontinuation and switching of contraceptive methods among married Bangladeshi women. In addition, the sociodemographic factors associated with contraceptive discontinuation and switching were assessed. Methods: Secondary cross-sectional data were used in this study. A total of 16,273 married Bangladeshi women of reproductive age (15–49 years) were considered, drawn from the Bangladesh Demographic and Health Survey, 2011. Logistic regression models were used to determine the relationships between key sociodemographic factors and user status. Results: The prevalence of discontinuation and switching of contraceptive methods among women was 38.4% and 15.4%, respectively. The logistic regression model demonstrated that women in their early reproductive years (25–29 years and 30–34 years) significantly more often (odds ratio [OR] = 0.84 and 0.71, respectively) discontinued use of contraceptives. Significantly higher rates of discontinuation were pronounced among women who

  7. An evaluation of regression methods to estimate nutritional condition of canvasbacks and other water birds

    Science.gov (United States)

    Sparling, D.W.; Barzen, J.A.; Lovvorn, J.R.; Serie, J.R.

    1992-01-01

    Regression equations that use mensural data to estimate body condition have been developed for several water birds. These equations often have been based on data that represent different sexes, age classes, or seasons, without being adequately tested for intergroup differences. We used proximate carcass analysis of 538 adult and juvenile canvasbacks (Aythya valisineria ) collected during fall migration, winter, and spring migrations in 1975-76 and 1982-85 to test regression methods for estimating body condition.

  8. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., Normalized Difference Vegetation Index, Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train the regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using leave-one-out cross validation with these techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high
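
Leave-one-out cross validation, the yardstick used in this comparison, can be sketched generically. The single-index linear model below is a hypothetical stand-in for MLR/PLSR, and the vegetation-index and biomass values are simulated:

```python
import numpy as np

def loocv_rmse(X, y, fit, predict):
    """Leave-one-out cross-validated RMSE for any fit/predict pair."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out observation i
        model = fit(X[mask], y[mask])
        errors.append(float(predict(model, X[i:i + 1])[0] - y[i]))
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical single-index linear model standing in for MLR/PLSR:
fit = lambda X, y: np.polyfit(X[:, 0], y, 1)
predict = lambda model, X: np.polyval(model, X[:, 0])

rng = np.random.default_rng(3)
vi = rng.uniform(0.2, 0.9, 60)                     # a vegetation index
agb = 1.0 + 12.0 * vi + rng.normal(0.0, 1.2, 60)   # biomass, t/ha
rmse = loocv_rmse(vi[:, None], agb, fit, predict)
```

With n models each trained on n-1 samples, the LOOCV RMSE approximates out-of-sample error without sacrificing any data, which is why it suits the small-sample setting the abstract describes.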

  9. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). However, some count data violate this assumption because the variance exceeds the mean (overdispersion). Ignoring overdispersion causes the standard errors to be underestimated and, in turn, leads to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution; in the presence of overdispersion, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression model for paired count data with overdispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed; the weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle overdispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis tests are obtained by the Maximum Likelihood Ratio Test (MLRT) method.

  10. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
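
The reduced rank regression estimator has a closed form: fit by OLS, then truncate the fitted values to rank r via an SVD. A sketch on simulated data with a known low-rank coefficient matrix (dimensions and noise level are invented):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Closed-form reduced rank regression: OLS fit, then truncation of the
    fitted values to the leading `rank` response directions via an SVD."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]        # projection onto the top-r fitted directions
    return B_ols @ P                   # rank-r coefficient matrix

rng = np.random.default_rng(5)
n, p, q, r = 200, 6, 5, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # rank-2 coefficients
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))
B_hat = reduced_rank_regression(X, Y, rank=r)
```

The SVD step is where the link to canonical correlations mentioned in the abstract enters: the retained directions are the leading correlated directions between predictors and responses.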

  11. Early Discontinuation of Metformin in Individuals Treated with Inhibitors of Transporters of Metformin

    DEFF Research Database (Denmark)

    Stage, Tore Bjerregaard; Lee, Moa P; Hallas, Jesper

    2016-01-01

    The aim of this study was to examine the risk of early discontinuation of metformin, as a proxy for intolerance, associated with use of drugs known to inhibit transporters involved in metformin distribution. We analysed all incident users of metformin in Denmark between 2000 and 2012 (n = 132,221) and in a cohort of US patients (n = 296,903). Risk of early discontinuation of metformin was assessed using adjusted logistic regression for 28 drugs putatively inhibiting metformin transporters and four negative controls. Increased odds ratio of early discontinuation of metformin was only associated with codeine...... drugs were associated with a decreased risk. These findings indicate that codeine use may be associated with risk of early discontinuation of metformin and could be used as a basis for further investigation.

  12. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
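
Subsample aggregating can be sketched by fitting a simple trend model to many random half-subsamples and aggregating: the mean of the fits gives the trend estimate and their spread a bootstrap-style uncertainty band. The base model (a straight line) and the groundwater-like series below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def subagging_trend(t, y, n_sub=200, frac=0.5, degree=1, seed=0):
    """Subsample aggregating (subagging): fit a polynomial trend to many
    random subsamples without replacement, then aggregate the fits."""
    rng = np.random.default_rng(seed)
    m = int(frac * len(t))
    fits = []
    for _ in range(n_sub):
        idx = rng.choice(len(t), size=m, replace=False)   # random half-subsample
        coef = np.polyfit(t[idx], y[idx], degree)
        fits.append(np.polyval(coef, t))
    fits = np.asarray(fits)
    return fits.mean(axis=0), fits.std(axis=0)   # trend estimate and uncertainty

rng = np.random.default_rng(11)
t = np.linspace(0.0, 10.0, 120)                       # e.g. years of monitoring
level = 30.0 - 0.4 * t + rng.normal(0.0, 0.8, 120)    # declining groundwater level
trend, sd = subagging_trend(t, level)
```

Because each subsample omits half the data, single outliers influence only some of the fits, which is what gives the aggregated estimate its robustness and its built-in uncertainty measure.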

  13. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In bi-response longitudinal data, correlation occurs among the repeated measurements on each subject and between the two responses. This induces autocorrelated errors, which can be handled by means of a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
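
A penalized spline of the kind referred to here can be sketched with a truncated-power basis and a ridge penalty on the knot coefficients; the smoothing parameter lam trades fidelity against smoothness. Basis degree, knot grid, and data are illustrative, and this single-response sketch ignores the bi-response covariance structure that is the article's focus:

```python
import numpy as np

def penalized_spline(x, y, knots, lam, degree=1):
    """Penalized spline: truncated-power basis plus a ridge penalty on the
    knot coefficients; lam controls the smoothness of the fitted curve."""
    cols = [x**d for d in range(degree + 1)]                   # polynomial part
    cols += [np.maximum(x - k, 0.0)**degree for k in knots]    # truncated powers
    B = np.column_stack(cols)
    # Penalize only the knot coefficients, not the polynomial part.
    D = np.diag([0.0] * (degree + 1) + [1.0] * len(knots))
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 150)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 150)   # noisy smooth signal
knots = np.linspace(0.05, 0.95, 20)
fit = penalized_spline(x, y, knots, lam=0.1)
```

With many knots the penalty, not the knot count, controls smoothness, which is why knots and smoothing parameters can be handled simultaneously as the abstract notes.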

  14. A two-stage approach to estimate spatial and spatio-temporal disease risks in the presence of local discontinuities and clusters.

    Science.gov (United States)

    Adin, A; Lee, D; Goicoa, T; Ugarte, María Dolores

    2018-01-01

    Disease risk maps for areal unit data are often estimated from Poisson mixed models with local spatial smoothing, for example by incorporating random effects with a conditional autoregressive prior distribution. However, one of the limitations is that local discontinuities in the spatial pattern are not usually modelled, leading to over-smoothing of the risk maps and a masking of clusters of hot/coldspot areas. In this paper, we propose a novel two-stage approach to estimate and map disease risk in the presence of such local discontinuities and clusters. We propose approaches in both spatial and spatio-temporal domains, where for the latter the clusters can either be fixed or allowed to vary over time. In the first stage, we apply an agglomerative hierarchical clustering algorithm to training data to provide sets of potential clusters, and in the second stage, a two-level spatial or spatio-temporal model is applied to each potential cluster configuration. The superiority of the proposed approach with regard to a previous proposal is shown by simulation, and the methodology is applied to two important public health problems in Spain, namely stomach cancer mortality across Spain and brain cancer incidence in the Navarre and Basque Country regions of Spain.

  15. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R.
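
Why logistic regression is inappropriate for a common outcome can be seen from a single 2×2 cohort table: the odds ratio it targets overstates the risk ratio that log-binomial regression estimates directly. The counts below are hypothetical:

```python
# Hypothetical cohort counts: rows are exposed/unexposed, columns event/no event.
a, b = 60, 40      # exposed:   60 events in 100 subjects
c, d = 30, 70      # unexposed: 30 events in 100 subjects

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed   # what log-binomial regression targets
odds_ratio = (a / b) / (c / d)               # what logistic regression estimates

# With a common outcome (45% overall), the odds ratio (3.5) badly
# overstates the risk ratio (2.0).
```

Only for rare outcomes do the two measures approximately agree, which is the usual justification for reading logistic-regression odds ratios as risk ratios.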

  16. Support Vector Regression-Based Adaptive Divided Difference Filter for Nonlinear State Estimation Problems

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2014-01-01

    We present a support vector regression-based adaptive divided difference filter (SVRADDF) algorithm for improving the state-estimation accuracy of nonlinear systems that are typically affected by large initial estimation errors and imprecise prior knowledge of process and measurement noises. The derivative-free SVRADDF algorithm is significantly simpler to compute than other methods and is implemented using only functional evaluations. The SVRADDF algorithm involves the use of the theoretical and actual covariance of the innovation sequence. Support vector regression (SVR) is employed to generate the adaptive factor that tunes the noise covariance at each sampling instant when the measurement update step executes, which improves the algorithm's robustness. The performance of the proposed algorithm is evaluated by estimating states for (i) an underwater nonmaneuvering target bearing-only tracking system and (ii) maneuvering target bearing-only tracking in an air-traffic control system. The simulation results show that the proposed SVRADDF algorithm exhibits better performance than a traditional DDF algorithm.

  17. Mass estimation of loose parts in nuclear power plant based on multiple regression

    International Nuclear Information System (INIS)

    He, Yuanfeng; Cao, Yanlong; Yang, Jiangxin; Gan, Chunbiao

    2012-01-01

    According to the application of the Hilbert–Huang transform to the non-stationary signal and the relation between the mass of loose parts in nuclear power plant and corresponding frequency content, a new method for loose part mass estimation based on the marginal Hilbert–Huang spectrum (MHS) and multiple regression is proposed in this paper. The frequency spectrum of a loose part in a nuclear power plant can be expressed by the MHS. The multiple regression model that is constructed by the MHS feature of the impact signals for mass estimation is used to predict the unknown masses of a loose part. A simulated experiment verified that the method is feasible and the errors of the results are acceptable. (paper)

  18. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  19. Adding a Parameter Increases the Variance of an Estimated Regression Function

    Science.gov (United States)

    Withers, Christopher S.; Nadarajah, Saralees

    2011-01-01

    The linear regression model is one of the most popular, and simplest, models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
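A small numerical check of this claim, not from the article itself: with unit error variance, the variance of the i-th fitted value equals h_ii, the i-th diagonal entry of the hat matrix H = X(X'X)⁻¹X'. For an intercept-only model h_ii = 1/n; adding one predictor gives h_ii = 1/n + (x_i − x̄)²/Sxx ≥ 1/n, so every fitted value's variance can only grow, and their sum (the trace of H) rises by exactly one per added parameter.

```python
# Closed-form hat-matrix diagonals for the two smallest linear models;
# the design points x are arbitrary illustrative values.

def hat_diag_intercept_only(n):
    return [1.0 / n] * n

def hat_diag_with_predictor(x):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]

x = [1.0, 2.0, 4.0, 7.0, 11.0]       # arbitrary design points
h0 = hat_diag_intercept_only(len(x))
h1 = hat_diag_with_predictor(x)
total0, total1 = sum(h0), sum(h1)    # trace(H) = number of parameters: 1, then 2
```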

  20. Influence of regression model and initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and running performance.

    Science.gov (United States)

    Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María

    2014-01-01

    This study investigated the influence of the regression model and initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first and excluding the first and second points. The exponential lactate threshold was greater than the polynomial equivalent in any co-ordinate condition (P performance and is independent of the initial intensity of the test.

  1. On the degrees of freedom of reduced-rank estimators in multivariate regression.

    Science.gov (United States)

    Mukherjee, A; Chen, K; Wang, N; Zhu, J

    We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example.
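For context, the "naive" degrees-of-freedom count referred to above is often taken as the number of free parameters of a rank-r p×q coefficient matrix, r(p + q − r). The sketch below computes only this naive count; it is not the paper's exact unbiased estimator, which instead depends on the thresholded singular values of the least-squares fit.

```python
# Naive parameter count for a reduced-rank regression coefficient matrix:
# a p x q matrix of rank r has r(p + q - r) free parameters.

def naive_reduced_rank_df(p, q, r):
    if not 0 <= r <= min(p, q):
        raise ValueError("rank must satisfy 0 <= r <= min(p, q)")
    return r * (p + q - r)

full = naive_reduced_rank_df(10, 8, 8)   # full rank recovers p*q = 80
rank2 = naive_reduced_rank_df(10, 8, 2)  # 2 * (10 + 8 - 2) = 32
```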

  2. Organising medication discontinuation

    DEFF Research Database (Denmark)

    Nixon, Michael; Kousgaard, Marius Brostrøm

    2016-01-01

    Three research questions were examined in this study: when does medication discontinuation occur in general practice, how is discontinuing medication handled in the GP’s practice, and how do GPs make decisions about discontinuing… medication? Methods: Twenty-four GPs were interviewed using a maximum variation sample strategy. Participant observations were done in three general practices, for one day each, totalling approximately 30 consultations. Results: The results show that different discontinuation cues (related to the type… a medication, in agreement with the patients, from a professional perspective…

  3. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks

    International Nuclear Information System (INIS)

    Elliott Campbell, J.; Moen, Jeremie C.; Ney, Richard A.; Schnoor, Jerald L.

    2008-01-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively. - Large differences in estimates of soil organic carbon stocks and annual changes in stocks for Wisconsin forestlands indicate a need for validation from forthcoming forest surveys

  4. Comparison of regression models for estimation of isometric wrist joint torques using surface electromyography

    Directory of Open Access Journals (Sweden)

    Menon Carlo

    2011-09-01

    Background: Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods: Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data-gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results: It was shown that mean adjusted coefficient of determination (Ra2) values decrease by 20%-35% for different models after one hour, while altering arm posture decreased mean Ra2 values by 64% to 74% for different models. Conclusions: Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times.
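The Ra² statistic used to score the models in this record is the adjusted coefficient of determination. A minimal sketch of that statistic follows; the torque and prediction values are made up, not SEMG data from the study.

```python
# Adjusted coefficient of determination: penalises R^2 for the number of
# predictors p relative to the sample size n.

def adjusted_r2(y, y_hat, p):
    """y: measured values, y_hat: model predictions, p: number of predictors."""
    n = len(y)
    ybar = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

torque = [0.5, 1.2, 2.1, 2.9, 4.2, 5.0]   # measured torque (N*m), illustrative
pred = [0.6, 1.0, 2.3, 3.0, 4.0, 5.1]     # model output, illustrative
ra2 = adjusted_r2(torque, pred, p=2)
```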

  5. Development of flood regressions and climate change scenarios to explore estimates of future peak flows

    Science.gov (United States)

    Burns, Douglas A.; Smith, Martyn J.; Freehafer, Douglas A.

    2015-12-31

    A new Web-based application, titled “Application of Flood Regressions and Climate Change Scenarios To Explore Estimates of Future Peak Flows”, has been developed by the U.S. Geological Survey, in cooperation with the New York State Department of Transportation, that allows a user to apply a set of regression equations to estimate the magnitude of future floods for any stream or river in New York State (exclusive of Long Island) and the Lake Champlain Basin in Vermont. The regression equations that are the basis of the current application were developed in previous investigations by the U.S. Geological Survey (USGS) and are described at the USGS StreamStats Web sites for New York (http://water.usgs.gov/osw/streamstats/new_york.html) and Vermont (http://water.usgs.gov/osw/streamstats/Vermont.html). These regression equations include several fixed landscape metrics that quantify aspects of watershed geomorphology, basin size, and land cover as well as a climate variable—either annual precipitation or annual runoff.

  6. Fitness in animals correlates with proximity to discontinuities in body mass distributions.

    Science.gov (United States)

    Angeler, David G.; Allen, Craig R.; Vila-Gispert, Anna; Almeida, David

    2014-01-01

    Discontinuous structure in landscapes may cause discontinuous, aggregated species body-mass patterns, reflecting the scales of structure available to animal communities within a landscape. Empirical analyses have shown that the location of species within body mass aggregations, which reflect this scale-specific organization, is non-random with regard to several ecological phenomena, including species extinctions. The propensity of declining species to have body masses proximate to discontinuities suggests that transition zones between scaling regimes ultimately decrease the ecological fitness of some species. We test this proposition using vulnerable and unthreatened fish species in Mediterranean streams with differing levels of human impact. We show that the proximity to discontinuities in body mass aggregations (“distance-to-edge”) of more vs. less fit individuals within vulnerable and unthreatened populations differs. Specifically, regression analysis between the scaled mass index, a proxy of animal fitness, and distance-to-edge reveals negative and positive relationships for vulnerable and unthreatened species, respectively. That is, fitness is higher close to discontinuities in vulnerable populations and toward the center of body mass aggregation groups in unthreatened populations. Our results demonstrate the suitability of the discontinuity framework for scrutinizing non-random patterns of environmental impact in populations. Further exploration of the usefulness of this method across other ecosystems and organism groups is warranted.

  7. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007.

  8. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  9. Risk of Vascular Thrombotic Events Following Discontinuation of Antithrombotics After Peptic Ulcer Bleeding.

    Science.gov (United States)

    Kim, Seung Young; Hyun, Jong Jin; Suh, Sang Jun; Jung, Sung Woo; Jung, Young Kul; Koo, Ja Seol; Yim, Hyung Joon; Park, Jong Jae; Chun, Hoon Jai; Lee, Sang Woo

    2016-04-01

    Peptic ulcer bleeding associated with antithrombotics has increased due to the growing proportion of the elderly population. Little is known about the long-term effects of discontinuing antithrombotics after peptic ulcer bleeding. The aim of this study was to evaluate whether the risk of cardiovascular events increases when antithrombotics are discontinued after ulcer bleeding. We reviewed the medical records of patients with ulcer bleeding who were taking antiplatelet agents or anticoagulants at the time of ulcer bleeding. A Cox regression model was used to adjust for potential confounders and to analyze the association between discontinuation of antithrombotic drugs after ulcer bleeding and thrombotic events such as ischemic heart disease or stroke. Of the 544 patients with ulcer bleeding, 72 patients who were taking antithrombotics and followed up for >2 months were analyzed. Forty patients discontinued antithrombotics after ulcer bleeding (discontinuation group) and 32 patients continued antithrombotics with or without transient interruption (continuation group). Thrombotic events developed more often in the discontinuation group than in the continuation group [7/32 (21.9%) vs. 1/40 (2.5%), P=0.019]. The hazard ratio for a thrombotic event when antithrombotics were continuously discontinued was 10.9 (95% confidence interval, 1.3-89.7). There were no significant differences in recurrent bleeding events between the 2 groups. Discontinuation of antithrombotics after peptic ulcer bleeding increases the risk of cardiovascular events. Therefore, caution should be taken when discontinuing antithrombotics after ulcer bleeding.

  10. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus the Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.
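The "fetal biometric average" comparator in this study assigns gestational age as the mean of the GA estimates from the four measurements. A minimal sketch of that averaging step and the error definition follows; the individual GA estimates below are hypothetical, not values from the Hadlock tables.

```python
# Average-method GA: the mean of the per-measurement GA estimates (weeks),
# compared against the crown-rump length (CRL) referent. Values are made up.

def average_method_ga(ga_bpd, ga_hc, ga_ac, ga_fl):
    return (ga_bpd + ga_hc + ga_ac + ga_fl) / 4.0

crl_ga = 24.0                                    # referent GA (weeks) from CRL dating
est = average_method_ga(24.3, 24.1, 23.6, 24.4)  # mean of the four estimates
error_weeks = est - crl_ga                       # method error vs. the referent
```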

  11. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on the capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.

  12. Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.

    Science.gov (United States)

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction of design space plots for evaluating the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.
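For orientation, the ordinary ridge estimator that the ridge-type robust variants build on is β(k) = (X'X + kI)⁻¹X'y, where k is the biasing parameter. The sketch below is a hedged toy illustration, not the robust estimators of the paper: two centred predictors with a closed-form 2×2 inverse, and illustrative data rather than the Hald cement data.

```python
# Ridge estimator for a 2-predictor, no-intercept design (columns assumed
# centred). k = 0 recovers OLS; k > 0 shrinks the coefficient vector.

def ridge_2pred(X, y, k):
    s11 = sum(r[0] * r[0] for r in X) + k
    s22 = sum(r[1] * r[1] for r in X) + k
    s12 = sum(r[0] * r[1] for r in X)
    t1 = sum(r[0] * yi for r, yi in zip(X, y))
    t2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return [(s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det]

X = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]]   # nearly collinear columns
y = [2.1, 4.0, 6.3, 7.8]
b_ols = ridge_2pred(X, y, 0.0)    # k = 0: ordinary least squares
b_ridge = ridge_2pred(X, y, 1.0)  # k = 1: shrunken, more stable estimate
```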

  13. Measuring persistence to hormonal therapy in patients with breast cancer: accounting for temporary treatment discontinuation.

    Science.gov (United States)

    Huiart, Laetitia; Ferdynus, Cyril; Dell'Aniello, Sophie; Bakiri, Naciba; Giorgi, Roch; Suissa, Samy

    2014-08-01

    Several studies have been conducted to estimate persistence to hormonal therapy among women with breast cancer (BC). Most studies focus on first treatment discontinuation. Patients, however, can have numerous periods of treatment discontinuation or treatment exposure. Our objective is to estimate persistence to tamoxifen in patients with BC while accounting for temporary treatment discontinuations, using multi-state (MS) models. A cohort of 10,806 women with BC having received at least one prescription of tamoxifen between 1998 and 2008 was constituted from the UK General Practice Research Database. We fitted a semi-Markov model with three states to estimate the probability of being off treatment over a 5-year period while accounting for temporary treatment discontinuations (transition between on treatment and off treatment) and competing risks (recurrence of BC or death). Non-persistence, as estimated from the MS model, ranged from 12.1% (95% confidence interval [95%CI]: 9.2-15.1) at 1 year to 14.9% (95%CI: 11.7-18.1) at 5 years. Estimations of non-persistence based on the Kaplan-Meier model were higher, i.e., 29.3% (95%CI: 28.1-30.6) at 5 years, as were those obtained from a competing risk model, i.e., 24.0% (95%CI: 22.9-25.1). Most temporary discontinuations (94.7%) lasted less than 6 months. Temporary treatment discontinuations are frequent and should be accounted for when measuring adherence to treatment. MS models can provide a useful framework for this sort of analysis insofar as they help describe patients' complex behavior. This may help tailor interventions that improve persistence to hormonal therapy among women with BC. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Changes in renal function after discontinuation of vitamin D analogues in advanced chronic kidney disease.

    Science.gov (United States)

    Caravaca, Francisco; Caravaca-Fontán, Fernando; Azevedo, Lilia; Luna, Enrique

    In routine clinical practice, the prescription of vitamin D analogues (VDA) in patients with chronic kidney disease (CKD) is often associated with a decline of the estimated renal function. The reason for this is not fully understood. To analyse the effects of VDA discontinuation in advanced CKD and to determine the factors associated with changes in renal function. Retrospective cohort study of adult patients with advanced CKD. The case subgroup was treated with VDA and this medication was discontinued at baseline (the first visit). The control subgroup was not treated with VDA and they were selected according to comparability principles for CKD progression by propensity score matching. The primary outcome measure was a change to both the estimated glomerular filtration rate (MDRD-GFR) and the measured glomerular filtration rate (mGFR by combined creatinine and urea clearances). Baseline parameters related to mineral metabolism and creatinine generation were analysed as potential determinants of renal function changes. The study sample consisted of 67 cases and 67 controls. Renal function improved in 67% of cases and worsened in 72% of controls (p<0.0001). Changes in MDRD-GFR for the case subgroup and the control subgroup were +0.455±0.997 vs. -0.436±1.103 ml/min/1.73 m²/month (p<0.0001), respectively. Total creatinine excretion was slightly higher in cases than in controls but the difference was not significant. According to multivariate logistic and linear regression analyses, baseline total serum calcium was one of the best determinants of both renal function recovery (odds ratio=3.49; p=0.001) and of the extent of renal function recovery (beta=0.276; p=0.001). Discontinuation of VDA treatment in CKD patients is associated with significant recovery of estimated renal function. The extent of these changes is mainly associated with baseline total serum calcium. Copyright © 2017 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U.

  15. Relation of whole blood carboxyhemoglobin concentration to ambient carbon monoxide exposure estimated using regression.

    Science.gov (United States)

    Rudra, Carole B; Williams, Michelle A; Sheppard, Lianne; Koenig, Jane Q; Schiff, Melissa A; Frederick, Ihunnaya O; Dills, Russell

    2010-04-15

    Exposure to carbon monoxide (CO) and other ambient air pollutants is associated with adverse pregnancy outcomes. While there are several methods of estimating CO exposure, few have been evaluated against exposure biomarkers. The authors examined the relation between estimated CO exposure and blood carboxyhemoglobin concentration in 708 pregnant western Washington State women (1996-2004). Carboxyhemoglobin was measured in whole blood drawn around 13 weeks' gestation. CO exposure during the month of blood draw was estimated using a regression model containing predictor terms for year, month, street and population densities, and distance to the nearest major road. Year and month were the strongest predictors. Carboxyhemoglobin level was correlated with estimated CO exposure (rho = 0.22, 95% confidence interval (CI): 0.15, 0.29). After adjustment for covariates, each 10% increase in estimated exposure was associated with a 1.12% increase in median carboxyhemoglobin level (95% CI: 0.54, 1.69). This association remained after exclusion of 286 women who reported smoking or being exposed to secondhand smoke (rho = 0.24). In this subgroup, the median carboxyhemoglobin concentration increased 1.29% (95% CI: 0.67, 1.91) for each 10% increase in CO exposure. Monthly estimated CO exposure was moderately correlated with an exposure biomarker. These results support the validity of this regression model for estimating ambient CO exposures in this population and geographic setting.

  16. Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.

    Science.gov (United States)

    Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-02-01

    Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole heart functions. It is a challenging task due to the huge complexity of the four chambers, including great appearance variations, huge shape deformation and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for one single ventricle, i.e., the left ventricle (LV), or bi-ventricles; they cannot be directly used for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We have addressed two key issues, i.e., cardiac image representation and simultaneous four-chamber volume estimation, which enables accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which can remove irrelevant information and extract discriminative features. We propose direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which enables jointly modeling nonlinear input-output relationships and capturing four-chamber interdependence. The proposed method is highly generalized and independent of imaging modalities, which provides a general regression framework that can be extensively used for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance with a correlation coefficient of up to 0.921 with ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac functions. Copyright © 2016 Elsevier

  17. Regression to fuzziness method for estimation of remaining useful life in power plant components

    Science.gov (United States)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against the data-based simple linear regression model used for predictions, which was shown to perform equally to or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
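The plain linear-regression baseline that the paper benchmarks against can be sketched as follows: fit a line to a degradation parameter over time, extrapolate to the failure threshold, and report the RUL as the gap between the estimated failure time and the latest measurement time. The degradation values and failure threshold below are hypothetical, not turbine data from the study.

```python
# Simple linear-regression RUL baseline: extrapolate a fitted degradation
# trend to a failure threshold. All numbers are illustrative.

def linear_fit(t, y):
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((a - tbar) * (b - ybar) for a, b in zip(t, y))
             / sum((a - tbar) ** 2 for a in t))
    return ybar - slope * tbar, slope          # intercept, slope

def remaining_useful_life(t, y, failure_level):
    intercept, slope = linear_fit(t, y)
    t_fail = (failure_level - intercept) / slope   # time the trend hits failure
    return t_fail - t[-1]                          # RUL from the last measurement

hours = [0.0, 100.0, 200.0, 300.0]
wear = [0.10, 0.20, 0.30, 0.40]    # degradation parameter (hypothetical)
rul = remaining_useful_life(hours, wear, failure_level=1.0)
```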

  18. Estimation of Geographically Weighted Regression Case Study on Wet Land Paddy Productivities in Tulungagung Regency

    Directory of Open Access Journals (Sweden)

    Danang Ariyanto

    2017-11-01

    Regression is a method that connects independent and dependent variables, with estimated parameters as output. A principal problem with this method is its application to spatial data. The Geographically Weighted Regression (GWR) method is used to solve this problem. GWR is a regression technique that extends the traditional regression framework by allowing the estimation of local rather than global parameters. In other words, GWR runs a regression for each location, instead of a sole regression for the entire study area. The purpose of this research is to analyze the factors influencing wet land paddy productivities in Tulungagung Regency. The method used in this research is GWR, using a cross-validation bandwidth and weights given by an adaptive Gaussian kernel function. This research uses four variables presumed to affect wet land paddy productivities: the rate of rainfall (X1), the average cost of fertilizer per hectare (X2), the average cost of pesticides per hectare (X3) and the allocation of subsidized NPK fertilizer of the food crops sub-sector (X4). Based on the results, X1, X2, X3 and X4 each have a different effect in each district. So, to improve the productivity of wet land paddy in Tulungagung Regency, a special policy based on the GWR model in each district is required.
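The core GWR idea described above can be sketched as a separate weighted least-squares fit at each target location, with observations down-weighted by a Gaussian kernel of their distance to that location. This is a minimal sketch only: one predictor for brevity, a fixed rather than adaptive bandwidth, and illustrative coordinates and data (the study uses four predictors and a cross-validated adaptive bandwidth).

```python
# Local weighted least squares with a Gaussian distance kernel: the essence
# of GWR. Coordinates, bandwidth, and data are illustrative.

import math

def gwr_local_fit(target, coords, x, y, bandwidth):
    w = [math.exp(-0.5 * (math.dist(target, c) / bandwidth) ** 2) for c in coords]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return ybar - slope * xbar, slope       # local intercept and slope

coords = [(0, 0), (0, 1), (5, 5), (5, 6)]   # two spatial clusters
x = [1.0, 2.0, 1.0, 2.0]                    # predictor, e.g. fertilizer cost
y = [3.0, 5.0, 1.0, 1.5]                    # response, e.g. paddy productivity
near = gwr_local_fit((0, 0.5), coords, x, y, bandwidth=1.0)
far = gwr_local_fit((5, 5.5), coords, x, y, bandwidth=1.0)
# The local slopes differ between the two clusters, unlike a single global fit.
```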

  19. Estimation of adjusted rate differences using additive negative binomial regression.

    Science.gov (United States)

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
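As context for the additive (rate-difference) scale discussed above, the simplest such quantity is a two-group rate difference from event counts and exposure times with a Wald interval; this sketch is not the paper's ECME algorithm, and the counts below are made up.

```python
# Crude two-group rate difference (events per person-year) with a Wald CI.
# Under a Poisson model, var(rate) = events / person_years^2; overdispersed
# data inflate this variance, which is what the negative binomial model in
# the abstract accounts for.
import math

events = {"treated": 30, "control": 45}
person_years = {"treated": 1000.0, "control": 1000.0}

rate = {g: events[g] / person_years[g] for g in events}
rate_diff = rate["treated"] - rate["control"]

var = sum(events[g] / person_years[g] ** 2 for g in events)
se = math.sqrt(var)
ci = (rate_diff - 1.96 * se, rate_diff + 1.96 * se)
print(rate_diff, ci)
```

Adjusted rate differences from the regression model generalise this crude contrast to hold covariates fixed.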

  20. Extreme interplanetary rotational discontinuities at 1 AU

    Science.gov (United States)

    Lepping, R. P.; Wu, C.-C.

    2005-11-01

    This study is concerned with the identification and description of a special subset of four Wind interplanetary rotational discontinuities (from an earlier study of 134 directional discontinuities by Lepping et al. (2003)) with some "extreme" characteristics, in the sense that every case has (1) an almost planar current sheet surface, (2) a very large discontinuity angle (ω), (3) at least moderately strong normal field components (>0.8 nT), and (4) the overall set has a very broad range of transition layer thicknesses, with one being as thick as 50 RE and another at the other extreme being 1.6 RE, most being much thicker than are usually studied. Each example has a well-determined surface normal (n) according to minimum variance analysis and corroborated via time delay checking of the discontinuity with observations at IMP 8 by employing the local surface planarity. From the variance analyses, most of these cases had unusually large ratios of intermediate-to-minimum eigenvalues (λI/λmin), being on average 32 for three cases (with a fourth being much larger), indicating compact current sheet transition zones, another (the fifth) extreme property. For many years there has been a controversy as to the relative distribution of rotational (RDs) to tangential discontinuities (TDs) in the solar wind at 1 AU (and elsewhere, such as between the Sun and Earth), even to the point where some authors have suggested that RDs with large ∣Bn∣s are probably not generated or, if generated, are unstable and therefore very rare. Some of this disagreement apparently has been due to the different selection criteria used, e.g., some allowed eigenvalue ratios (λI/λmin) to be almost an order of magnitude lower than 32 in estimating n, usually introducing unacceptable error in n and therefore also in ∣Bn∣. However, we suggest that RDs may not be so rare at 1 AU, but good quality cases (where ∣Bn∣ confidently exceeds the error in ∣Bn∣) appear to be uncommon, and further
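The minimum variance analysis (MVA) referred to above estimates the surface normal n as the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue; the intermediate-to-minimum eigenvalue ratio then measures how well-determined n is. A sketch on a synthetic rotational-discontinuity field (the field model and noise level are assumptions):

```python
# MVA sketch: the field rotates in the plane of the discontinuity while the
# normal component stays (nearly) constant, so the normal is the direction
# of minimum variance of B.
import numpy as np

rng = np.random.default_rng(0)
n_true = np.array([0.0, 0.0, 1.0])           # normal along z (assumed)
t = np.linspace(-1, 1, 200)
angle = np.pi / 2 * np.tanh(5 * t)           # large rotation angle omega
B = np.column_stack([np.cos(angle), np.sin(angle), np.full_like(t, 0.9)])
B += 0.01 * rng.standard_normal(B.shape)     # instrument noise (assumed)

cov = np.cov(B.T)                            # 3x3 field covariance
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
n_est = eigvecs[:, 0]                        # minimum-variance direction
ratio = eigvals[1] / eigvals[0]              # intermediate / minimum
print(np.abs(n_est @ n_true), ratio)
```

A large intermediate-to-minimum ratio, as for the events in the study, indicates that the recovered normal (and hence |Bn|) is trustworthy.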

  1. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  2. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    Science.gov (United States)

    Hirsch, Robert M.

    2014-01-01

    It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. This analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can in some cases produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
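One root cause of the bias discussed above is retransformation: exponentiating a log-scale regression prediction estimates the median, not the mean, of a lognormal quantity, so naive back-transformed fluxes are systematically low. A toy demonstration (the distribution parameters are illustrative):

```python
# Retransformation bias demo: for lognormal data, exp(log-scale mean)
# under-predicts the true mean; the correction factor exp(sigma^2 / 2)
# (or a smearing estimator) is needed.
import math
import random

random.seed(42)
mu, sigma = 1.0, 1.0                       # log-scale mean and sd (assumed)
samples = [random.lognormvariate(mu, sigma) for _ in range(200000)]

true_mean = sum(samples) / len(samples)
naive = math.exp(mu)                       # biased back-transform
corrected = math.exp(mu + sigma ** 2 / 2)  # lognormal mean correction

print(naive, corrected, true_mean)
```

The naive estimate falls well below the sample mean, while the corrected one tracks it; heteroscedastic or seasonally varying residuals (conditions 2 and 3 above) make the needed correction non-constant, which is where the simple fix breaks down.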

  3. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
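A stripped-down version of the simulation design described above: draw two samples of beta-distributed proportions with a known mean difference, estimate the difference, and check bias over replications. The mean/precision parameterisation below is the one used in beta regression; the specific parameter values are illustrative.

```python
# Monte Carlo sketch for a two-sample design with beta-distributed responses.
import random

random.seed(1)

def beta_mean_params(mean, phi):
    """Beta(a, b) parameterised by mean and precision phi, as in beta regression."""
    return mean * phi, (1 - mean) * phi

true_diff = 0.60 - 0.50
reps, n = 500, 25                 # N0 = N1 = 25, matching the small-sample case
estimates = []
for _ in range(reps):
    a0, b0 = beta_mean_params(0.50, phi=10.0)
    a1, b1 = beta_mean_params(0.60, phi=10.0)
    s0 = [random.betavariate(a0, b0) for _ in range(n)]
    s1 = [random.betavariate(a1, b1) for _ in range(n)]
    estimates.append(sum(s1) / n - sum(s0) / n)

bias = sum(estimates) / reps - true_diff
print(bias)
```

With equal dispersion in both samples the simple difference-in-means is unbiased, matching the abstract's finding for that scenario; the interesting failures arise when phi differs between groups.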

  4. Organising medication discontinuation: a qualitative study exploring the views of general practitioners toward discontinuing statins.

    Science.gov (United States)

    Nixon, Michael; Kousgaard, Marius Brostrøm

    2016-07-07

    Discontinuing medications is a complex decision making process and an important medical practice. It is a tool in reducing polypharmacy, reducing health system expenditure and improving patient quality of life. Few studies have looked at how general practitioners (GPs) discontinue a medication, in agreement with the patients, from a professional perspective. Three research questions were examined in this study: when does medication discontinuation occur in general practice, how is discontinuing medication handled in the GP's practice and how do GPs make decisions about discontinuing medication? Twenty-four GPs were interviewed using a maximum variation sampling strategy. Participant observations were done in three general practices, for one day each, totalling approximately 30 consultations. The results show that different discontinuation cues (related to the type of consultation, medical records and the patient) create situations of dissonance that can lead to the GP considering the option of discontinuation. We also show that there is a lot of ambiguity in situations of discontinuing and that some GPs trialled discontinuing as a means of generating more information that could be used to deal with the ambiguity. We conclude that the practice of discontinuation should be conceptualised as a continually evaluative process and one that requires sustained reflection through a culture of systematically scheduled check-ups, routinely eliciting the patient's experience of taking drugs and trialling discontinuation. Some policy recommendations are offered, including supporting GPs with lists or handbooks that directly address discontinuation and developing more person-centred clinical guidelines that discuss discontinuation more explicitly.

  5. Support vector regression methodology for estimating global solar radiation in Algeria

    Science.gov (United States)

    Guermoui, Mawloud; Rabehi, Abdelaziz; Gairaa, Kacem; Benkaciali, Said

    2018-01-01

    Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. In this paper we show the possibility of developing a simple model based on Support Vector Regression (SVM-R), which could be used to estimate DGSR on the horizontal surface in Algeria based only on sunshine ratio as input. The SVM model has been developed and tested using a data set recorded over three years (2005-2007). The data was collected at the Applied Research Unit for Renewable Energies (URAER) in Ghardaïa city. The data collected between 2005 and 2006 are used to train the model while the 2007 data are used to test the performance of the selected model. The measured and the estimated values of DGSR were compared statistically during the testing phase using the Root Mean Square Error (RMSE), relative Root Mean Square Error (rRMSE), and correlation coefficient (r2), which amount to 1.59 MJ/m2, 8.46 and 97.4%, respectively. The obtained results show that the SVM-R is highly qualified for DGSR estimation using only sunshine ratio.
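The evaluation statistics quoted above (RMSE, rRMSE, r2) can be computed as below; the measured/estimated values in this sketch are made up, not the Ghardaïa data.

```python
# Standard goodness-of-fit metrics for a radiation (or any) estimation model.
import math

measured  = [10.0, 14.0, 18.0, 22.0, 26.0]
estimated = [10.5, 13.2, 18.4, 21.5, 26.9]

n = len(measured)
rmse = math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / n)
rrmse = 100.0 * rmse / (sum(measured) / n)          # relative RMSE, in %
mbar = sum(measured) / n
ebar = sum(estimated) / n
num = sum((m - mbar) * (e - ebar) for m, e in zip(measured, estimated))
den = math.sqrt(sum((m - mbar) ** 2 for m in measured)
                * sum((e - ebar) ** 2 for e in estimated))
r2 = (num / den) ** 2                                # squared correlation
print(round(rmse, 3), round(rrmse, 2), round(r2, 4))
```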

  6. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  7. Geographically weighted regression as a generalized Wombling to detect barriers to gene flow.

    Science.gov (United States)

    Diniz-Filho, José Alexandre Felizola; Soares, Thannya Nascimento; de Campos Telles, Mariana Pires

    2016-08-01

    Barriers to gene flow play an important role in structuring populations, especially in human-modified landscapes, and several methods have been proposed to detect such barriers. However, most applications of these methods require a relatively large number of individuals or populations distributed in space, connected by vertices from Delaunay or Gabriel networks. Here we show, using both simulated and empirical data, a new application of geographically weighted regression (GWR) to detect such barriers, modeling the genetic variation as a "local" linear function of geographic coordinates (latitude and longitude). In the GWR, standard regression statistics, such as R² and slopes, are estimated for each sampling unit and thus are mapped. Peaks in these local statistics are then expected close to the barriers if genetic discontinuities exist, capturing a higher rate of population differentiation among neighboring populations. Isolation-by-Distance simulations on a longitudinally warped lattice revealed that higher local slopes from GWR coincide with the barrier detected with the Monmonier algorithm. Even with a relatively small effect of the barrier, the power of local GWR in detecting the east-west barriers was higher than 95%. We also analyzed empirical data of genetic differentiation among tree populations of Dipteryx alata and Eugenia dysenterica in the Brazilian Cerrado. GWR was applied to the principal coordinate of the pairwise FST matrix based on microsatellite loci. In both simulated and empirical data, the GWR results were consistent with discontinuities detected by the Monmonier algorithm, as well as with previous explanations for the spatial patterns of genetic differentiation for the two species. Our analyses reveal how this new application of GWR can be viewed as a generalized Wombling in a continuous space and be a useful approach to detect barriers and discontinuities to gene flow.

  8. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  9. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.
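For context on what "the usual quantile regression estimator" minimises: the check ("pinball") loss, whose minimiser over a constant is the sample tau-quantile. A minimal intercept-only illustration (the data values are assumed; this is not the paper's minimax deviance function):

```python
# The check loss underlying standard quantile regression.

def check_loss(u, tau):
    return tau * u if u >= 0 else (tau - 1) * u

def quantile_by_loss(ys, tau):
    """Return the candidate data value minimising total check loss."""
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

data = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0, 6.0]
print(quantile_by_loss(data, 0.5))   # a sample median
print(quantile_by_loss(data, 0.9))   # an upper-tail quantile
```

Replacing the constant with a linear (or nonlinear) function of covariates and minimising the same loss gives the regression quantile process that the minimax estimator above is an alternative to.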

  10. Contraceptive discontinuation and switching among Ghanaian women: evidence from the Ghana Demographic and Health Survey, 2008.

    Science.gov (United States)

    Modey, Emefa J; Aryeetey, Richmond; Adanu, Richard

    2014-03-01

    This study identifies factors associated with contraceptive discontinuation and switching among Ghanaian women of reproductive age, using data from 1,378 female respondents of the 2008 Ghana Demographic and Health Survey. Logistic regression models were used to determine relationships between key socio-demographic factors and user status. Discontinued use occurred among 56% of ever users and switching among 55% of current users. The IUD was the most frequently abandoned method (70%) and its use was associated with almost twice the odds of discontinuation (OR = 1.97; 95% CI (1.04, 3.75)). Having a history of terminated pregnancy significantly predicted both discontinuation (OR = 1.36; 95% CI (1.03, 1.79)) and switching (OR = 1.78; 95% CI (1.16, 2.73)), and intention to limit births significantly predicted lower discontinuation (OR = 0.71; 95% CI (0.52, 0.96)). Counseling services emphasizing contraceptive options and reinforcing switching are critically needed to reduce unwanted pregnancies that may result from poor method use and discontinuation, especially among post-abortion clients and IUD users.

  11. Risk factors for early treatment discontinuation in patients with obsessive-compulsive disorder

    Directory of Open Access Journals (Sweden)

    Juliana Belo Diniz

    2011-01-01

    Full Text Available INTRODUCTION: In obsessive-compulsive disorder, early treatment discontinuation can hamper the effectiveness of first-line treatments. OBJECTIVE: This study aimed to investigate the clinical correlates of early treatment discontinuation among obsessive-compulsive disorder patients. METHODS: A group of patients who stopped taking selective serotonin reuptake inhibitors (SSRIs) or stopped participating in cognitive behavioral therapy before completion of the first twelve weeks (total n = 41; n = 16 for cognitive behavioral therapy and n = 25 for SSRIs) were compared with a paired sample of compliant patients (n = 41). Demographic and clinical characteristics were obtained at baseline using structured clinical interviews. Chi-square and Mann-Whitney tests were used when indicated. Variables presenting a p value <0.15 for the difference between groups were selected for inclusion in a logistic regression analysis that used an interaction model with treatment dropout as the response variable. RESULTS: Agoraphobia was only present in one (2.4%) patient who completed the twelve-week therapy, whereas it was present in six (15.0%) patients who dropped out (p = 0.044). Social phobia was present in eight (19.5%) patients who completed the twelve-week therapy and eighteen (45%) patients who dropped out (p = 0.014). Generalized anxiety disorder was present in eight (19.5%) patients who completed the twelve-week therapy and twenty (50%) dropouts (p = 0.004), and somatization disorder was not present in any of the patients who completed the twelve-week therapy; however, it was present in six (15%) dropouts (p = 0.010). According to the logistic regression model, treatment modality (p = 0.05), agoraphobia, the Brown Assessment of Beliefs Scale scores (p = 0.03) and the Beck Anxiety Inventory scores (p = 0.02) were significantly associated with the probability of treatment discontinuation irrespective of interactions with other variables. DISCUSSION AND CONCLUSION: Early treatment

  12. Estimation of genetic parameters related to eggshell strength using random regression models.

    Science.gov (United States)

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
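Legendre polynomial covariates of the kind used in the RRM above can be built with Bonnet's recursion after mapping the test week onto the standard interval [-1, 1]; the week range in this sketch is an assumption, not the study's.

```python
# Legendre polynomial basis for random regression covariates.

def legendre(order, x):
    """P_0..P_order evaluated at x in [-1, 1] via Bonnet's recursion."""
    p = [1.0, x]
    for n in range(1, order):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: order + 1]

def scale(t, t_min, t_max):
    """Map a test week t onto the standard interval [-1, 1]."""
    return -1.0 + 2.0 * (t - t_min) / (t_max - t_min)

x = scale(40, 20, 60)      # test week 40 of an assumed 20-60 week range
print(legendre(3, x))      # [P0, P1, P2, P3] at the scaled week
```

Second- and third-order versions of this basis are exactly what parameterise the genetic and permanent-environment curves in the abstract.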

  13. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  14. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.

  15. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
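A quick diagnostic in the spirit of the abstract: under a Poisson model the Pearson statistic divided by its residual degrees of freedom should be near 1, and values well above 1 signal overdispersion. The counts and fitted means below are illustrative, not the study's data.

```python
# Pearson dispersion statistic for a fitted count model.
observed = [2, 0, 7, 1, 12, 3, 0, 9, 1, 15]
fitted   = [3.0, 2.5, 4.0, 3.0, 5.0, 3.5, 2.0, 4.5, 2.5, 6.0]

n_params = 2                                  # assumed number of model parameters
pearson = sum((o - m) ** 2 / m for o, m in zip(observed, fitted))
dispersion = pearson / (len(observed) - n_params)
print(dispersion)  # values well above 1 suggest overdispersion
```

When this ratio is large, quasi-likelihood scaling, robust standard errors, or a negative binomial model — the corrections compared in the paper — are the usual remedies.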

  16. In search of a corrected prescription drug elasticity estimate: a meta-regression approach.

    Science.gov (United States)

    Gemmill, Marin C; Costa-Font, Joan; McGuire, Alistair

    2007-06-01

    An understanding of the relationship between cost sharing and drug consumption depends on consistent and unbiased price elasticity estimates. However, there is wide heterogeneity among studies, which constrains the applicability of elasticity estimates for empirical purposes and policy simulation. This paper attempts to provide a corrected measure of the drug price elasticity by employing meta-regression analysis (MRA). The results indicate that the elasticity estimates are significantly different from zero, and the corrected elasticity is -0.209 when the results are made robust to heteroskedasticity and clustering of observations. Elasticity values are higher when the study was published in an economic journal, when the study employed a greater number of observations, and when the study used aggregate data. Elasticity estimates are lower when the institutional setting was a tax-based health insurance system.

  17. The Roots of Inequality: Estimating Inequality of Opportunity from Regression Trees

    DEFF Research Database (Denmark)

    Brunori, Paolo; Hufe, Paul; Mahler, Daniel Gerszon

    2017-01-01

    the risk of arbitrary and ad-hoc model selection. Second, they provide a standardized way of trading off upward and downward biases in inequality of opportunity estimations. Finally, regression trees can be graphically represented; their structure is immediate to read and easy to understand. This will make the measurement of inequality of opportunity more easily comprehensible to a large audience. These advantages are illustrated by an empirical application based on the 2011 wave of the European Union Statistics on Income and Living Conditions.

  18. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The Discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single sinusoid frequency estimation. As the signal to noise ratio (SNR) increases above a threshold value, its mean square error (MSE) will lie very close to the Cramer-Rao lower bound (CRLB), which is dependent on the number of DFT points. However, its MSE performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) can not only retain excellent capabilities for generalizing and fitting but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm makes a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
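The coarse DFT-based stage the paper builds on can be sketched as below: locate the peak bin of the DFT of a sampled sinusoid, then refine it by interpolating on the neighbouring bin magnitudes. Parabolic interpolation stands in here for the paper's LS-SVR interpolator, and the signal parameters are assumed.

```python
# Coarse-to-fine frequency estimation: DFT peak bin plus a simple
# interpolation refinement on the neighbouring magnitudes.
import cmath
import math

N = 64
f_true = 12.3                     # frequency in DFT bins (assumed)
x = [math.cos(2 * math.pi * f_true * n / N) for n in range(N)]

def dft_mag(signal, k):
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(signal))
                   for n, s in enumerate(signal)))

mags = [dft_mag(x, k) for k in range(N // 2)]
k0 = max(range(1, N // 2 - 1), key=lambda k: mags[k])   # coarse ML estimate
a, b, c = mags[k0 - 1], mags[k0], mags[k0 + 1]
delta = 0.5 * (a - c) / (a - 2 * b + c)                 # parabolic refinement
f_est = k0 + delta
print(k0, f_est)
```

Parabolic interpolation on an unwindowed spectrum is noticeably biased; the paper's point is that a learned interpolator (LS-SVR) on the Fourier coefficients can approach the CRLB more cheaply than increasing the number of DFT points.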

  19. Mammographic density changes following discontinuation of tamoxifen in premenopausal women with oestrogen receptor-positive breast cancer.

    Science.gov (United States)

    Kim, Won Hwa; Cho, Nariya; Kim, Young-Seon; Yi, Ann

    2018-04-06

    To evaluate the changes in mammographic density after tamoxifen discontinuation in premenopausal women with oestrogen receptor-positive breast cancers and the underlying factors. METHODS: A total of 213 consecutive premenopausal women with breast cancer who received tamoxifen treatment after curative surgery and underwent three mammograms (baseline, after tamoxifen treatment, after tamoxifen discontinuation) were included. Changes in mammographic density after tamoxifen discontinuation were assessed qualitatively (decrease, no change, or increase) by two readers and measured quantitatively by semi-automated software. The association between % density change and clinicopathological factors was evaluated using univariate and multivariate regression analyses. After tamoxifen discontinuation, a mammographic density increase was observed in 31.9% (68/213, reader 1) to 22.1% (47/213, reader 2) by qualitative assessment, with a mean density increase of 1.8% by quantitative assessment compared to density before tamoxifen discontinuation. In multivariate analysis, younger age (≤ 39 years) and greater % density decline after tamoxifen treatment (≥ 17.0%) were independent factors associated with density change after tamoxifen discontinuation (p density change with a mean density increase of 1.8%, which was associated with younger age and greater density change after tamoxifen treatment. • Increased mammographic density after tamoxifen discontinuation can occur in premenopausal women. • Mean density increase after tamoxifen discontinuation was 1.8%. • Density increase is associated with age and density decrease after tamoxifen.

  20. System dynamics with interaction discontinuity

    CERN Document Server

    Luo, Albert C J

    2015-01-01

    This book describes system dynamics with discontinuity caused by system interactions and presents the theory of flow singularity and switchability at the boundary in discontinuous dynamical systems. Based on such a theory, the authors address dynamics and motion mechanism of engineering discontinuous systems due to interaction. Stability and bifurcations of fixed points in nonlinear discrete dynamical systems are presented, and mapping dynamics are developed for analytical predictions of periodic motions in engineering discontinuous dynamical systems. Ultimately, the book provides an alternative way to discuss the periodic and chaotic behaviors in discontinuous dynamical systems.

  1. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    Bayesian method is a method that can be used to estimate the parameters of multivariate multiple regression model. Bayesian method has two distributions, there are prior and posterior distributions. Posterior distribution is influenced by the selection of prior distribution. Jeffreys’ prior distribution is a kind of Non-informative prior distribution. This prior is used when the information about parameter not available. Non-informative Jeffreys’ prior distribution is combined with the sample information resulting the posterior distribution. Posterior distribution is used to estimate the parameter. The purposes of this research is to estimate the parameters of multivariate regression model using Bayesian method with Non-informative Jeffreys’ prior distribution. Based on the results and discussion, parameter estimation of β and Σ which were obtained from expected value of random variable of marginal posterior distribution function. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart. However, in calculation of the expected value involving integral of a function which difficult to determine the value. Therefore, approach is needed by generating of random samples according to the posterior distribution characteristics of each parameter using Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
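The sampling scheme described in this abstract, alternating draws from the matrix-normal conditional of the regression coefficients and the inverse-Wishart conditional of the error covariance under a non-informative Jeffreys prior, can be sketched as follows. This is a minimal illustration on simulated data, not the paper's implementation; the dimensions, the simulated values, and the iteration counts are assumptions:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Simulate multivariate multiple regression data: Y = X B + E, rows of E ~ N(0, Sigma)
n, k, m = 200, 3, 2          # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
B_true = np.array([[1.0, -0.5], [2.0, 0.0], [0.0, 1.5]])
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
Y = X @ B_true + rng.multivariate_normal(np.zeros(m), Sigma_true, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y            # OLS estimate, centre of the conditional for B

def gibbs(n_iter=2000, burn=500):
    """Gibbs sampler for (B, Sigma) under the Jeffreys prior p(B, Sigma) ∝ |Sigma|^-(m+1)/2."""
    Sigma = np.eye(m)
    draws = []
    A = np.linalg.cholesky(XtX_inv)  # row-covariance factor of B | Sigma
    for it in range(n_iter):
        # B | Sigma, Y ~ matrix normal (B_hat, (X'X)^-1, Sigma)
        L = np.linalg.cholesky(Sigma)
        B = B_hat + A @ rng.normal(size=(k, m)) @ L.T
        # Sigma | B, Y ~ inverse Wishart(n, residual cross-product)
        R = Y - X @ B
        Sigma = invwishart(df=n, scale=R.T @ R).rvs(random_state=rng)
        if it >= burn:
            draws.append(B)
    return np.mean(draws, axis=0)    # posterior mean estimate of B

B_post = gibbs()
```

The posterior mean of B recovered this way should sit close to the OLS estimate, since the Jeffreys prior contributes no information of its own.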

  2. Predictors of premature discontinuation of outpatient treatment after discharge of patients with posttraumatic stress disorder.

    Science.gov (United States)

    Wang, Hee Ryung; Woo, Young Sup; Jun, Tae-Youn; Bahk, Won-Myong

    2015-01-01

    This study aimed to examine the sociodemographic and disease-related variables associated with the premature discontinuation of psychiatric outpatient treatment after discharge among patients with noncombat-related posttraumatic stress disorder. We retrospectively reviewed the medical records of patients who were discharged with a diagnosis of posttraumatic stress disorder. Fifty-five percent of subjects (57/104) prematurely discontinued outpatient treatment within 6 months of discharge. Comparing sociodemographic variables between the 6-month non-follow-up group and 6-month follow-up group, there were no variables that differed between the two groups. However, comparing disease-related variables, the 6-month follow-up group showed a longer hospitalization duration and higher Global Assessment of Function score at discharge. The logistic regression analysis showed that a shorter duration of hospitalization predicted premature discontinuation of outpatient treatment within 6 months of discharge. The duration of psychiatric hospitalization for posttraumatic stress disorder appeared to influence the premature discontinuation of outpatient treatment after discharge.

  3. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...

  4. Estimation of Stature from Foot Dimensions and Stature among South Indian Medical Students Using Regression Models

    Directory of Open Access Journals (Sweden)

    Rajesh D. R

    2015-01-01

    Background: At times fragments of soft tissues are found disposed of in the open, in ditches at the crime scene, and the same are brought to forensic experts for the purpose of identification, and such cases pose a real challenge. Objectives: This study was aimed at developing a methodology which could help in personal identification by studying the relation between foot dimensions and stature among south Indian subjects using regression models. Material and Methods: Stature and foot length of 100 subjects (age range 18-22 years) were measured. Linear regression equations for stature estimation were calculated. Result: The correlation coefficients between stature and foot lengths were found to be positive and statistically significant. Height = 98.159 + 3.746 × FLRT (r = 0.821) and Height = 91.242 + 3.284 × FLRT (r = 0.837) are the regression formulas from foot lengths for males and females respectively. Conclusion: The regression equations derived in the study can be used reliably for estimation of stature in a diverse population group and thus would be of immense value in the field of personal identification, especially from mutilated bodies or fragmentary remains.
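A minimal sketch applying the two reported equations. The assumption here, not stated in the abstract, is that FLRT denotes right-foot length in centimetres and that stature is returned in centimetres:

```python
def estimated_stature(flrt_cm, sex):
    """Stature (cm) from right-foot length FLRT (cm), using the study's equations."""
    if sex == "male":
        return 98.159 + 3.746 * flrt_cm   # r = 0.821
    elif sex == "female":
        return 91.242 + 3.284 * flrt_cm   # r = 0.837
    raise ValueError("sex must be 'male' or 'female'")

print(round(estimated_stature(24.0, "male"), 1))   # -> 188.1
```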

  5. Skeletal height estimation from regression analysis of sternal lengths in a Northwest Indian population of Chandigarh region: a postmortem study.

    Science.gov (United States)

    Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H

    2011-03-20

    Skeletal height estimation from regression analysis of eight sternal lengths in the subjects of Chandigarh zone of Northwest India is the topic of discussion in this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and first four intercostal lengths of mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were greater in North Indians and males than in South Indians and females. Except intercostal lengths, all the sternal lengths were positively correlated with stature of the deceased in both sexes. Multivariate regression analysis of sternal lengths was found more useful than linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes and the length of manubrium along with 2nd and 3rd intercostal lengths of mesosternum in males were selected as best estimators of stature. Nonetheless, the stature of males can be predicted with SEE of 6.66 (R(2) = 0.16, r = 0.318) from the combination of MBL+BL_3+LM+BL_2, and in females from MBL only it can be estimated with SEE of 6.65 (R(2) = 0.10, r = 0.318), whereas from the multiple regression analysis of pooled data, stature can be known with SEE of 6.97 (R(2) = 0.387, r = 0.575) from the combination of MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were found to be statistically significant for almost all the variables in both sexes, except the 4th intercostal length in males and the 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation. The universal regression analysis used by Kanchan et al. [39], when applied to sternal lengths, gave satisfactory estimates of stature for males only; female stature was comparatively better estimated from simple linear regressions.

  6. Estimation of monthly solar exposure on horizontal surface by Angstrom-type regression equation

    International Nuclear Information System (INIS)

    Ravanshid, S.H.

    1981-01-01

    To obtain solar flux intensity, solar radiation measuring instruments are the best. In the absence of instrumental data there are other meteorological measurements which are related to solar energy, and it is also possible to use empirical relationships to estimate solar flux intensity. One of these empirical relationships to estimate monthly averages of total solar radiation on a horizontal surface is the modified Angstrom-type regression equation, which has been employed in this report in order to estimate the solar flux intensity on a horizontal surface for Tehran. By comparing the results of this equation with four years of measured values from Tehran's meteorological weather station, the values of the meteorological constants (a, b) in the equation were obtained for Tehran. (author)
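The Angstrom-type regression relates the clearness index H/H0 to the sunshine fraction n/N via H/H0 = a + b(n/N), so the constants (a, b) can be recovered by ordinary least squares on monthly data. A sketch on invented monthly values (not Tehran's data):

```python
import numpy as np

# Hypothetical monthly values for one site: sunshine fraction n/N and
# clearness index H/H0 (both dimensionless). Illustrative only.
sunshine_frac = np.array([0.55, 0.60, 0.62, 0.68, 0.74, 0.80,
                          0.82, 0.79, 0.72, 0.64, 0.58, 0.52])
clearness = np.array([0.48, 0.51, 0.52, 0.56, 0.60, 0.64,
                      0.65, 0.63, 0.58, 0.53, 0.49, 0.46])

# Fit H/H0 = a + b * (n/N) by ordinary least squares.
b, a = np.polyfit(sunshine_frac, clearness, 1)
print(f"a = {a:.3f}, b = {b:.3f}")

# Estimate monthly solar exposure H from extraterrestrial radiation H0.
H0 = 25.0                                 # MJ m^-2 day^-1, illustrative
H = H0 * (a + b * sunshine_frac[5])       # June, say
```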

  7. Estimation of error components in a multi-error linear regression model, with an application to track fitting

    International Nuclear Information System (INIS)

    Fruehwirth, R.

    1993-01-01

    We present an estimation procedure of the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)

  8. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.

    2015-11-21

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two and three dimensions test problems.

  9. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.; Ayuso de Dios, Blanca; Mazzieri, Ilario; Quarteroni, Alfio

    2015-01-01

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two and three dimensions test problems.

  10. A comparison of the performances of an artificial neural network and a regression model for GFR estimation.

    Science.gov (United States)

    Liu, Xun; Li, Ning-shan; Lv, Lin-sheng; Huang, Jian-hua; Tang, Hua; Chen, Jin-xia; Ma, Hui-juan; Wu, Xiao-ming; Lou, Tan-qi

    2013-12-01

    Accurate estimation of glomerular filtration rate (GFR) is important in clinical practice. Current models derived from regression are limited by the imprecision of GFR estimates. We hypothesized that an artificial neural network (ANN) might improve the precision of GFR estimates. A study of diagnostic test accuracy. 1,230 patients with chronic kidney disease were enrolled, including the development cohort (n=581), internal validation cohort (n=278), and external validation cohort (n=371). Estimated GFR (eGFR) using a new ANN model and a new regression model using age, sex, and standardized serum creatinine level derived in the development and internal validation cohorts, and the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) 2009 creatinine equation. Measured GFR (mGFR). GFR was measured using a diethylenetriaminepentaacetic acid renal dynamic imaging method. Serum creatinine was measured with an enzymatic method traceable to isotope-dilution mass spectrometry. In the external validation cohort, mean mGFR was 49±27 (SD) mL/min/1.73 m2; biases (median difference between mGFR and eGFR) for the CKD-EPI, new regression, and new ANN models were 0.4, 1.5, and -0.5 mL/min/1.73 m2, respectively, and accuracies (percentages of estimates deviating no more than 30% from mGFR) were 50.9%, 77.4%, and 78.7%, respectively. Limitations include a potential source of systematic bias in comparisons of new models to CKD-EPI, and both the derivation and validation cohorts consisted of a group of patients who were referred to the same institution. An ANN model using 3 variables did not perform better than a new regression model. Whether ANN can improve GFR estimation using more variables requires further investigation. Copyright © 2013 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
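The two headline metrics in this abstract, bias (median difference between mGFR and eGFR) and accuracy within 30% of mGFR, are straightforward to compute; a small sketch with invented values, not the study data:

```python
import numpy as np

def gfr_metrics(mgfr, egfr):
    """Bias (median of mGFR - eGFR) and P30 accuracy (% of estimates
    within 30% of measured GFR), the metrics reported in the abstract."""
    mgfr = np.asarray(mgfr, float)
    egfr = np.asarray(egfr, float)
    bias = float(np.median(mgfr - egfr))
    p30 = float(np.mean(np.abs(egfr - mgfr) <= 0.30 * mgfr) * 100)
    return bias, p30

# Illustrative values only (mL/min/1.73 m2)
mgfr = [30, 45, 60, 80, 100]
egfr = [28, 50, 55, 90, 95]
bias, p30 = gfr_metrics(mgfr, egfr)
print(bias, p30)   # -> 2.0 100.0
```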

  11. Estimation of lung tumor position from multiple anatomical features on 4D-CT using multiple regression analysis.

    Science.gov (United States)

    Ono, Tomohiro; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ono, Yuka; Ishigaki, Takashi; Hiraoka, Masahiro

    2017-09-01

    To estimate the lung tumor position from multiple anatomical features on four-dimensional computed tomography (4D-CT) data sets using single regression analysis (SRA) and multiple regression analysis (MRA) approaches, and to evaluate the impact of the approach on internal target volume (ITV) for stereotactic body radiotherapy (SBRT) of the lung. Eleven consecutive lung cancer patients (12 cases) whose three-dimensional (3D) lung tumor motion exceeded 5 mm underwent 4D-CT scanning. The 3D tumor position and anatomical features, including lung volume and diaphragm, abdominal wall, and chest wall positions, were measured on 4D-CT images. The tumor position was estimated by SRA using each anatomical feature and by MRA using all anatomical features. The difference between the actual and estimated tumor positions was defined as the root-mean-square error (RMSE). A standard partial regression coefficient for the MRA was evaluated. The 3D lung tumor position showed a high correlation with the lung volume (R = 0.92 ± 0.10). Additionally, ITVs derived from the SRA and MRA approaches were compared with the ITV derived from contouring gross tumor volumes on all 10 phases of the 4D-CT (conventional ITV). The RMSE of the SRA was within 3.7 mm in all directions, and the RMSE of the MRA was within 1.6 mm in all directions. The standard partial regression coefficient for the lung volume was the largest and had the most influence on the estimated tumor position. Compared with conventional ITV, the average percentage decreases of ITV were 31.9% and 38.3% using the SRA and MRA approaches, respectively. The estimation accuracy of lung tumor position was improved by the MRA approach, which provided a smaller ITV than conventional ITV. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
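The MRA step amounts to a least-squares fit of tumour position on the anatomical features across the 4D-CT phases, with RMSE as the error metric. A schematic version on simulated phases; the feature values and coefficients below are invented, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4D-CT phases: four anatomical features (lung volume and
# diaphragm / abdominal wall / chest wall positions) -> tumour position (one axis).
n_phases = 10
features = rng.normal(size=(n_phases, 4))
true_coef = np.array([3.0, 1.0, 0.5, 0.2])   # invented; lung volume dominates
position = features @ true_coef + rng.normal(scale=0.1, size=n_phases)

# MRA: ordinary least squares on all features, with an intercept
X = np.column_stack([np.ones(n_phases), features])
coef, *_ = np.linalg.lstsq(X, position, rcond=None)
estimated = X @ coef

# Root-mean-square error between actual and estimated positions
rmse = float(np.sqrt(np.mean((position - estimated) ** 2)))
```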

  12. On the stability of rotational discontinuities

    International Nuclear Information System (INIS)

    Richter, P.; Scholer, M.

    1989-01-01

    The stability of symmetric rotational discontinuities in which the magnetic field rotates by 180 degrees is investigated by means of a one-dimensional self-consistent hybrid code. Rotational discontinuities with an angle Θ > 45 degrees between the discontinuity normal direction and the upstream magnetic field are found to be relatively stable. The discontinuity normal is in the x direction and the initial magnetic field has a finite y component only in the transition region. In the case of the ion (left-handed) sense of rotation of the tangential magnetic field, the transition region does not broaden with time. In the case of the electron (right-handed) sense of rotation, a damped wavetrain builds up in the B y component downstream of the rotational discontinuity and the discontinuity broadens with time. Rotational discontinuities with smaller angles, Θ, are unstable. Examples for a rotational discontinuity with Θ = 30 degrees and the electron sense of rotation, as well as a rotational discontinuity with Θ = 15 degrees and the ion sense of rotation, show that these discontinuities disintegrate into waves. These waves travel approximately with the Alfven velocity in the upstream direction and are therefore phase standing in the simulation system. The magnetic hodograms of these disintegrated discontinuities are S-shaped. The upstream portion of the hodogram is always right-handed; the downstream portion is always left-handed.

  13. Estimation of pyrethroid pesticide intake using regression modeling of food groups based on composite dietary samples

    Data.gov (United States)

    U.S. Environmental Protection Agency — Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression...

  14. A regressive methodology for estimating missing data in rainfall daily time series

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    The presence of gaps in environmental data time series represents a very common but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plague almost all surveys; the problem is how to deal with them once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of any recovery approach is the evaluation of the missingness mechanism. When missingness is conditioned by some other variable observed in the data set (Schafer, 1997), the mechanism is called MAR (Missing at Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NMAR (Not Missing at Random); the latter is the most difficult condition to model. In the last decade interest arose in the estimation of missing data by using regression (single imputation). More recently, multiple imputation has also become available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (the target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among the different methods useful for defining the degree of similarity, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once the similarity matrix is defined, a suitable nonparametric, univariate, regressive method is applied to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology proved rather reliable, an improvement of the missing-data estimation can be achieved by a generalization. A first possible improvement consists in extending the univariate technique to
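The Theil estimator at the core of the methodology is the median of all pairwise slopes between jointly observed values. A hypothetical sketch of using it to fill gaps in a target station from a similar donor station; the function names and data below are invented for illustration:

```python
import numpy as np
from itertools import combinations

def theil_slope(x, y):
    """Median of all pairwise slopes (Theil, 1950): a robust, nonparametric fit."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    return float(np.median(slopes))

def impute_missing(target, donor):
    """Fill NaN gaps in `target` using a Theil regression fitted on the days
    where both the target and the (most similar) donor station were observed."""
    target = np.asarray(target, float)
    donor = np.asarray(donor, float)
    ok = ~np.isnan(target) & ~np.isnan(donor)
    b = theil_slope(donor[ok], target[ok])
    a = float(np.median(target[ok] - b * donor[ok]))   # robust intercept
    filled = target.copy()
    gaps = np.isnan(target) & ~np.isnan(donor)
    filled[gaps] = a + b * donor[gaps]
    return filled

donor = np.array([10.0, 0.0, 5.0, 20.0, 8.0])       # similar station, complete
target = np.array([12.0, 1.0, np.nan, 23.0, 10.0])  # target station with a gap
filled_target = impute_missing(target, donor)
```

Because both the slope and the intercept are medians, a few outlying rainfall values in either station barely move the imputed estimates, which is the reason a robust estimator is preferred here over ordinary least squares.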

  15. Discontinuity Preserving Image Registration through Motion Segmentation: A Primal-Dual Approach

    Directory of Open Access Journals (Sweden)

    Silja Kiriyanthan

    2016-01-01

    Image registration is a powerful tool in medical image analysis and facilitates the clinical routine in several aspects. There are many well-established elastic registration methods, but none of them can so far preserve discontinuities in the displacement field. These discontinuities appear in particular at organ boundaries during breathing-induced organ motion. In this paper, we exploit the fact that motion segmentation could play a guiding role during discontinuity preserving registration. The motion segmentation is embedded in a continuous cut framework guaranteeing convexity for motion segmentation. Furthermore we show that a primal-dual method can be used to estimate a solution to this challenging variational problem. Experimental results are presented for MR images with apparent breathing-induced sliding motion of the liver along the abdominal wall.

  16. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  17. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  18. Sleep discontinuity and impaired sleep continuity affect transition to and from obesity over time: results from the Alameda county study.

    Science.gov (United States)

    Nordin, Maria; Kaplan, Robert M

    2010-03-01

    To investigate the impact of changes in sleep continuity on the transition to and from obesity over time. The study used self-reported sleep and body mass index (BMI) measures from the 1965, 1974, 1983, and 1994 waves of the longitudinal Alameda County Study. Sleep continuity was assessed by a question on whether the participants had any trouble falling or staying asleep. Changes in sleep and BMI were estimated from the sleep and BMI questions in 1965 and 1994, respectively. Multinomial regression analyses were used to examine the risk/chance of a transition to and from obesity (BMI ≥ 30 kg/m²) due to changes in sleep continuity. After adjustment for confounding variables, consistent sleep discontinuity both increases the risk of a transition to obesity and reduces the chance of losing weight, whereas impaired sleep continuity lowers the chance of weight loss. Effects for obesity were non-significant for those with improved sleep continuity. Consistent sleep discontinuity and impaired sleep continuity increase the risk of transitioning to obesity or remaining obese.
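A multinomial regression of a three-level weight-transition outcome on a sleep-discontinuity indicator can be sketched as below. This is a schematic analogue on simulated data using scikit-learn's multinomial logistic regression, not the Alameda County analysis; all variable names and effect sizes are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Invented three-level outcome: 0 = stayed non-obese, 1 = became obese,
# 2 = lost weight; sleep_disc = 1 marks consistent sleep discontinuity.
n = 1000
sleep_disc = rng.integers(0, 2, n)
age = rng.normal(50, 10, n)    # extra covariate with no true effect here

# True multinomial logits (class 0 is the reference)
logits = np.column_stack([np.zeros(n),
                          -2.0 + 1.0 * sleep_disc,    # discontinuity raises obesity risk
                          -1.5 - 0.8 * sleep_disc])   # and lowers the chance of weight loss
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(3, p=p) for p in probs])

X = np.column_stack([sleep_disc, (age - age.mean()) / age.std()])
model = LogisticRegression(max_iter=1000).fit(X, outcome)   # multinomial for 3 classes
```

The fitted coefficient for sleep discontinuity on the "became obese" class should come out positive, mirroring the direction of effect the abstract reports.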

  19. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  20. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    Science.gov (United States)

    2017-12-01

    Fig. 2 Simulation method: the process for one iteration of the simulation; it was repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.

  1. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  2. Evaluation of Regression and Neuro_Fuzzy Models in Estimating Saturated Hydraulic Conductivity

    Directory of Open Access Journals (Sweden)

    J. Behmanesh

    2015-06-01

    Study of soil hydraulic properties such as saturated and unsaturated hydraulic conductivity is required in environmental investigations. Despite numerous research efforts, measuring saturated hydraulic conductivity by direct methods is still costly, time consuming and demanding of expertise. Therefore, estimating saturated hydraulic conductivity using rapid and low-cost methods such as pedo-transfer functions with acceptable accuracy has been developed. The purpose of this research was to compare and evaluate 11 pedo-transfer functions and an Adaptive Neuro-Fuzzy Inference System (ANFIS) for estimating the saturated hydraulic conductivity of soil. To this end, saturated hydraulic conductivity and physical properties were determined at 40 points in Urmia. The excavated soil was used in the lab to determine its easily accessible parameters. The results showed that among the existing models, the Aimrun et al. model gave the best estimate of soil saturated hydraulic conductivity. For the mentioned model, the Root Mean Square Error and Mean Absolute Error were 0.174 and 0.028 m/day respectively. The results of the present research emphasise the importance of effective porosity as an important, accessible parameter for the accuracy of pedo-transfer functions. Sand and silt percent, bulk density and soil particle density were selected for application in 561 ANFIS models. In the training phase of the best ANFIS model, the R2 and RMSE were calculated as 1 and 1.2×10-7 respectively. These amounts in the test phase were 0.98 and 0.0006 respectively. Comparison of regression and ANFIS models showed that the ANFIS model gave better results than the regression functions. The Neuro-Fuzzy Inference System also had the capability to estimate with high accuracy in various soil textures.

  3. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT, and quantile regression was then employed to illustrate the dynamic relationships between AIT and the selected variables at distinct quantiles. A total of 1,060 patients were analyzed. First- and second-year residents (R1-R2) required longer AIT than third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≥ III; arterial, central venous and epidural catheterization; and use of bronchoscopy. Presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
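The second step, quantile regression, minimises the pinball (check) loss and can be solved exactly as a linear program. The sketch below fits median and 90th-percentile lines to simulated heteroscedastic "induction times"; the data and the from-scratch LP formulation are illustrative assumptions, not the study's software:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, q=0.5):
    """Fit linear quantile regression by LP: minimise q*sum(u) + (1-q)*sum(v)
    subject to X beta + u - v = y with u, v >= 0 (u, v are the +/- residual parts)."""
    n, k = X.shape
    # Decision vector: [beta (k, free), u (n), v (n)]
    c = np.concatenate([np.zeros(k), q * np.ones(n), (1 - q) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(0, 10, n)
# Heteroscedastic "induction time": the spread grows with x, so the
# upper quantile line is steeper than the median line.
y = 5 + 2 * x + rng.normal(scale=1 + 0.5 * x)
X = np.column_stack([np.ones(n), x])

b50 = quantile_regression(X, y, q=0.5)   # median line
b90 = quantile_regression(X, y, q=0.9)   # 90th-percentile line
```

This is the sense in which quantile regression gives a "more comprehensive view": the two fitted lines diverge, quantifying how much extra time the slowest inductions need as the covariate grows.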

  4. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  5. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.

  6. Estimation of Stature from Footprint Anthropometry Using Regression Analysis: A Study on the Bidayuh Population of East Malaysia

    Directory of Open Access Journals (Sweden)

    T. Nataraja Moorthy

    2015-05-01

Full Text Available The human foot has been studied for a variety of reasons, i.e., for forensic as well as non-forensic purposes, by anatomists, forensic scientists, anthropologists, physicians, podiatrists, and numerous other groups. An aspect of human identification that has received scant attention from forensic anthropologists is the study of human feet and the footprints made by the feet. The present study, conducted during 2013-2014, aimed to derive population-specific regression equations to estimate stature from the footprint anthropometry of indigenous adult Bidayuhs in East Malaysia. The study sample consisted of 480 bilateral footprints collected using a footprint kit from 240 Bidayuhs (120 males and 120 females) who consented to taking part in the study. Their ages ranged from 18 to 70 years. Stature was measured using a portable body meter device (SECA model 206). The data were analyzed using PASW Statistics version 20. In this investigation, good results were obtained in terms of the correlation coefficient (R) between stature and the various footprint measurements and in the regression analysis estimating stature. The R values showed a positive and statistically significant (p < 0.001) relationship between the two parameters. The correlation coefficients in the pooled sample (0.861-0.882) were comparatively higher than those of the individual male (0.762-0.795) and female (0.722-0.765) samples. This study provides regression equations to estimate stature from footprints in the Bidayuh population. The results showed that the regression equations without sex indicators performed significantly better than models with sex indicators. The regression equations derived for the pooled sample can therefore be used to estimate stature even when the sex of the footprint maker is unknown, as at real crime scenes.
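The stature-from-footprint models in this record are simple linear regressions. As a minimal sketch of the idea (the footprint lengths, statures, and resulting coefficients below are invented, not the Bidayuh data):

```python
# Hypothetical illustration: ordinary least squares fit of stature (cm)
# against footprint length (cm), plus the correlation coefficient R
# the abstract reports. Pure Python, closed-form simple regression.

def fit_simple_regression(x, y):
    """OLS fit y = a + b*x; returns (a, b, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    r = sxy / (sxx * syy) ** 0.5      # correlation coefficient R
    return a, b, r

# Made-up sample: footprint lengths (cm) and statures (cm)
foot = [22.1, 23.4, 24.0, 24.8, 25.5, 26.3, 27.0]
stature = [152.0, 158.5, 160.2, 164.0, 167.8, 171.1, 175.0]

a, b, r = fit_simple_regression(foot, stature)
print(f"stature ~ {a:.1f} + {b:.2f} * footprint length, R = {r:.3f}")
```

A pooled-sample equation of this form can then be applied without knowing the sex of the footprint maker, as the abstract suggests.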

  7. Estimation of Electrically-Evoked Knee Torque from Mechanomyography Using Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Morufu Olusola Ibitoye

    2016-07-01

Full Text Available The difficulty of real-time muscle force or joint torque estimation during neuromuscular electrical stimulation (NMES) in physical therapy and exercise science has motivated recent research interest in torque estimation from other muscle characteristics. This study investigated the accuracy of a computational intelligence technique for estimating NMES-evoked knee extension torque based on the mechanomyographic (MMG) signals of contracting muscles, recorded from eight healthy males. The knee torque was modelled via Support Vector Regression (SVR) due to its good generalization ability in related fields. Inputs to the proposed model were MMG amplitude characteristics, the level of electrical stimulation or contraction intensity, and knee angle. A Gaussian kernel function and its optimal parameters were identified via the best performance measure and applied as the SVR kernel function to build an effective knee torque estimation model. To train and test the model, the data were partitioned into training (70%) and testing (30%) subsets, respectively. The SVR estimation accuracy, based on the coefficient of determination (R2) between the actual and the estimated torque values, was up to 94% and 89% during the training and testing cases, with root mean square errors (RMSE) of 9.48 and 12.95, respectively. The knee torque estimates obtained using SVR modelling agreed well with the experimental data from an isokinetic dynamometer. These findings support the realization of a closed-loop NMES system for functional tasks using MMG as the feedback signal source and an SVR algorithm for joint torque estimation.
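A hedged sketch of this setup: the paper trains SVR with a Gaussian kernel on MMG features. For a dependency-light illustration, kernel *ridge* regression is substituted here; it is a close least-squares relative of SVR that uses the same Gaussian kernel, not the paper's exact method. The three inputs (MMG amplitude, stimulation level, knee angle) and the target "torque" are entirely synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=2.0):
    # Gaussian (RBF) kernel matrix between row-vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 3))        # columns: MMG amp, stim level, knee angle
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=80)

# 70/30 train/test split, as in the paper
Xtr, Xte, ytr, yte = X[:56], X[56:], y[:56], y[56:]

K = rbf_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(Xtr)), ytr)   # ridge regularizer
y_hat = rbf_kernel(Xte, Xtr) @ alpha

r2 = 1 - ((yte - y_hat) ** 2).sum() / ((yte - yte.mean()) ** 2).sum()
print("test R^2:", round(r2, 3))
```

An R2 computed this way on a held-out split is the accuracy measure the abstract quotes.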

8. Performance and separation occurrence of binary probit regression estimators using the maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One approach to resolving separation is to use Firth's method instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that for small sample sizes the chance of separation occurring with the MLE method is higher than with Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs differ little. This means that Firth's estimators outperform the MLE estimators.

  9. Image Jacobian Matrix Estimation Based on Online Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Shangqin Mao

    2012-10-01

Full Text Available Research into robotic visual servoing is an important area in the field of robotics. It has proven difficult to achieve successful results for machine vision and robotics in unstructured environments without using any a priori camera or kinematic models. In uncalibrated visual servoing, image Jacobian matrix estimation methods can be divided into two groups: online methods and offline methods. The offline method is not appropriate for most natural environments. The online method is robust but rough; moreover, if the image feature configuration changes, it needs to restart the approximating procedure. A novel approach based on an online support vector regression (OL-SVR) algorithm is proposed which overcomes these drawbacks and combines the virtues just mentioned.

10. Power system state estimation using an iteratively reweighted least squares method for sequential L1-regression

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2006-02-15

This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L1-regression problem using an iteratively reweighted least squares (IRLS-L1) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require common LP techniques such as the simplex method or interior-point methods. It is shown that the IRLS-L1 method is equivalent to solving a sequence of linear weighted least squares (LS) problems; its implementation therefore requires little additional effort, since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS-L1 method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
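The core idea of the record above, solving an L1 (least-absolute-value) regression as a sequence of reweighted least-squares problems, can be sketched on a toy problem. The state-estimation context is replaced here by a plain linear model with one gross outlier; all numbers are invented.

```python
import numpy as np

def irls_l1(X, y, iters=50, eps=1e-8):
    """L1 regression via IRLS: each pass solves a weighted LS problem
    with weights 1/|residual|, which converges to the LAV solution."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)   # reweight by inverse residual
    return beta

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 40)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.05 * rng.normal(size=40)
y[0] += 50.0                                   # one gross "bad measurement"

beta_l1 = irls_l1(X, y)
beta_l2 = np.linalg.solve(X.T @ X, X.T @ y)    # plain least squares, for contrast
print("L1 fit:", beta_l1.round(3), " L2 fit:", beta_l2.round(3))
```

The LAV fit shrugs off the bad measurement, which is exactly why LAV estimators are attractive in state estimation; the plain LS fit is pulled toward the outlier.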

  11. Comparison of Classical and Robust Estimates of Threshold Auto-regression Parameters

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2017-01-01

Full Text Available The study object is the first-order threshold autoregression model with a single threshold located at zero. The model describes a stochastic temporal series with discrete time by means of a piecewise linear equation consisting of two classical linear first-order autoregressive equations. One of these equations is used to calculate the running value of the temporal series; the control variable that determines the choice between the two equations is the sign of the previous value of the same series. The first-order threshold autoregressive model with a single threshold depends on two real parameters that coincide with the coefficients of the piecewise linear threshold equation. These parameters are assumed to be unknown. The paper studies the least squares estimate, the least modules estimate, and the M-estimates of these parameters. The aim of the paper is a comparative study of the accuracy of these estimates for the main probability distributions of the updating process of the threshold autoregressive equation: the normal, contaminated normal, logistic and double-exponential distributions, Student's distributions with different numbers of degrees of freedom, and the Cauchy distribution. As the measure of accuracy of each estimate, its variance was chosen, measuring the scatter of the estimate around the estimated parameter; of two estimates, the one with the smaller variance was considered the better. The variance was estimated by computer simulation. The least modules estimate was computed with an iteratively reweighted least-squares method, the M-estimates by the deformable polyhedron (Nelder-Mead) method, and the least squares estimate from an explicit analytic expression. It turned out that the least squares estimate is best only under a normal distribution of the updating process. For the logistic distribution and the Student's distribution with the

  12. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and derives their parameters with the ordinary least squares estimator. A significance test for the polynomial regression function is then derived by exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
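The procedure described above can be sketched as follows: fit a polynomial by ordinary least squares, then test overall regression significance with the usual F statistic from multivariable linear regression. The decay-heat data below are invented for illustration.

```python
import numpy as np

t = np.linspace(0, 10, 20)                       # time, arbitrary units
rng = np.random.default_rng(2)
p = 5.0 - 0.8 * t + 0.03 * t**2 + 0.05 * rng.normal(size=20)   # "decay heat"

# Quadratic polynomial as a linear model in [1, t, t^2]
X = np.column_stack([np.ones_like(t), t, t**2])
beta, *_ = np.linalg.lstsq(X, p, rcond=None)
p_hat = X @ beta

# Overall significance: F = (SS_reg / k) / (SS_res / (n - k - 1))
n, k = len(p), X.shape[1] - 1                    # k regressors beyond the intercept
ss_reg = ((p_hat - p.mean()) ** 2).sum()
ss_res = ((p - p_hat) ** 2).sum()
F = (ss_reg / k) / (ss_res / (n - k - 1))
print("coefficients:", beta.round(3), " F =", round(F, 1))
```

A large F (compared against the F(k, n-k-1) critical value) rejects the hypothesis that all polynomial coefficients beyond the intercept are zero.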

  13. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  14. Regression and kriging analysis for grid power factor estimation

    Directory of Open Access Journals (Sweden)

    Rajesh Guntaka

    2014-12-01

Full Text Available The measurement of power factor (PF) in electrical utility grids is a mainstay of load balancing and a critical element of transmission and distribution efficiency. The measurement of PF dates back to the earliest periods of electrical power distribution to public grids. In the wide-area distribution grid, measurement of current waveforms is trivial and may be accomplished at any point in the grid using a current tap transformer. However, voltage measurement requires a reference to ground and so is more problematic; measurements are normally constrained to points with ready access to a ground source. We present two mathematical analysis methods, based on kriging and on linear least squares estimation (LLSE; regression), to derive PF at nodes with unknown voltages that lie within a perimeter of sample nodes with ground reference across a selected power grid. Our results indicate an average error of 1.884%, which is within acceptable tolerances for PF measurements used in load balancing tasks.
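A hedged sketch of the two estimator families this record compares, on invented data: (a) LLSE, fitting PF as a plane over grid coordinates; and (b) a simple inverse-distance interpolator standing in for kriging (full kriging additionally models spatial covariance via a variogram, omitted here for brevity).

```python
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(30, 2))             # sample-node coordinates
pf = 0.92 + 0.04 * xy[:, 0] - 0.03 * xy[:, 1]    # smooth synthetic "true" PF field
target = np.array([0.5, 0.5])                    # node with unknown voltage

# (a) LLSE / regression: fit PF = c0 + c1*x + c2*y, evaluate at the target
A = np.column_stack([np.ones(len(xy)), xy])
coef, *_ = np.linalg.lstsq(A, pf, rcond=None)
pf_llse = coef @ np.array([1.0, *target])

# (b) inverse-distance weighting (a crude kriging stand-in)
d = np.linalg.norm(xy - target, axis=1)
w = 1.0 / (d + 1e-9) ** 2
pf_idw = (w * pf).sum() / w.sum()

true_pf = 0.92 + 0.04 * 0.5 - 0.03 * 0.5
print("LLSE:", pf_llse, " IDW:", pf_idw, " true:", true_pf)
```

Both estimates recover the PF at the unmetered node from the grounded sample nodes surrounding it.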

  15. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest...

  16. General Practitioners’ Decisions about Discontinuation of Medication

    DEFF Research Database (Denmark)

    Nixon, Michael Simon; Vendelø, Morten Thanning

    2016-01-01

Purpose – The purpose of this paper is to investigate how general practitioners' (GPs) decisions about discontinuation of medication are influenced by their institutional context. Design/methodology/approach – In total, 24 GPs were interviewed, three practices were observed and documents were... a weak frame for discontinuation. Three reasons for this are identified: the guidelines provide dominating triggers for prescribing, they provide weak priming for discontinuation as an option, and they underscore a cognitive constraint against discontinuation. Originality/value – The analysis offers new insights about decision making when discontinuing medication. It also offers one of the first examinations of how the institutional context embedding GPs influences their decisions about discontinuation. For policymakers interested in the discontinuation of medication, the findings suggest that de...

  17. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Factors affecting IUCD discontinuation in Nepal

    DEFF Research Database (Denmark)

    Thapa, Subash; Paudel, Ishwari Sharma; Bhattarai, Sailesh

    2015-01-01

    Information related to contraception discontinuation, especially in the context of Nepal is very limited. A nested case-control study was carried out to determine the factors affecting discontinuation of intrauterine contraceptive devices (IUCDs). A total of 115 cases (IUCD discontinuers) and 115...

  19. Fixed-time stability of dynamical systems and fixed-time synchronization of coupled discontinuous neural networks.

    Science.gov (United States)

    Hu, Cheng; Yu, Juan; Chen, Zhanheng; Jiang, Haijun; Huang, Tingwen

    2017-05-01

    In this paper, the fixed-time stability of dynamical systems and the fixed-time synchronization of coupled discontinuous neural networks are investigated under the framework of Filippov solution. Firstly, by means of reduction to absurdity, a theorem of fixed-time stability is established and a high-precision estimation of the settling-time is given. It is shown by theoretic proof that the estimation bound of the settling time given in this paper is less conservative and more accurate compared with the classical results. Besides, as an important application, the fixed-time synchronization of coupled neural networks with discontinuous activation functions is proposed. By designing a discontinuous control law and using the theory of differential inclusions, some new criteria are derived to ensure the fixed-time synchronization of the addressed coupled networks. Finally, two numerical examples are provided to show the effectiveness and validity of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
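The record above claims a less conservative settling-time estimate than the classical results. For context, the standard (Polyakov-type) fixed-time bound it is measured against can be stated as follows; this is common background from the fixed-time stability literature, not the paper's sharper estimate.

```latex
% If a Lyapunov function V satisfies, along system trajectories,
%   \dot V(x) \le -a\,V^{p}(x) - b\,V^{q}(x), \quad a,b>0,\ 0<p<1<q,
% then the origin is fixed-time stable and the settling time is bounded,
% uniformly in the initial condition x_0, by
T(x_0) \;\le\; T_{\max} \;=\; \frac{1}{a\,(1-p)} \;+\; \frac{1}{b\,(q-1)}.
```

The two terms bound the time to drive $V$ from arbitrarily large values down to $1$ (governed by the $V^{q}$ term) and from $1$ to $0$ (governed by the $V^{p}$ term), respectively.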

  20. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

    A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations of different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were applied to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)
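The REIF scheme above is, at its core, a multiple linear regression from an image-derived feature vector to properties of the arterial input function, trained on 29 subjects and applied to 15 others. The sketch below mimics that split on entirely synthetic features and targets; the feature construction and coefficients are placeholders, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
n_train, n_test, n_feat = 29, 15, 5            # split sizes as in the paper

# Synthetic "features": interval sums of time-activity curves plus dose
X = rng.uniform(0.5, 2.0, size=(n_train + n_test, n_feat))
true_w = np.array([1.2, -0.4, 0.8, 0.3, 0.5])  # invented linear relation
auc = X @ true_w + 0.02 * rng.normal(size=n_train + n_test)   # e.g. input-function AUC

Xtr, Xte = X[:n_train], X[n_train:]
ytr, yte = auc[:n_train], auc[n_train:]

# Fit regression coefficients on the training subjects, apply to the rest
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
err_pct = 100 * np.abs(Xte @ w - yte) / yte
print("mean |error| %:", err_pct.mean().round(2))
```

The averaged percentage error on held-out subjects is the same style of accuracy figure the abstract reports for AUC and CMRGlc.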

  1. Predictors of premature discontinuation of outpatient treatment after discharge of patients with posttraumatic stress disorder

    Directory of Open Access Journals (Sweden)

    Wang HR

    2015-03-01

    Full Text Available Hee Ryung Wang, Young Sup Woo, Tae-Youn Jun, Won-Myong Bahk Department of Psychiatry, College of Medicine, The Catholic University of Korea, Seoul, Korea Objective: This study aimed to examine the sociodemographic and disease-related variables associated with the premature discontinuation of psychiatric outpatient treatment after discharge among patients with noncombat-related posttraumatic stress disorder. Methods: We retrospectively reviewed the medical records of patients who were discharged with a diagnosis of posttraumatic stress disorder. Results: Fifty-five percent of subjects (57/104 prematurely discontinued outpatient treatment within 6 months of discharge. Comparing sociodemographic variables between the 6-month non-follow-up group and 6-month follow-up group, there were no variables that differed between the two groups. However, comparing disease-related variables, the 6-month follow-up group showed a longer hospitalization duration and higher Global Assessment of Function score at discharge. The logistic regression analysis showed that a shorter duration of hospitalization predicted premature discontinuation of outpatient treatment within 6 months of discharge. Conclusion: The duration of psychiatric hospitalization for posttraumatic stress disorder appeared to influence the premature discontinuation of outpatient treatment after discharge. Keywords: posttraumatic stress disorder, discontinuation, compliance, predictor

  2. 27 CFR 555.128 - Discontinuance of business.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2010-04-01 2010-04-01 false Discontinuance of business... Discontinuance of business. Where an explosive materials business or operations is discontinued and succeeded by... such facts and shall be delivered to the successor. Where discontinuance of the business or operations...

  3. Soil moisture estimation using multi linear regression with terraSAR-X data

    Directory of Open Access Journals (Sweden)

    G. García

    2016-06-01

Full Text Available The first five centimeters of soil form an interface where the main heat flux exchanges between the land surface and the atmosphere occur. Besides ground measurements, remote sensing has proven to be an excellent tool for monitoring spatially and temporally distributed data on the most relevant Earth surface parameters, including soil parameters. Indeed, active microwave sensors (Synthetic Aperture Radar, SAR) offer the opportunity to monitor soil moisture (HS) at global, regional and local scales by monitoring the involved processes. Several inversion algorithms that derive geophysical information such as HS from SAR data have been developed. Many of them use electromagnetic models to simulate the backscattering coefficient and are based on statistical techniques, such as neural networks, inversion methods and regression models. Recent studies have shown that simple multiple regression techniques yield satisfactory results. The geophysical variables involved in these methodologies describe the soil structure, microwave characteristics and land use. Therefore, in this paper we aim at developing a multiple linear regression model to estimate HS over flat agricultural regions using TerraSAR-X satellite data and data from a ground weather station. The results show that the backscatter, the precipitation and the relative humidity are the explanatory variables of HS. The results obtained presented an RMSE of 5.4 and an R2 of about 0.6.
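The model form this record arrives at, soil moisture regressed on backscatter, precipitation and relative humidity, can be sketched directly; all numbers below are synthetic, and only the model structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
sigma0 = rng.uniform(-14, -6, n)               # TerraSAR-X backscatter (dB), invented
precip = rng.uniform(0, 20, n)                 # precipitation (mm), invented
rh = rng.uniform(30, 90, n)                    # relative humidity (%), invented
hs = 40 + 1.5 * sigma0 + 0.6 * precip + 0.15 * rh + 3.0 * rng.normal(size=n)

# Multiple linear regression: HS ~ backscatter + precipitation + humidity
X = np.column_stack([np.ones(n), sigma0, precip, rh])
beta, *_ = np.linalg.lstsq(X, hs, rcond=None)
hs_hat = X @ beta

r2 = 1 - ((hs - hs_hat) ** 2).sum() / ((hs - hs.mean()) ** 2).sum()
rmse = np.sqrt(((hs - hs_hat) ** 2).mean())
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```

RMSE and R2 computed this way are the two figures of merit the abstract quotes (5.4 and about 0.6).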

  4. 27 CFR 478.57 - Discontinuance of business.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2010-04-01 2010-04-01 false Discontinuance of business... Licenses § 478.57 Discontinuance of business. (a) Where a firearm or ammunition business is either discontinued or succeeded by a new owner, the owner of the business discontinued or succeeded shall within 30...

  5. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation using an approach that utilizes fuzzy regression functions with support vector machines (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and apply the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is affected by neither outliers nor the over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial for long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and the over-fitting problem. • The estimation accuracy of the model is superior to several existing alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.

  6. Estimation of evapotranspiration across the conterminous United States using a regression with climate and land-cover data

    Science.gov (United States)

    Sanford, Ward E.; Selnick, David L.

    2013-01-01

    Evapotranspiration (ET) is an important quantity for water resource managers to know because it often represents the largest sink for precipitation (P) arriving at the land surface. In order to estimate actual ET across the conterminous United States (U.S.) in this study, a water-balance method was combined with a climate and land-cover regression equation. Precipitation and streamflow records were compiled for 838 watersheds for 1971-2000 across the U.S. to obtain long-term estimates of actual ET. A regression equation was developed that related the ratio ET/P to climate and land-cover variables within those watersheds. Precipitation and temperatures were used from the PRISM climate dataset, and land-cover data were used from the USGS National Land Cover Dataset. Results indicate that ET can be predicted relatively well at a watershed or county scale with readily available climate variables alone, and that land-cover data can also improve those predictions. Using the climate and land-cover data at an 800-m scale and then averaging to the county scale, maps were produced showing estimates of ET and ET/P for the entire conterminous U.S. Using the regression equation, such maps could also be made for more detailed state coverages, or for other areas of the world where climate and land-cover data are plentiful.

  7. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    Science.gov (United States)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. 
Then, discontinuity set orientation is calculated using Kernel Density Estimation and
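
    The KNN + PCA facet-detection step described above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data, not the authors' Matlab tool; the neighbourhood size `k` and the planarity score are illustrative choices:

```python
import numpy as np

def knn_pca_normals(points, k=12):
    """Estimate a unit normal and a planarity score for each 3D point
    from the PCA of its k nearest neighbours (brute-force search)."""
    n = len(points)
    normals = np.zeros((n, 3))
    planarity = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
        normals[i] = v[:, 0]                 # smallest-variance direction
        planarity[i] = 1.0 - w[0] / w.sum()  # ~1 for coplanar neighbourhoods
    return normals, planarity

# Synthetic horizontal facet: points on the z = 0 plane plus tiny noise
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 200),
                       rng.uniform(0, 1, 200),
                       rng.normal(0, 1e-4, 200)])
normals, planarity = knn_pca_normals(pts)
mean_vertical = np.abs(normals[:, 2]).mean()  # should be ~1 (normals along z)
```

    Coplanar points are then grouped into discontinuity sets by clustering these normals, e.g. on the unit sphere.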

  8. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
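
    The closed-form GP posterior mean that underlies GPR can be sketched with NumPy. This toy example reconstructs a smooth 1-D function rather than SDSS spectra, and the RBF kernel hyperparameters are illustrative assumptions:

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean of a zero-mean GP with an RBF kernel
    (textbook closed form: K_s (K + sigma_n^2 I)^{-1} y)."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)
    K = rbf(X, X) + sigma_n**2 * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

# Toy "observable-to-parameter" mapping: recover a smooth function
X = np.linspace(0, 5, 40)
y = np.sin(X)
Xs = np.array([1.5, 3.0])
pred = gpr_predict(X, y, Xs, sigma_n=0.01)
err = np.max(np.abs(pred - np.sin(Xs)))
```

    In practice the hyperparameters are optimized by maximizing the marginal likelihood, which is the property that makes GPR easier to tune than ANNs or kernel regression.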

  9. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    … the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios, these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
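
    For a single constant rate, the Poisson approach the abstract benchmarks against reduces to a closed-form MLE: beta_hat = (total new cases) / (total exposure, i.e. the sum of S·I/N·dt). A hypothetical simulation sketch (not the authors' models):

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, N, dt = 0.3, 1000.0, 1.0

# Simulate daily new cases C_t ~ Poisson(beta * S_t * I_t / N * dt)
S, I = 990.0, 10.0
exposure, cases = [], []
for _ in range(60):
    lam = beta_true * S * I / N * dt
    c = min(rng.poisson(lam), int(S))  # cannot infect more than S
    exposure.append(S * I / N * dt)
    cases.append(c)
    S -= c
    I += c  # no recovery, for simplicity

# Poisson MLE for a single rate: beta_hat = sum(cases) / sum(exposure)
beta_hat = np.sum(cases) / np.sum(exposure)
```

    Coarser sampling intervals aggregate S and I over time, which is one source of the bias the paper investigates.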

  10. Management applications of discontinuity theory

    Science.gov (United States)

    Angeler, David G.; Allen, Craig R.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance H.; Knutson, Melinda; Nash, Kirsty L.; Nelson, R. John; Nystrom, Magnus; Spanbauer, Trisha; Stow, Craig A.; Sundstrom, Shana M.

    2015-01-01

    Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation.

  11. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    Science.gov (United States)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
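
    The T-T plot idea — regress one frequency map on another over pixels that share a common foreground, and read relative offsets from the intercept — can be illustrated on synthetic "maps" (all numbers here are hypothetical, not the published coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
foreground = rng.gamma(2.0, 1.0, 500)                      # common sky signal
map_a = 3.0 * foreground + 1.5 + rng.normal(0, 0.05, 500)  # true offset 1.5
map_b = 1.0 * foreground + 0.2 + rng.normal(0, 0.05, 500)  # true offset 0.2

# T-T plot: linear regression of map_a on map_b, pixel by pixel.
# The intercept mixes the two monopoles: a0 = off_a - slope * off_b.
slope, intercept = np.polyfit(map_b, map_a, 1)
recovered_off_a = intercept + slope * 0.2  # if map_b's offset were known
```

    The degeneracy between the two offsets is what the local-patch regression and spectral-index variation break in the actual method.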

  12. Improved regression models for ventilation estimation based on chest and abdomen movements

    International Nuclear Information System (INIS)

    Liu, Shaopeng; Gao, Robert; He, Qingbo; Staudenmayer, John; Freedson, Patty

    2012-01-01

    Non-invasive estimation of minute ventilation is important for quantifying the intensity of physical activity of individuals. In this paper, several improved regression models are presented, based on the measurement of chest and abdomen movements from sensor belts worn by subjects (n = 50) engaged in 14 types of physical activity. Five linear models involving a combination of 11 features were developed, and the effects of different model training approaches and window sizes for computing the features were investigated. The performance of the models was evaluated using experimental data collected during the physical activity protocol. The predicted minute ventilation was compared to the criterion ventilation measured using a bidirectional digital volume transducer housed in a respiratory gas exchange system. The results indicate that the inclusion of breathing frequency and the use of percentile points instead of interdecile ranges over a 60 s window size reduced error by about 43%, when applied to the classical two-degrees-of-freedom model. The mean percentage error of the minute ventilation estimated for all the activities was below 7.5%, verifying reasonably good performance of the models and the applicability of the wearable sensing system for minute ventilation estimation during physical activity. (paper)

  13. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical …

  14. Vertebral Fractures After Discontinuation of Denosumab

    DEFF Research Database (Denmark)

    Cummings, Steven R; Ferrari, Serge; Eastell, Richard

    2018-01-01

    We analyzed the risk of new or worsening vertebral fractures, especially multiple vertebral fractures, in participants who discontinued denosumab during the FREEDOM study or its Extension. Participants received ≥2 doses of denosumab or placebo Q6M, discontinued treatment, and stayed in the study ≥7 months after the last dose. Of 1001 participants who discontinued denosumab during FREEDOM or Extension, the vertebral fracture rate increased from 1.2 per 100 participant-years during the on-treatment period to 7.1, similar to participants who received and then discontinued placebo (n = 470; 8.5 per 100 participant-years). Therefore, patients who discontinue denosumab should rapidly transition to an alternative antiresorptive treatment. ClinicalTrials.gov: NCT00089791 (FREEDOM) and NCT00523341 (Extension). © 2017 American Society for Bone and Mineral Research.

  15. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Sai Ma

    2017-01-01

    Modal parameter estimation plays an important role in vibration-based damage detection and deserves further attention and investigation, as changes in modal parameters are usually used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA). A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.
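
    Plain (non-recursive) kernel ridge regression, the building block named in the title, has a two-line closed form: alpha = (K + lam·I)⁻¹ y, f(x*) = k(x*, X)·alpha. A minimal NumPy sketch with an RBF kernel and illustrative hyperparameters:

```python
import numpy as np

def krr_fit_predict(X, y, Xs, lam=1e-3, gamma=50.0):
    """Kernel ridge regression with an RBF kernel:
    alpha = (K + lam I)^{-1} y,  f(x*) = k(x*, X) alpha."""
    def K(A, B):
        return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)
    alpha = np.linalg.solve(K(X, X) + lam * np.eye(len(X)), y)
    return K(Xs, X) @ alpha

X = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * X)
pred = krr_fit_predict(X, y, np.array([0.25, 0.75]))
err = np.max(np.abs(pred - np.array([1.0, -1.0])))  # sin peaks at these points
```

    The recursive FS-TARMA scheme updates such a fit online as new vibration samples arrive, rather than re-solving the full system.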

  16. Identification of a Discontinuous Parameter in Stochastic Parabolic Systems

    International Nuclear Information System (INIS)

    Aihara, S. I.

    1998-01-01

    The purpose of this paper is to study the identification problem for a spatially varying discontinuous parameter in stochastic diffusion equations. The consistency property of the maximum likelihood estimate (M.L.E.) and a generating algorithm for M.L.E. have been explored under the condition that the unknown parameter is in a sufficiently regular space with respect to spatial variables. In order to prove the consistency property of the M.L.E. for a discontinuous diffusion coefficient, we use the method of sieves, i.e., first the admissible class of unknown parameters is projected into a finite-dimensional space and next the convergence of the derived finite-dimensional M.L.E. to the infinite-dimensional M.L.E. is justified under some conditions. An iterative algorithm for generating the M.L.E. is also proposed with two numerical examples

  17. Medication persistence and discontinuation of rivaroxaban versus warfarin among patients with non-valvular atrial fibrillation.

    Science.gov (United States)

    Nelson, Winnie W; Song, Xue; Coleman, Craig I; Thomson, Erin; Smith, David M; Damaraju, C V; Schein, Jeffrey R

    2014-12-01

    To compare real-world persistence and discontinuation among non-valvular atrial fibrillation (NVAF) patients on rivaroxaban and warfarin in the US. A large nationally representative US claims database was used to conduct a retrospective cohort analysis of patients with NVAF treated with rivaroxaban or warfarin from 1 July 2010 through 31 March 2013. Index date was the date of the first prescription of rivaroxaban or warfarin. All patients were followed until the earliest of inpatient death, end of continuous enrollment, or end of study period. Rivaroxaban patients were matched 1:1 by propensity scores. Medication persistence was defined as absence of refill gap of ≥ 60 days. Discontinuation was defined as no additional refill for at least 90 days and until the end of follow-up. Cox proportional hazards models were estimated to examine the adjusted hazard ratios (aHRs) of rivaroxaban vs. warfarin on non-persistence and discontinuation. A total of 32,886 NVAF patients on rivaroxaban or warfarin met the study inclusion criteria. Each of the 7259 rivaroxaban patients identified were matched 1:1 to warfarin patients. Patients on rivaroxaban had a significantly better rate of persistence (aHR: 0.63, 95% CI 0.59-0.68) and lower rate of discontinuation (aHR: 0.54, 95% CI 0.49-0.58) compared to warfarin recipients. Claims data may have contained inaccuracies and miscoding. Confounding may remain even after propensity score matching and additional adjustments in model. Refill data may not fully reflect actual medication use. Longer follow-up may produce more precise estimates of persistence and discontinuation. This matched cohort analysis indicated that rivaroxaban was associated with significantly higher medication persistence and lower discontinuation rates compared to warfarin.
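
    The refill-gap definition of persistence used above (non-persistent if any gap reaches 60 days) is straightforward to operationalize. A hypothetical sketch assuming a fixed days-supply per fill, which the claims data would actually provide per prescription:

```python
from datetime import date

def persistence_status(fill_dates, days_supply, gap_days=60):
    """Classify a patient as non-persistent if any refill gap
    (previous fill + days supplied -> next fill) is >= gap_days."""
    fills = sorted(fill_dates)
    for prev, nxt in zip(fills, fills[1:]):
        gap = (nxt - prev).days - days_supply
        if gap >= gap_days:
            return False
    return True

persistent = persistence_status(
    [date(2012, 1, 1), date(2012, 2, 1), date(2012, 3, 3)], days_supply=30)
non_persistent = persistence_status(
    [date(2012, 1, 1), date(2012, 4, 15)], days_supply=30)  # 75-day gap
```

    The Cox models in the study then treat the first such gap as the non-persistence event time.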

  18. An improved geographically weighted regression model for PM2.5 concentration estimation in large areas

    Science.gov (United States)

    Zhai, Liang; Li, Shuang; Zou, Bin; Sang, Huiyong; Fang, Xin; Xu, Shan

    2018-05-01

    Considering the spatially non-stationary contributions of environmental variables to PM2.5 variations, the geographically weighted regression (GWR) modeling method has been widely used to estimate PM2.5 concentrations. However, most GWR models in reported studies so far were established on predictors screened through pretreatment correlation analysis, a process that might omit factors really driving PM2.5 variations. This study therefore developed a best subsets regression (BSR) enhanced principal component analysis-GWR (PCA-GWR) modeling approach to estimate PM2.5 concentration while considering all potential variables' contributions simultaneously. A performance comparison between PCA-GWR and regular GWR was conducted in the Beijing-Tianjin-Hebei (BTH) region over a one-year period. Results indicated that PCA-GWR modeling outperforms regular GWR modeling, with clearly higher model-fitting and cross-validation adjusted R2 and lower RMSE. Meanwhile, the distribution map of PM2.5 concentration from PCA-GWR modeling also depicts more spatial variation detail than the one from regular GWR modeling. It can be concluded that BSR enhanced PCA-GWR modeling could be a reliable way to effectively estimate air pollution concentrations by involving all potential predictor variables' contributions to PM2.5 variations.
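
    At its core, GWR fits a separate weighted least-squares regression at each calibration point, with weights decaying with distance. A minimal sketch for a single location, using a Gaussian kernel and synthetic data with a spatially varying slope (bandwidth and data are illustrative assumptions):

```python
import numpy as np

def gwr_local_coefs(coords, x, y, loc, bandwidth):
    """One GWR calibration point: weighted least squares with a
    Gaussian distance-decay kernel centred at `loc`."""
    d = np.linalg.norm(coords - loc, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(x)), x])
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # [intercept, slope]

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, (300, 2))
x = rng.normal(0, 1, 300)
beta = 1 + 0.3 * coords[:, 0]          # slope varies with easting
y = 2.0 + beta * x + rng.normal(0, 0.05, 300)
b = gwr_local_coefs(coords, x, y, loc=np.array([8.0, 5.0]), bandwidth=1.5)
```

    Repeating this at every location yields the surface of local parameter estimates that global OLS cannot express.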

  19. Special discontinuities in nonlinearly elastic media

    Science.gov (United States)

    Chugainova, A. P.

    2017-06-01

    Solutions of a nonlinear hyperbolic system of equations describing weakly nonlinear quasitransverse waves in a weakly anisotropic elastic medium are studied. The influence of small-scale processes of dissipation and dispersion is investigated. The small-scale processes determine the structure of discontinuities (shocks) and a set of discontinuities with a stationary structure. Among the discontinuities with a stationary structure, there are special ones that, in addition to relations following from conservation laws, satisfy additional relations required for the existence of their structure. In the phase plane, the structure of such discontinuities is represented by an integral curve joining two saddles. Special discontinuities lead to nonunique self-similar solutions of the Riemann problem. Asymptotics of non-self-similar problems for equations with dissipation and dispersion are found numerically. These asymptotics correspond to self-similar solutions of the problems.

  20. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based …

  1. Ultrasonic assessment of shrinkage type discontinuities

    International Nuclear Information System (INIS)

    Hubber, John

    2010-01-01

    This investigation into ultrasonic internal discontinuities is intended to demonstrate typical examples of internal 'shrinkage' type discontinuities and their connection with casting suitability, integrity and reliability in service. This type of discontinuity can be misinterpreted by ultrasonic technicians and can lead to the unnecessary rejection of castings, due to the mis-characterization of fine shrinkage (discrete porosity). The samples for this investigation were taken from thirty-ton heavy-section ductile iron mill flange castings, manufactured by Graham Campbell Ferrum International. The sampled areas contained discontinuities that were recorded for sizing based on loss of back-wall echo, but had acceptable reflectivity. A comparative sample was taken adjacent to the area of discrete porosity. The discontinuities found by this investigation are of a 'spongy' type, gaseous in appearance, and are surrounded by acoustically sound material. All discontinuities discussed in this paper are centrally located in the through-thickness of the casting. The porous nature of this type of discontinuity, consisting of approximately 80-90% metal, gives it its own residual strength, as indicated by the proof stress results, which reveal a residual strength of up to 50-60% of that of the unaffected area of the casting. The affected areas are elliptical in shape and vary in density and through-thickness throughout.

  2. Data-driven method based on particle swarm optimization and k-nearest neighbor regression for estimating capacity of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom

    2014-01-01

    Highlights: • We develop a data-driven method for the battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the life-time. This paper presents a data-driven method for estimating the capacity of Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on the k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to finding the optimal combination of feature weights for creating a kNN regression model that minimizes the cross validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of Li-ion battery throughout the whole life-time
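
    The feature-weighted kNN regression at the heart of the method can be sketched as follows; here the feature weights are fixed by hand rather than PSO-optimized, and the data are synthetic rather than charge-curve features:

```python
import numpy as np

def weighted_knn_regress(Xtr, ytr, xq, weights, k=5):
    """kNN regression where per-feature distances are scaled by weights
    (in the paper these weights are tuned by PSO to minimize CV error)."""
    d = np.sqrt((((Xtr - xq) * weights) ** 2).sum(axis=1))
    idx = np.argsort(d)[:k]
    return ytr[idx].mean()

rng = np.random.default_rng(4)
Xtr = rng.uniform(0, 1, (400, 2))
ytr = 2 * Xtr[:, 0]                       # only feature 0 is informative
# Up-weighting the informative feature (and zeroing the noisy one)
pred = weighted_knn_regress(Xtr, ytr, np.array([0.5, 0.5]),
                            weights=np.array([1.0, 0.0]))
```

    PSO then searches the weight space so that the cross-validated capacity error of exactly this predictor is minimized.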

  3. Estimating the Impact of Urbanization on Air Quality in China Using Spatial Regression Models

    Directory of Open Access Journals (Sweden)

    Chuanglin Fang

    2015-11-01

    Urban air pollution is one of the most visible environmental problems to have accompanied China’s rapid urbanization. Based on emission inventory data from 2014, gathered from 289 cities, we used Global and Local Moran’s I to measure the spatial autocorrelation of Air Quality Index (AQI) values at the city level, and employed Ordinary Least Squares (OLS), Spatial Lag Model (SAR), and Geographically Weighted Regression (GWR) to quantitatively estimate the comprehensive impact and spatial variations of China’s urbanization process on air quality. The results show that significant spatial dependence and heterogeneity existed in AQI values. Regression models revealed urbanization has played an important negative role in determining air quality in Chinese cities. The population, urbanization rate, automobile density, and the proportion of secondary industry were all found to have had a significant influence on air quality. Per capita Gross Domestic Product (GDP) and the scale of urban land use, however, failed the significance test at the 10% level. The GWR model performed better than the global models, and its results show that the relationship between urbanization and air quality was not constant in space. Further, the local parameter estimates suggest significant spatial variation in the impacts of various urbanization factors on air quality.
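
    Global Moran's I, used above to test spatial autocorrelation of AQI values, is a weighted cross-product statistic: I = (n / ΣW) · (zᵀWz) / (zᵀz) for mean-centred values z. A minimal sketch on a 2×2 rook-adjacency grid, where a perfect checkerboard gives I = -1:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z)."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# 2x2 rook-adjacency grid: cells (0,0), (0,1), (1,0), (1,1)
vals = np.array([1.0, -1.0, -1.0, 1.0])   # checkerboard pattern
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
I = morans_i(vals, W)  # perfect negative autocorrelation
```

    Positive I (clustering of similar AQI values) is what motivates moving from OLS to SAR and GWR in the study.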

  4. Stabilities of MHD rotational discontinuities

    International Nuclear Information System (INIS)

    Wang, S.

    1984-11-01

    In this paper, the stabilities of MHD rotational discontinuities are analyzed. The results show that the rotational discontinuities in an incompressible magnetofluid are not always stable with respect to infinitesimal perturbation. The instability condition in a special case is obtained. (author)

  5. Abrupt opium discontinuation has no significant triggering effect on acute myocardial infarction.

    Science.gov (United States)

    Masoomi, Mohammad; Zare, Jahangir; Nasri, Hamidreza; Mirzazadeh, Ali; Sheikhvatan, Mehrdad

    2011-04-01

    A deleterious effect of withdrawal symptoms due to abrupt discontinuation of opium on the cardiovascular system is one of the recent interesting topics in the cardiovascular field. The current study hypothesized that the withdrawal syndrome due to discontinuing opium might be an important trigger for the appearance of acute myocardial infarction. Eighty-one opium-addicted individuals who were candidates for cardiovascular clinical evaluation and consecutively hospitalized in the coronary care unit (CCU) ward of Shafa Hospital in Kerman between January and July 2009 were included in the study and categorized in the case group, including patients experiencing withdrawal symptoms within 6-12 h after the reduced or discontinued use of opium according to the Diagnostic and Statistical Manual of Mental Disorders-revised IV version (DSM-IV-R) criteria for opium dependence and withdrawal, and the control group, without opium withdrawal symptoms. The appearance of acute myocardial infarction was compared between the two groups using multivariable regression models. Acute myocardial infarction occurred in 50.0% of those with withdrawal symptoms and in 45.1% of patients without evidence of opium withdrawal (P = 0.669). Multivariable analysis showed that opium withdrawal symptoms were not a trigger for acute myocardial infarction adjusting for demographic characteristics, marital status, education level and common coronary artery disease risk profiles [odds ratio (OR) = 0.920, 95% confidence interval (CI) = 0.350-2.419, P = 0.866]. Also, daily dose of opium before reducing or discontinuing use did not predict the appearance of myocardial infarction in the presence of confounder variables (OR = 0.975, 95% CI = 0.832-1.143, P = 0.755). Withdrawal syndrome due to abrupt discontinuation of opium does not have a triggering role for appearance of acute myocardial infarction.

  6. Shock discontinuities around the confinement-deconfinement transition in baryon-rich dense matter

    International Nuclear Information System (INIS)

    Rischke, D.H.; Waldhauser, B.M.; Stoecker, H.; Greiner, W.; Friman, B.L.

    1989-05-01

    We investigate shock discontinuities that involve a conversion of hadronic matter into quark-gluon matter and vice versa. Such discontinuities may develop when nuclear matter is compressed to energy densities beyond the deconfinement transition and in the hadronization of an expanding quark-gluon plasma. In these investigations we study the influence of various phenomenological equations of state. Consequences for entropy production in heavy-ion collisions are discussed and estimates of inclusive particle ratios at freeze-out are given. We find that antiparticle-to-particle ratios may be enhanced by an order of magnitude if a quark-gluon plasma is created during the collision compared to a purely hadronic collision scenario. (orig.)

  7. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
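
    The key trick — encoding 2-D vectors as complex numbers so that the regression coefficient is itself a rotation-plus-scaling — can be sketched with complex least squares (synthetic data, a single regressor; the paper's real-valued isomorphism and test statistics are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
# A complex coefficient encodes scale AND rotation, which a scalar
# coefficient in multivariate multiple regression cannot express.
b_true = 2.0 * np.exp(1j * np.pi / 6)   # scale 2, rotate 30 degrees
x = rng.normal(size=100) + 1j * rng.normal(size=100)
y = b_true * x + 0.01 * (rng.normal(size=100) + 1j * rng.normal(size=100))

# Complex least squares via the normal equations (conjugate transpose)
b_hat = (np.conj(x) @ y) / (np.conj(x) @ x)
```

    With several regressors, the same conjugate-transpose normal equations apply with `x` as a complex design matrix.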

  8. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter focuses on only one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression, which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression …
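
    Quantile regression rests on the check (pinball) loss, whose minimizer is the conditional quantile. A sketch for the intercept-only case, where the minimizer is simply the sample quantile (grid search is used here only for illustration):

```python
import numpy as np

def pinball_loss(tau, y, q):
    """Check (pinball) loss; its minimizer over q is the tau-quantile."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(6)
y = rng.exponential(1.0, 20001)

# Minimizing the pinball loss recovers the sample 0.9-quantile
grid = np.linspace(0.01, 5.0, 500)
losses = [pinball_loss(0.9, y, q) for q in grid]
q_hat = grid[int(np.argmin(losses))]
q_ref = np.quantile(y, 0.9)
```

    Linear quantile regression replaces the constant q with x'beta(tau) and minimizes the same loss, typically via linear programming.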

  9. Bias and efficiency loss in regression estimates due to duplicated observations: a Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Francesco Sarracino

    2017-04-01

    Recent studies have documented that survey data contain duplicate records. We assess how duplicate records affect regression estimates, and we evaluate the effectiveness of solutions to deal with them. Results show that the chances of obtaining unbiased estimates when data contain 40 doublets (about 5% of the sample) range between 3.5% and 11.5%, depending on the distribution of duplicates. If 7 quintuplets are present in the data (2% of the sample), the probability of obtaining biased estimates ranges between 11% and 20%. Weighting the duplicate records by the inverse of their multiplicity, or dropping superfluous duplicates, outperforms the other solutions in all considered scenarios. Our results illustrate the risk of using data in the presence of duplicate records and call for further research on strategies to analyze affected data.
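
    The recommended fix — weighting each duplicate by the inverse of its multiplicity — exactly reproduces the deduplicated OLS estimate, as a small simulation shows (synthetic data; "doublets" as in the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)

# Duplicate the first 40 records (doublets), then weight each copy by 1/2
xd = np.concatenate([x, x[:40]])
yd = np.concatenate([y, y[:40]])
w = np.ones(len(xd))
w[:40] = 0.5   # originals of the duplicated records
w[n:] = 0.5    # their duplicates

def wls_slope(x, y, w):
    X = np.column_stack([np.ones(len(x)), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1]

slope_clean = wls_slope(x, y, np.ones(n))     # deduplicated OLS
slope_weighted = wls_slope(xd, yd, w)         # inverse-multiplicity WLS
```

    Each duplicated observation contributes total weight 0.5 + 0.5 = 1, so the weighted normal equations coincide with those of the clean sample.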

  10. Beyond the mean estimate: a quantile regression analysis of inequalities in educational outcomes using INVALSI survey data

    Directory of Open Access Journals (Sweden)

    Antonella Costanzo

    2017-09-01

    The number of studies addressing issues of inequality in educational outcomes using cognitive achievement tests and variables from large-scale assessment data has increased. Here the value of using a quantile regression approach is compared with a classical regression analysis approach to study the relationships between educational outcomes and likely predictor variables. Italian primary school data from INVALSI large-scale assessments were analyzed using both quantile and standard regression approaches. Mathematics and reading scores were regressed on students' characteristics and geographical variables selected for their theoretical and policy relevance. The results demonstrated that, in Italy, the role of gender and immigrant status varied across the entire conditional distribution of students’ performance. Analogous results emerged pertaining to the difference in students’ performance across Italian geographic areas. These findings suggest that quantile regression analysis is a useful tool to explore the determinants and mechanisms of inequality in educational outcomes. A proper interpretation of quantile estimates may enable teachers to identify effective learning activities and help policymakers to develop tailored programs that increase equity in education.

  11. Estimating leaf photosynthetic pigments information by stepwise multiple linear regression analysis and a leaf optical model

    Science.gov (United States)

    Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei

    2014-10-01

    Leaf pigments are key elements for plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and also has difficulty capturing their temporal and spatial characteristics. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, inverse models were proposed with the aid of stepwise multiple linear regression (SMLR) analysis. Furthermore, a leaf radiative transfer model (the PROSPECT model) was employed to simulate leaf reflectance at wavelengths from 400 to 780 nm at 1 nm intervals, and these values were treated as data from remote sensing observations. Meanwhile, simulated chlorophyll concentration (Cab), carotenoid concentration (Car) and their ratio (Cab/Car) were each taken as targets to build the regression models. In this study, a total of 4000 samples were simulated via PROSPECT with different Cab, Car and leaf mesophyll structures; 70% of these samples were used for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each employed to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R2) larger than 0.8 when 6 wavebands were selected to build the SMLR model. The largest values of R2 for Cab, Car and Cab/Car are 0.8845, 0.876 and 0.8765, respectively. Meanwhile, mathematical transformations of reflectance showed little influence on regression accuracy. We conclude that it is feasible to estimate chlorophyll, carotenoids, and their ratio with a statistical model based on leaf reflectance data.
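
    Stepwise selection for an SMLR-style model can be sketched as greedy forward selection with NumPy; the criterion here is raw residual sum of squares on synthetic data, a simplification of the paper's exact procedure:

```python
import numpy as np

def forward_stepwise(X, y, max_terms=2):
    """Greedy forward selection: at each step add the predictor
    (e.g. a waveband) that most reduces the residual sum of squares."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in remaining:
            Xd = np.column_stack([np.ones(n)] + [X[:, c] for c in selected + [j]])
            beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
            rss = np.sum((y - Xd @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 6))            # 6 candidate "wavebands"
y = 3 * X[:, 2] + 1 * X[:, 4] + rng.normal(0, 0.1, 200)
chosen = forward_stepwise(X, y)          # should find predictors 2 and 4
```

    In the study this selection runs over hundreds of simulated reflectance wavebands, retaining the 6 that best explain each pigment target.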

  12. Estimating the Impact of Urbanization on Air Quality in China Using Spatial Regression Models

    OpenAIRE

    Fang, Chuanglin; Liu, Haimeng; Li, Guangdong; Sun, Dongqi; Miao, Zhuang

    2015-01-01

Urban air pollution is one of the most visible environmental problems to have accompanied China's rapid urbanization. Based on emission inventory data from 2014, gathered from 289 cities, we used Global and Local Moran's I to measure the spatial autocorrelation of Air Quality Index (AQI) values at the city level, and employed Ordinary Least Squares (OLS), Spatial Lag Model (SAR), and Geographically Weighted Regression (GWR) to quantitatively estimate the comprehensive impact and spatial variati...

  13. Management applications of discontinuity theory

    Science.gov (United States)

1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. 2. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. 3. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure, and thus understand, the resilience of ecosystems; to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe; to provide warning indicators of regime change; to help predict and understand biological invasions and extinctions; and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation. This manuscript provides insight on using discontinuity approaches to aid in managing complex ecological systems. In part

  14. Medication persistence and discontinuation of rivaroxaban and dabigatran etexilate among patients with non-valvular atrial fibrillation.

    Science.gov (United States)

    Nelson, Winnie W; Song, Xue; Thomson, Erin; Smith, David M; Coleman, Craig I; Damaraju, C V; Schein, Jeffrey R

    2015-01-01

To compare real-world persistence and discontinuation among non-valvular atrial fibrillation (NVAF) patients on rivaroxaban and dabigatran in the US. A large nationally representative US claims database was used to conduct a retrospective cohort analysis of patients with NVAF on rivaroxaban or dabigatran between October 2010 and March 2013. The index date was the date of the first prescription of rivaroxaban or dabigatran. All patients had ≥6 months of data prior to the index date and were followed until the earliest of inpatient death, end of continuous enrollment, or end of the study period. Rivaroxaban patients were matched 1:1 with dabigatran patients using propensity score matching. Cox proportional hazards models were employed to estimate the adjusted hazard ratios (aHRs) of non-persistence and discontinuation. Persistence was defined as the absence of a refill gap of ≥60 days. Discontinuation was defined as no additional refill for at least 90 days and until the end of follow-up. A total of 30,337 NVAF patients on rivaroxaban or dabigatran met the study criteria. All 7259 rivaroxaban patients were matched 1:1 to dabigatran patients. Compared with dabigatran users, rivaroxaban patients were 11% less likely to become non-persistent with therapy (aHR: 0.89, 95% CI 0.84-0.95) and 29% less likely to discontinue therapy (aHR: 0.71, 95% CI 0.66-0.77). Claims data are subject to miscoding and inaccuracies. Refill data may not fully reflect actual medication taken. Confounding may remain even after propensity score matching and additional adjustments in the model. Longer follow-up may produce more precise estimates of persistence and discontinuation. This matched cohort analysis indicated that, compared to dabigatran, rivaroxaban was associated with better persistence and lower rates of discontinuation.
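The design step behind claims-database comparisons like this one, 1:1 propensity score matching, can be sketched in plain NumPy. The logistic model, the two covariates, and the greedy nearest-neighbor matching below are hypothetical simplifications; the study's actual covariate set and its Cox models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(70, 8, n)                       # hypothetical covariates
score = rng.integers(0, 6, n).astype(float)      # e.g. a stroke-risk score
X = np.column_stack([np.ones(n), age, score])
logit = -1.0 + 0.03 * (age - 70) + 0.2 * (score - 2)
treated = rng.random(n) < 1 / (1 + np.exp(-logit))   # ~27% in the "treated" drug group

# Fit the logistic propensity model by Newton-Raphson.
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (p * (1 - p))[:, None])       # Hessian of the log-likelihood
    beta += np.linalg.solve(H, X.T @ (treated - p))
ps = 1 / (1 + np.exp(-X @ beta))                 # fitted propensity scores

# Greedy 1:1 nearest-neighbor matching on the propensity score, without replacement.
controls = list(np.where(~treated)[0])
pairs = []
for i in np.where(treated)[0]:
    j = min(controls, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)

# After matching, mean propensity scores should be nearly equal across groups.
gap = abs(np.mean([ps[i] for i, _ in pairs]) - np.mean([ps[j] for _, j in pairs]))
print(round(gap, 4))
```

Once the matched sample is built, survival models (the study's Cox regressions) are fit on the pairs rather than on the raw cohorts.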

  15. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case illustrates that
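The estimator contrast the authors study can be illustrated with a toy panel: when program enrollment is correlated with a time-invariant unobservable (say, site quality), cross-sectional OLS is biased while the fixed-effects (within) estimator recovers the true effect. All data-generating values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_years = 500, 4
alpha = rng.normal(0, 1, n_units)            # unit effect, e.g. unobserved site quality
treat = (alpha + rng.normal(0, 1, n_units) > 0).astype(float)  # enrollment selects on alpha
true_effect = 2.0

# Panel outcome: forest cover = unit effect + treatment effect (from year 2 on) + noise
y = np.empty((n_units, n_years))
d = np.empty((n_units, n_years))
for t in range(n_years):
    d[:, t] = treat * (t >= 2)               # program starts in year 2
    y[:, t] = alpha + true_effect * d[:, t] + rng.normal(0, 0.5, n_units)

# Cross-sectional OLS in the final year: biased upward by the unobservable alpha.
b_ols = np.polyfit(d[:, -1], y[:, -1], 1)[0]

# Fixed-effects (within) estimator: demean within each unit, then regress.
dd = d - d.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (dd * yd).sum() / (dd * dd).sum()

print(round(b_ols, 2), round(b_fe, 2))       # OLS inflated; FE close to 2.0
```

This is the mechanism behind the Russia result: the estimators disagree exactly when a time-invariant unobservable drives both enrollment and the outcome.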

  16. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case

  17. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors consider features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex-formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors show the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors use incomplete-rank models. In this case, it is necessary to take into account the possibility of confounded estimates. An algorithm for evaluating the degree of confounding is presented, which can be realized using standard software for regression analysis.
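The regularization trick described here, appending a fictitious set of data from an orthogonal design, can be made concrete: augmenting the least-squares system with rows √λ·I and zero targets reproduces the ridge estimate exactly. The example below is a generic sketch with invented data, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 30, 5, 0.5
X = rng.normal(size=(n, p))
X[:, 4] = X[:, 3] + 1e-6 * rng.normal(size=n)    # nearly collinear columns: ill-posed
y = X @ np.array([1.0, 0.5, -1.0, 2.0, 0.0]) + rng.normal(0, 0.1, n)

# Ridge solution in closed form.
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Same solution via ordinary least squares on the augmented ("fictitious") data:
# p extra rows sqrt(lam)*I with zero targets play the role of the orthogonal design.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
b_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print(np.allclose(b_ridge, b_aug))               # the two estimates coincide
```

The identity holds because the augmented RSS is ||y − Xb||² + λ||b||², the ridge objective.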

  18. Convergence of Discontinuous Galerkin Methods for Incompressible Two-Phase Flow in Heterogeneous Media

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2013-01-01

A class of discontinuous Galerkin methods with interior penalties is presented for incompressible two-phase flow in heterogeneous porous media with capillary pressures. The semidiscrete approximate schemes for the fully coupled system of two-phase flow are formulated. In highly heterogeneous permeable media the saturation is discontinuous due to different capillary pressures, and therefore the proposed methods incorporate the capillary pressures in the pressure equation instead of the saturation equation. By introducing a coupling approach for stability and error estimates, instead of the conventional separate analysis for pressure and saturation, the stability of the schemes in space and time and a priori hp error estimates are presented in the L2(H1) norm for pressure and in the L∞(L2) and L2(H1) norms for saturation. Two time-discretization schemes are introduced for effectively computing the discrete solutions. © 2013 Society for Industrial and Applied Mathematics.

  19. 27 CFR 478.127 - Discontinuance of business.

    Science.gov (United States)

    2010-04-01

27 Alcohol, Tobacco Products and Firearms (2010-04-01). Records, § 478.127 Discontinuance of business. Where a licensed business is discontinued and succeeded by a ... business was located: Provided, however, where State law or local ordinance requires the delivery of ...

  20. Applied Prevalence Ratio estimation with different Regression models: An example from a cross-national study on substance use research.

    Science.gov (United States)

    Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina

    2016-06-14

To examine the differences between the Prevalence Ratio (PR) and the Odds Ratio (OR) in a cross-sectional study, and to provide tools to calculate the PR using two statistical packages widely used in substance use research (Stata and R). We used cross-sectional data from 41,263 participants of 16 European countries participating in the Survey on Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were age, educational level, and country of residence. The PR of hazardous drinking in men relative to women was estimated using the Mantel-Haenszel method, log-binomial regression models, and Poisson regression models with robust variance. These estimates were compared with the OR calculated using logistic regression models. The prevalence of hazardous drinking varied among countries. Generally, men had a higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. The estimated PR was identical regardless of the method and statistical package used. However, the OR overestimated the PR to a degree that depended on the prevalence of hazardous drinking in the country. In cross-sectional studies, where comparisons are made between countries that differ in the prevalence of the disease or condition, it is advisable to use the PR instead of the OR.
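The gap between PR and OR when the outcome is common is easy to see from a single 2x2 table. The counts below are invented, chosen only so the resulting PR roughly matches the paper's headline figure of 1.43.

```python
# Hypothetical 2x2 table: hazardous drinking by gender.
a, b = 300, 700    # men: cases, non-cases
c, d = 210, 790    # women: cases, non-cases

p_men = a / (a + b)            # prevalence among men   = 0.30
p_women = c / (c + d)          # prevalence among women = 0.21

pr = p_men / p_women           # prevalence ratio
odds_ratio = (a / b) / (c / d) # odds ratio

print(round(pr, 2), round(odds_ratio, 2))   # OR exceeds PR because the outcome is common
```

With a rare outcome the two measures nearly coincide; at prevalences like these the OR is visibly inflated relative to the PR, which is the paper's warning.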

  1. Use of instantaneous streamflow measurements to improve regression estimates of index flow for the summer month of lowest streamflow in Michigan

    Science.gov (United States)

    Holtschlag, David J.

    2011-01-01

In Michigan, index flow Q50 is a streamflow characteristic defined as the minimum of median flows for July, August, and September. The state of Michigan uses index flow estimates to help regulate large (greater than 100,000 gallons per day) water withdrawals to prevent adverse effects on characteristic fish populations. At sites where long-term streamgages are located, index flows are computed directly from continuous streamflow records as GageQ50. In an earlier study, a multiple-regression equation was developed to estimate index flows IndxQ50 at ungaged sites. The index equation explains about 94 percent of the variability of index flows at 147 (index) streamgages by use of six explanatory variables describing soil type, aquifer transmissivity, land cover, and precipitation characteristics. This report extends the results of the previous study, by use of Monte Carlo simulations, to evaluate alternative flow estimators, DiscQ50, IntgQ50, SiteQ50, and AugmQ50. The Monte Carlo simulations treated each of the available index streamgages, in turn, as a miscellaneous site where streamflow conditions are described by one or more instantaneous measurements of flow. In the simulations, instantaneous flows were approximated by daily mean flows at the corresponding site. All estimators use information that can be obtained from instantaneous flow measurements and contemporaneous daily mean flow data from nearby long-term streamgages. The efficacy of these estimators was evaluated over a set of measurement intensities in which the number of simulated instantaneous flow measurements ranged from 1 to 100 at a site. The discrete measurement estimator DiscQ50 is based on a simple linear regression developed between information on daily mean flows at five or more streamgages near the miscellaneous site and their corresponding GageQ50 index flows. The regression relation then was used to compute a DiscQ50 estimate at the miscellaneous site by use of the simulated instantaneous flow
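The discrete-measurement estimator rests on a simple linear regression between daily mean flows at nearby streamgages and their known index flows, applied to an instantaneous measurement at the miscellaneous site. The sketch below uses a handful of invented flows; a real application would follow the report's procedure with actual gauge records.

```python
import numpy as np

rng = np.random.default_rng(4)
daily_flow = rng.uniform(20, 200, 8)                 # daily mean flows at 8 nearby gauges (cfs)
gage_q50 = 0.6 * daily_flow + rng.normal(0, 3, 8)    # their known GageQ50 index flows

# Fit a simple linear regression of index flow on concurrent daily mean flow.
slope, intercept = np.polyfit(daily_flow, gage_q50, 1)

# Apply the relation to an instantaneous measurement at the miscellaneous site.
measured = 85.0                                      # instantaneous flow at the site (cfs)
disc_q50 = slope * measured + intercept
print(round(disc_q50, 1))
```

More instantaneous measurements at the site (the report tests 1 to 100) tighten the resulting estimate by averaging over measurement conditions.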

  2. Species Composition at the Sub-Meter Level in Discontinuous Permafrost in Subarctic Sweden

    Science.gov (United States)

    Anderson, S. M.; Palace, M. W.; Layne, M.; Varner, R. K.; Crill, P. M.

    2013-12-01

Northern latitudes are experiencing rapid warming. Wetlands underlain by permafrost are particularly vulnerable to warming, which results in changes in vegetative cover. Specific species have been associated with greenhouse gas emissions; knowledge of shifts in species composition therefore allows systematic quantification of emissions and of changes in those emissions. Species composition varies at the sub-meter scale with topography and other microsite environmental parameters. This complexity, and the need to scale vegetation to the landscape level, is vital to our estimation of carbon dioxide (CO2) and methane (CH4) emissions and dynamics. Stordalen Mire (68°21'N, 18°49'E), in Abisko, is located at the edge of the discontinuous permafrost zone, providing a unique opportunity to analyze multiple vegetation communities in close proximity. To do this, we randomly selected 25 1x1 m plots representative of five major cover types: semi-wet, wet, hummock, tall graminoid, and tall shrub. We used a quadrat with 64 subplots and measured areal percent cover for 24 species. We collected ground-based remote sensing (RS) imagery at each plot to determine species composition, using an ADC-lite (near-infrared, red, green) and a GoPro (red, blue, green). We normalized each image against a Teflon white chip placed in the image. Textural analysis was conducted on each image for entropy, angular second momentum, and lacunarity. A logistic regression was developed to examine vegetation cover types against remote sensing parameters, and a multiple linear regression was fit using forward stepwise variable selection. We found statistically significant differences in species composition and diversity indices between vegetation cover types. In addition, we were able to build regression models that significantly estimated vegetation cover type as well as percent cover for specific key vegetative species. This ground-based remote sensing allows for quick quantification of vegetation

  3. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    International Nuclear Information System (INIS)

    Urrutia, J D; Bautista, L A; Baccay, E B

    2014-01-01

The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, and affected families, as well as the total cost of damage. Regression models were constructed to quantify the direct damage from earthquakes to human beings and property given the magnitude, intensity, depth of focus, location of epicentre, and time duration. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines from 1968 to 2012, inclusive. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology; data on damage and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, an earthquake-prone country.
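Regression analysis "using matrices" reduces to the normal equations, β = (XᵀX)⁻¹Xᵀy. A minimal sketch with invented predictor values (magnitude, intensity, depth of focus) follows; it shows the mechanics only, not the paper's fitted models.

```python
import numpy as np

# Columns: intercept, magnitude, intensity, depth of focus (km). Values invented.
X = np.array([
    [1.0, 7.8, 8.0, 25.0],
    [1.0, 6.5, 7.0, 15.0],
    [1.0, 7.1, 7.0, 33.0],
    [1.0, 5.9, 6.0, 10.0],
    [1.0, 6.9, 8.0, 40.0],
])
y = np.array([2400.0, 300.0, 900.0, 100.0, 650.0])   # e.g. casualty counts

# Solve the normal equations X'X beta = X'y for the coefficient vector.
beta = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta
print(np.round(beta, 2))
```

A defining property of the least-squares fit is that the residuals are orthogonal to every column of X, which is a convenient correctness check on any matrix implementation.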

  4. Regression models to estimate real-time concentrations of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-07

    Science.gov (United States)

    Oden, Timothy D.; Asquith, William H.; Milburn, Matthew S.

    2009-01-01

    In December 2005, the U.S. Geological Survey in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (total coliform and Escherichia coli), atrazine, and suspended sediment at two U.S. Geological Survey streamflow-gaging stations upstream from Lake Houston near Houston (08068500 Spring Creek near Spring, Texas, and 08070200 East Fork San Jacinto River near New Caney, Texas). The data from the discrete water-quality samples collected during 2005-07, in conjunction with monitored real-time data already being collected - physical properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), streamflow, and rainfall - were used to develop regression models for predicting water-quality constituent concentrations for inflows to Lake Houston. Rainfall data were obtained from a rain gage monitored by Harris County Homeland Security and Emergency Management and colocated with the Spring Creek station. The leaps and bounds algorithm was used to find the best subsets of possible regression models (minimum residual sum of squares for a given number of variables). The potential explanatory or predictive variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, rainfall, and time (to account for seasonal variations inherent in some water-quality data). The response variables at each site were nitrite plus nitrate nitrogen, total phosphorus, organic carbon, Escherichia coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities as a means to estimate concentrations of the various constituents under investigation, with accompanying estimates of measurement uncertainty. Each regression equation can be used to estimate concentrations of a given constituent in real time. In conjunction with estimated concentrations, constituent loads were estimated by multiplying the
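The leaps-and-bounds step, finding the minimum-RSS subset for each model size, can be approximated by exhaustive search when the candidate pool is small. The sketch below uses synthetic data; the variable names echo the abstract's explanatory variables but carry no real values.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
names = ["discharge", "spec_cond", "pH", "temp", "turbidity", "DO", "rain"]
X = rng.normal(size=(200, len(names)))
# Suppose the E. coli response depends mainly on turbidity and discharge:
y = 3.0 * X[:, 4] + 1.5 * X[:, 0] + rng.normal(0, 0.5, 200)

def rss(cols):
    """Residual sum of squares for an intercept model on the given columns."""
    A = np.column_stack([np.ones(len(y)), X[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

# Best subset of each size: minimum RSS for a given number of variables.
best = {}
for k in (1, 2, 3):
    best[k] = min(itertools.combinations(range(len(names)), k), key=rss)

print([tuple(names[j] for j in best[k]) for k in (1, 2, 3)])
```

With seven candidates this is only 63 fits; the leaps-and-bounds algorithm exists precisely because the count explodes as the candidate pool grows.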

  5. Discontinuity formulas for multiparticle amplitudes

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1976-03-01

    It is shown how discontinuity formulas for multiparticle scattering amplitudes are derived from unitarity and analyticity. The assumed analyticity property is the normal analytic structure, which was shown to be equivalent to the space-time macrocausality condition. The discontinuity formulas to be derived are the basis of multi-particle fixed-t dispersion relations

  6. Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states

    Science.gov (United States)

    de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.

    2015-12-01

    Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.

  7. Genetic Parameters for Body condition score, Body weight, Milk yield and Fertility estimated using random regression models

    NARCIS (Netherlands)

    Berry, D.P.; Buckley, F.; Dillon, P.; Evans, R.D.; Rath, M.; Veerkamp, R.F.

    2003-01-01

    Genetic (co)variances between body condition score (BCS), body weight (BW), milk yield, and fertility were estimated using a random regression animal model extended to multivariate analysis. The data analyzed included 81,313 BCS observations, 91,937 BW observations, and 100,458 milk test-day yields

  8. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

Scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  9. Regge calculus from discontinuous metrics

    International Nuclear Information System (INIS)

    Khatsymovsky, V.M.

    2003-01-01

Regge calculus is considered as a particular case of a more general system in which the linklengths of two neighbouring 4-tetrahedra do not necessarily coincide on their common face. This system is treated as one described by a metric that is discontinuous on the faces. In the superspace of all discontinuous metrics, the Regge calculus metrics form a hypersurface defined by continuity conditions. Quantum theory of the discontinuous-metric system is assumed to be fixed somehow in the form of a quantum measure on (the space of functionals on) the superspace. The problem of reducing this measure to the Regge hypersurface is addressed. The quantum Regge calculus measure is defined from a discontinuous-metric measure by inserting a δ-function-like phase factor. The requirement that continuity conditions be imposed in a 'face-independent' way fixes this factor uniquely. The term 'face-independent' means that the factor depends only on the (hyper)plane spanned by the face, not on its form and size. This requirement seems natural from the viewpoint of the existence of a well-defined continuum limit maximally free of lattice artefacts.

  10. Excursions in fluvial (dis)continuity

    Science.gov (United States)

    Grant, Gordon E.; O'Connor, James E.; Safran, Elizabeth

    2017-01-01

Lurking below the twin concepts of connectivity and disconnectivity are their first, and in some ways, richer cousins: continuity and discontinuity. In this paper we explore how continuity and discontinuity represent fundamental and complementary perspectives in fluvial geomorphology, and how these perspectives inform and underlie our conceptions of connectivity in landscapes and rivers. We examine the historical roots of continuum and discontinuum thinking, and how much of our understanding of geomorphology rests on contrasting views of continuity and discontinuity. By continuum thinking we refer to a conception of geomorphic processes as well as geomorphic features that are expressed along continuous gradients without abrupt changes, transitions, or thresholds. Balance of forces, graded streams, and hydraulic geometry are all examples of this perspective. The continuum view has played a prominent role in diverse disciplinary fields, including ecology, paleontology, and evolutionary biology, in large part because it allows us to treat complex phenomena as orderly progressions and invoke or assume equilibrium processes that introduce order and prediction into our sciences. In contrast, the discontinuous view is a distinct though complementary conceptual framework that incorporates non-uniform, non-progressive, and non-equilibrium thinking into understanding geomorphic processes and landscapes. We distinguish and discuss examples of three different ways in which discontinuous thinking can be expressed: 1) discontinuous spatial arrangements or singular events; 2) specific process domains generally associated with thresholds, either intrinsic or extrinsic; and 3) physical dynamics or changes in state, again often threshold-linked. In moving beyond the continuous perspective, a fertile set of ideas comes into focus: thresholds, non-equilibrium states, heterogeneity, catastrophe. 
The range of phenomena that is thereby opened up to scientific exploration similarly expands

  11. Green's function approach to neutron flux discontinuities

    International Nuclear Information System (INIS)

    Saad, E.A.; El-Wakil, S.A.

    1980-01-01

The present work is devoted to an analytical method for the calculation of elastically and inelastically slowed-down neutrons in an infinite non-absorbing medium. On the basis of the central limit theorem (CLT) and the integral transform technique, the slowing-down equation including inelastic scattering is solved in terms of the Green function of elastic scattering. The Green function is decomposed according to the number of collisions. The Placzek discontinuity associated with elastic scattering, in addition to two discontinuities due to inelastic scattering, is investigated. Numerical calculations for 56Fe show that the elastic discontinuity produces about a 41.8% change in the collision density, whilst the ratio of the inelastic collision-density discontinuity at q₀⁺ to the Placzek discontinuity at u₀ + ln(1/α) gives a 55.7% change. (author)

  12. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models appropriate for the proposed applications is also provided. This book also: explores questions related to the biological underpinning of models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise-constant-argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  13. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case

  14. Delirium Associated With Fluoxetine Discontinuation: A Case Report.

    Science.gov (United States)

    Fan, Kuang-Yuan; Liu, Hsing-Cheng

Withdrawal symptoms after selective serotonin reuptake inhibitor (SSRI) discontinuation have drawn increasing clinical attention. However, delirium is rarely reported as part of the SSRI discontinuation syndrome. We report a case of delirium developing after fluoxetine discontinuation in a 65-year-old female patient with major depressive disorder. She experienced psychotic depression with limited response to treatment with fluoxetine 40 mg/d and quetiapine 100 mg/d for 3 months. After admission, we tapered fluoxetine gradually over 5 days because of its limited effect. However, delirious features developed 2 days after we stopped fluoxetine. Three days later, we added back fluoxetine 10 mg/d. Her delirious features gradually improved, and the clinical presentation returned to the previous psychotic depression state. We gradually increased the medication to fluoxetine 60 mg/d and olanzapine 20 mg/d over the following 3 weeks. Her psychotic symptoms decreased, and no delirium has been noted thereafter. Delirium is a rare complication of SSRI discontinuation syndrome. The symptoms of SSRI discontinuation syndrome may be attributable to a rapid decrease in serotonin availability. In general, the shorter the half-life of a medication, the greater the likelihood that patients will experience discontinuation symptoms. Genetic vulnerability might explain why SSRI discontinuation syndrome can also develop rapidly in patients taking long-half-life fluoxetine. Genetic polymorphisms in both pharmacokinetic and pharmacodynamic pathways might be associated with SSRI discontinuation syndrome.

  15. Estimating Penetration Resistance in Agricultural Soils of Ardabil Plain Using Artificial Neural Network and Regression Methods

    Directory of Open Access Journals (Sweden)

    Gholam Reza Sheykhzadeh

    2017-02-01

Full Text Available Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties and processes such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of its high temporal and spatial variability. Therefore, many regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are among the main potato production regions of Iran; thus, estimating soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions, using regression and artificial neural networks, to predict penetration resistance from some soil variables in the agricultural soils of the Ardabil plain, and to compare the performance of the artificial neural network with the regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were taken systematically from the 0-10 cm soil depth, at roughly 3000 m spacing, in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp; pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water contents (gravimetric method) were measured in the laboratory. Mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed from the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10

  16. Seismological evidence of the Hales discontinuity in northeast India

    Science.gov (United States)

    Anand, Aakash; Bora, Dipok K.; Borah, Kajaljyoti; Madhab Borgohain, Jayanta

    2018-04-01

The crust and upper mantle shear wave velocity structure beneath northeast India is estimated by joint inversion of Rayleigh wave group velocities and receiver functions, calculated from teleseismic earthquake data recorded at nine broadband seismic stations. The Assam valley and the Shillong-Mikir plateau are the two important tectonic blocks in northeast India; they are surrounded by the Himalayan collision zone in the north, the Indo-Burma subduction zone in the east and the Bengal basin in the south. The joint inversion followed by forward modeling reveals crustal thicknesses of 30-34 km beneath the Shillong plateau, 36 km beneath the Mikir hills and 38-40 km beneath the Assam valley, with an average shear wave velocity (Vs) of 3.4-3.5 km/s. The estimated low upper mantle shear wave velocity (Vsn) of 4.2-4.3 km/s may be due to rock composition, grain size, or increased temperature and partial melt (<1%) in the upper mantle, or a combination of these. We also report, for the first time, the existence of the Hales discontinuity at depths of 56-74 km with Vs ∼4.4-4.6 km/s. The variable depth of the Hales discontinuity may be explained by the geotherm and/or the addition of Cr3+ and Fe2+ in the spinel-garnet system.

  17. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

A central issue in contemporary science is the development of nonlinear data-driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
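The filtering idea in this abstract can be illustrated with a minimal stochastic ensemble Kalman filter (perturbed observations) on a scalar linear-Gaussian toy model. This is a generic textbook sketch, not the paper's physics-constrained, parameter-estimating algorithm; all model constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: x_{k+1} = a*x_k + w_k, observed as y_k = x_k + v_k.
a, Q, R = 0.9, 0.1, 0.5
n_steps, n_ens = 200, 50

# Simulate a truth trajectory and noisy observations.
x_true = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = a * x_true[k - 1] + rng.normal(0, np.sqrt(Q))
y_obs = x_true + rng.normal(0, np.sqrt(R), n_steps)

# Stochastic ensemble Kalman filter with perturbed observations.
ens = rng.normal(0, 1, n_ens)          # initial ensemble
x_est = np.zeros(n_steps)
for k in range(n_steps):
    if k > 0:
        # Forecast step: propagate each member with process noise.
        ens = a * ens + rng.normal(0, np.sqrt(Q), n_ens)
    # Analysis step: Kalman gain from the ensemble variance.
    P = np.var(ens, ddof=1)
    K = P / (P + R)
    perturbed_obs = y_obs[k] + rng.normal(0, np.sqrt(R), n_ens)
    ens = ens + K * (perturbed_obs - ens)
    x_est[k] = ens.mean()

rmse_filter = np.sqrt(np.mean((x_est - x_true) ** 2))
rmse_obs = np.sqrt(np.mean((y_obs - x_true) ** 2))
```

The filtered estimate should track the truth more closely than the raw observations, which is the basic sanity check for any such filter.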

  18. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
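The three-step procedure described above (jitter the counts, logit-transform to the bounds, fit linear quantile regression, then average over jitter repetitions and back-transform) can be sketched in numpy. The simulated bounded counts, the bounds (0, 4), and the crude IRLS quantile-regression solver are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated bounded counts in {0,...,3} whose distribution shifts with x
# (a stand-in for fledgling counts versus a climate covariate).
n = 2000
x = rng.uniform(-1, 1, n)
p = 1 / (1 + np.exp(-1.5 * x))
y = rng.binomial(3, p)

lo, hi = 0.0, 4.0   # bounds for the jittered count y + u, u ~ Uniform(0, 1)
tau = 0.5           # quantile to estimate
n_rep = 20          # jitter repetitions to average over

def fit_quantile(xv, zv, tau, n_iter=30, eps=1e-4):
    """Crude IRLS approximation to linear quantile regression."""
    X = np.column_stack([np.ones_like(xv), xv])
    b = np.linalg.lstsq(X, zv, rcond=None)[0]
    for _ in range(n_iter):
        r = zv - X @ b
        w = np.where(r >= 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        b = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * zv))
    return b

coefs = []
for _ in range(n_rep):
    t_jit = y + rng.uniform(0, 1, n)            # step 1: jitter to continuous
    z = np.log((t_jit - lo) / (hi - t_jit))     # step 2: logit to (lo, hi)
    coefs.append(fit_quantile(x, z, tau))       # step 3: linear QR on z
beta = np.mean(coefs, axis=0)                   # average across jitters

# Back-transform a fitted quantile to the bounded original scale
# (quantiles are equivariant to monotone transformations).
q_hat = lo + (hi - lo) / (1 + np.exp(-(beta[0] + beta[1] * 0.8)))
```

Because counts rise with x in the simulation, the fitted median slope on the logit scale is positive, and the back-transformed quantile stays strictly inside the bounds.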

  19. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, achieving speed by eliminating irrelevant features early.
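A hedged sketch of the joint optimization the abstract describes, for one common form of the concomitant objective (1/(2*n*sigma))*||y - X@b||^2 + sigma/2 + lam*||b||_1 with sigma floored at a small constant (the "smoothing"): alternate a lasso step for the regression vector, whose effective penalty scales with the current noise level, with a closed-form noise-level update. The coordinate-descent lasso, the constants, and the simulated data are illustrative, not the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground truth: only 3 of 30 features are active.
n, p = 200, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
sigma_true = 0.5
y = X @ beta_true + rng.normal(0, sigma_true, n)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Plain cyclic coordinate descent for (1/2n)||y-Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                       # full residual (b starts at zero)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]        # partial residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

# Alternate between the regression vector and the noise level, flooring
# sigma at sigma_min for numerical stability (the "smoothing" idea).
lam0, sigma_min = 0.1, 1e-3
sigma = 1.0
for _ in range(20):
    beta_hat = lasso_cd(X, y, lam0 * sigma)    # lasso with scaled penalty
    resid = y - X @ beta_hat
    sigma = max(sigma_min, np.sqrt(resid @ resid / n))
```

The alternating scheme recovers both a sparse coefficient vector and a noise-level estimate close to the simulated sigma.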

  20. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.
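A minimal EM sketch for a fully parametric two-component mixture of linear regressions conveys the flavor of the estimation procedure; the paper's actual method is nonparametric (kernel-based), so this is a simplified stand-in on simulated data with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two latent regimes with different regression lines, mixed 50/50.
n = 1500
x = rng.uniform(0, 1, n)
z = rng.random(n) < 0.5
y = np.where(z, 1.0 + 2.0 * x, 4.0 - 2.0 * x) + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), x])

def normal_pdf(r, s):
    return np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Rough initial guesses for mixing weights, lines, and noise scales.
pi = np.array([0.5, 0.5])
B = np.array([[1.0, 1.0], [4.0, -1.0]])   # rows: (intercept, slope)
s = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior membership probabilities for each observation.
    dens = np.stack([pi[k] * normal_pdf(y - X @ B[k], s[k]) for k in range(2)])
    w = dens / dens.sum(axis=0)
    # M-step: weighted least squares and weighted noise scale per component.
    for k in range(2):
        W = w[k]
        B[k] = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * y))
        r = y - X @ B[k]
        s[k] = np.sqrt((W * r ** 2).sum() / W.sum())
    pi = w.mean(axis=1)

slopes = np.sort(B[:, 1])   # should approach the true slopes -2 and +2
```

With well-separated components, the EM iterations recover both slopes and near-equal mixing weights.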

  1. General practitioners' decisions about discontinuation of medication: an explorative study.

    Science.gov (United States)

    Nixon, Michael Simon; Vendelø, Morten Thanning

    2016-06-20

Purpose - The purpose of this paper is to investigate how general practitioners' (GPs') decisions about discontinuation of medication are influenced by their institutional context. Design/methodology/approach - In total, 24 GPs were interviewed, three practices were observed and documents were collected. The Gioia methodology was used to analyse data, drawing on a theoretical framework that integrates the sensemaking perspective and institutional theory. Findings - Most GPs who actively consider discontinuation are reluctant to discontinue medication, because the safest course of action for GPs is to continue prescriptions rather than discontinue them. The authors conclude that this is partly due to the ambiguity about the appropriateness of discontinuing medication experienced by the GPs, and partly because the clinical guidelines do not encourage discontinuation of medication, as they offer GPs only a weak frame for discontinuation. Three reasons for this are identified: the guidelines provide dominating triggers for prescribing, they provide weak priming for discontinuation as an option, and they underscore a cognitive constraint against discontinuation. Originality/value - The analysis offers new insights into decision making when discontinuing medication. It also offers one of the first examinations of how the institutional context embedding GPs influences their decisions about discontinuation. For policymakers interested in the discontinuation of medication, the findings suggest that de-stigmatising discontinuation at an institutional level may be beneficial, allowing GPs to better justify discontinuation in light of the ambiguity they experience.

  2. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution; thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
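The distribution-shrinkage issue the abstract highlights can be demonstrated in a few lines: with replicate measurements, the calibrated exposure E[X | W] is pulled toward the mean, so its variance and extreme percentiles understate those of the true exposure, and quantile cutpoints based on it are distorted. The simulation parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# True exposure and two replicate error-prone measurements per subject.
n, n_rep = 20000, 2
x = rng.normal(0, 1, n)
w = x[:, None] + rng.normal(0, 1.0, (n, n_rep))
w_bar = w.mean(axis=1)

# Estimate the measurement-error variance from within-person replicates.
s2_u = np.mean((w[:, 0] - w[:, 1]) ** 2) / 2     # per-measurement error var
var_x_hat = np.var(w_bar) - s2_u / n_rep         # between-person variance
lam = var_x_hat / np.var(w_bar)                  # reliability of w_bar

# Regression calibration: E[X | w_bar] shrinks w_bar toward the mean.
x_cal = w_bar.mean() + lam * (w_bar - w_bar.mean())
```

The calibrated exposure has smaller spread than the true exposure, so, for example, its 90th percentile sits well below that of the truth, which is exactly why categorizing it into quintiles can misclassify subjects.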

  3. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.

  4. Discontinuous Galerkin Method for Hyperbolic Conservation Laws

    KAUST Repository

    Mousikou, Ioanna

    2016-11-11

    Hyperbolic conservation laws form a special class of partial differential equations. They describe phenomena that involve conserved quantities and their solutions show discontinuities which reflect the formation of shock waves. We consider one-dimensional systems of hyperbolic conservation laws and produce approximations using finite difference, finite volume and finite element methods. Due to stability issues of classical finite element methods for hyperbolic conservation laws, we study the discontinuous Galerkin method, which was recently introduced. The method involves completely discontinuous basis functions across each element and it can be considered as a combination of finite volume and finite element methods. We illustrate the implementation of discontinuous Galerkin method using Legendre polynomials, in case of scalar equations and in case of quasi-linear systems, and we review important theoretical results about stability and convergence of the method. The applications of finite volume and discontinuous Galerkin methods to linear and non-linear scalar equations, as well as to the system of elastodynamics, are exhibited.
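The "combination of finite volume and finite element methods" can be made concrete with the lowest-order case: piecewise-constant (P0) discontinuous Galerkin elements with an upwind numerical flux reduce exactly to the first-order finite volume scheme. A minimal sketch for linear advection, with illustrative parameters not taken from the thesis:

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on [0, 1) with periodic boundaries.
# P0 discontinuous Galerkin with an upwind flux == first-order finite volume.
a, n_cells = 1.0, 200
dx = 1.0 / n_cells
dt = 0.5 * dx / a                        # CFL number 0.5
xc = (np.arange(n_cells) + 0.5) * dx     # cell centres
u = np.exp(-200.0 * (xc - 0.3) ** 2)     # smooth initial bump at x = 0.3
mass0 = u.sum() * dx

n_steps = 100                            # advance to t = n_steps * dt = 0.25
for _ in range(n_steps):
    # Upwind flux: for a > 0 each cell face takes the left cell's value,
    # so the update telescopes and conserves total mass exactly.
    u = u - (a * dt / dx) * (u - np.roll(u, 1))

mass = u.sum() * dx
peak = xc[np.argmax(u)]                  # bump should have advected to x ~ 0.55
```

Conservation holds to rounding error, and the bump travels at the advection speed, although the first-order flux smears it, which is the usual motivation for the higher-order DG bases the thesis studies.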

  6. Integrating address geocoding, land use regression, and spatiotemporal geostatistical estimation for groundwater tetrachloroethylene.

    Science.gov (United States)

    Messier, Kyle P; Akita, Yasuyuki; Serre, Marc L

    2012-03-06

Geographic information systems (GIS) based techniques are cost-effective and efficient methods used by state agencies and epidemiology researchers for estimating concentration and exposure. However, budget limitations have made statewide assessments of contamination difficult, especially in groundwater media. Many studies have implemented address geocoding, land use regression, and geostatistics independently, but this is the first to examine the benefits of integrating these GIS techniques to address the need for statewide exposure assessments. A novel framework for concentration exposure is introduced that integrates address geocoding, land use regression (LUR), below-detection-limit data modeling, and Bayesian Maximum Entropy (BME). A LUR model was developed for tetrachloroethylene (PCE) that accounts for point sources and flow direction. We then integrate the LUR model into the BME method as a mean trend while also modeling below-detection data as a truncated Gaussian probability distribution function. We increase the available PCE data 4.7-fold over previously available databases through multistage geocoding. The LUR model shows significant influence of dry cleaners at short ranges. The integration of the LUR model as mean trend in BME results in a 7.5% decrease in cross-validation mean square error compared to BME with a constant mean trend.

  7. The structure of rotational discontinuities

    International Nuclear Information System (INIS)

    Neugebauer, M.

    1989-01-01

This study examines the structures of a set of rotational discontinuities detected in the solar wind by the ISEE-3 spacecraft. It is found that the complexity of the structure increases as the angle θ between the propagation vector k and the magnetic field decreases. For rotational discontinuities that propagate at a large angle to the field with an ion (left-hand) sense of rotation, the magnetic hodograms tend to be flattened, in agreement with prior numerical simulations. When θ is large, angular overshoots are often observed at one or both ends of the discontinuity. When the propagation is nearly parallel to the field (i.e., when θ is small), many different types of structure are seen, ranging from straight lines and S-shaped curves to complex, disorganized shapes.

  8. Regression analysis and transfer function in estimating the parameters of central pulse waves from brachial pulse wave.

    Science.gov (United States)

    Chai Rui; Li Si-Man; Xu Li-Sheng; Yao Yang; Hao Li-Ling

    2017-07-01

This study analyzed parameters of the central pulse wave measured invasively and non-invasively, including ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO) and peripheral resistance (RS). The parameters extracted from the invasively measured central pulse wave were compared with parameters estimated from the brachial pulse waves by a regression model and by a transfer function model, and the accuracy of the two models was compared. Our findings showed that, apart from the k value, the above parameters of the invasively measured central and brachial pulse waves were positively correlated. Both the regression model parameters, including A_slope, DBP and SEVR, and the transfer function model parameters showed good consistency with the invasively measured parameters, with similar levels of agreement. The regression equations of the three parameters were of the form Y' = a + bx. The SBP, PP, SV and CO of the central pulse wave could be calculated with the regression model, but less accurately than with the transfer function model.

  9. Discontinuation Decision in Assisted Reproductive Techniques

    Directory of Open Access Journals (Sweden)

    Ashraf Moini

    2009-01-01

Full Text Available Background: In vitro fertilization (IVF) and intra-cytoplasmic sperm injection (ICSI) are recognized as established and increasingly successful forms of treatment for infertility, yet significant numbers of couples discontinue treatment without achieving a live birth. This study aims to identify major factors that influence the decision to discontinue IVF/ICSI treatments. Materials and Methods: We studied the data of 338 couples who discontinued their infertility treatments after three cycles, based on medical records and phone contact. The main measure was the reason for stopping treatment. Results: Economic problems were cited by 212 couples (62.7%), whose mean income was significantly lower than that of other couples (p<0.0001). Lack of success was reported as a reason by 229 (67.8%), of whom 165 (72%) also had economic problems. Achieving an ART-independent pregnancy was the reason for discontinuation in 20 (5.9%) couples. Psychological stress, depression and anxiety were reported as other cessation factors by 169 (50%), 148 (43.8%) and 182 (53.8%) couples, respectively. Conclusion: This survey suggests that the most common reasons for assisted reproductive technique (ART) discontinuation after three cycles are prior unsuccessful cycles and economic and psychological problems. Therefore, a substantial proportion of couples could benefit from psychological intervention, increased awareness of ART outcomes and health funding to cope more adequately with failed treatments.

  10. Functional data analysis of generalized regression quantiles

    KAUST Repository

Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

    2013-01-01

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.
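The "iterative least asymmetrically weighted squares" step can be sketched for a single expectile with a linear basis. The paper's method jointly estimates a whole family of generalized quantiles via penalized splines and principal components, so this one-curve version on simulated heteroscedastic data is only an illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Heteroscedastic data: the noise scale grows with x, so upper and lower
# expectile lines fan out around the mean line.
n = 3000
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + (0.2 + 0.8 * x) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def expectile_fit(X, y, tau, n_iter=50):
    """Iterative asymmetrically weighted least squares for one expectile."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]     # start from OLS
    for _ in range(n_iter):
        # Positive residuals get weight tau, negative ones 1 - tau.
        w = np.where(y - X @ b >= 0, tau, 1 - tau)
        b = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return b

b_lo = expectile_fit(X, y, 0.1)   # 0.1-expectile line
b_hi = expectile_fit(X, y, 0.9)   # 0.9-expectile line
```

Because the spread increases with x, the upper expectile line has a visibly steeper slope than the lower one, which is the "volatility" signal the paper extracts from temperature records.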

  12. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. A robust linear autoregressive (RAR) and a robust neural network (RNN) model are estimated to study the predictabil...
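As a hedged illustration of outlier-robust fitting, the sketch below uses Huber M-estimation via iteratively reweighted least squares as a simple stand-in (it is not the paper's S-estimation method, which optimizes a robust scale directly), and compares it with ordinary least squares on contaminated simulated data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear data with a handful of gross outliers.
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[:10] += 15.0                      # contaminate 10 observations
X = np.column_stack([np.ones(n), x])

def huber_irls(X, y, c=1.345, n_iter=50):
    """IRLS for Huber M-estimation (illustrative stand-in for S-estimation)."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ b
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)                   # downweight outliers
        b = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return b

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_rob = huber_irls(X, y)
```

The contaminated points drag the OLS intercept upward, while the robust fit stays close to the true line, which is the behavior motivating robust forecasting models.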

  13. Clinician support and psychosocial risk factors associated with breastfeeding discontinuation.

    Science.gov (United States)

    Taveras, Elsie M; Capra, Angela M; Braveman, Paula A; Jensvold, Nancy G; Escobar, Gabriel J; Lieu, Tracy A

    2003-07-01

Breastfeeding rates fall short of goals set in Healthy People 2010 and other national recommendations. The current national breastfeeding continuation rate of 29% at 6 months lags behind the Healthy People 2010 goal of 50%. The objective of this study was to evaluate associations between breastfeeding discontinuation at 2 and 12 weeks postpartum and clinician support, maternal physical and mental health status, workplace issues, and other factors amenable to intervention. A prospective cohort study was conducted of low-risk mothers and infants who were in a health maintenance organization and enrolled in a randomized, controlled trial of home visits. Mothers were interviewed in person at 1 to 2 days postpartum and by telephone at 2 and 12 weeks. Logistic regression modeling was performed to assess the independent effects of the predictors of interest, adjusting for sociodemographic and other confounding variables. Of the 1163 mother-newborn pairs in the cohort, 1007 (87%) initiated breastfeeding, 872 (75%) were breastfeeding at the 2-week interview, and 646 (55%) were breastfeeding at the 12-week interview. In the final multivariate models, breastfeeding discontinuation at 2 weeks was associated with lack of confidence in ability to breastfeed at the 1- to 2-day interview (odds ratio [OR]: 2.8; 95% confidence interval [CI]: 1.02-7.6), early breastfeeding problems (OR: 1.5; 95% CI: 1.1-1.97), Asian race/ethnicity (OR: 2.6; 95% CI: 1.1-5.7), and lower maternal education (OR: 1.5; 95% CI: 1.2-1.9). Mothers were much less likely to discontinue breastfeeding at 12 weeks postpartum if they reported (during the 12-week interview) having received encouragement from their clinician to breastfeed (OR: 0.6; 95% CI: 0.4-0.8). Breastfeeding discontinuation at 12 weeks was also associated with demographic factors and maternal depressive symptoms (OR: 1.18; 95% CI: 1.01-1.37) and returning to work or school by 12 weeks postpartum (OR: 2.4; 95% CI: 1.8-3.3). Our results indicate
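The kind of logistic regression modeling described above, reporting adjusted odds ratios with Wald 95% confidence intervals, can be sketched on synthetic data. The covariate names, effect sizes, and cohort below are invented for illustration and do not reproduce the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic cohort: discontinuation risk raised by one binary exposure,
# adjusted for a second covariate (illustration only, not the study data).
n = 5000
no_confidence = rng.random(n) < 0.3
lower_educ = rng.random(n) < 0.4
logit = -1.0 + np.log(2.8) * no_confidence + np.log(1.5) * lower_educ
p = 1 / (1 + np.exp(-logit))
stopped = rng.random(n) < p

X = np.column_stack([np.ones(n), no_confidence, lower_educ]).astype(float)

# Newton-Raphson for the logistic maximum likelihood estimate.
b = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ b))
    W = mu * (1 - mu)
    H = X.T @ (X * W[:, None])          # observed information
    b += np.linalg.solve(H, X.T @ (stopped - mu))

cov = np.linalg.inv(H)
se = np.sqrt(np.diag(cov)[1:])
or_est = np.exp(b[1:])                  # adjusted odds ratios
ci_lo = np.exp(b[1:] - 1.96 * se)
ci_hi = np.exp(b[1:] + 1.96 * se)
```

Exponentiating the fitted coefficients recovers odds ratios near the simulated values of 2.8 and 1.5, with Wald intervals bracketing the point estimates.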

  14. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
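The baseline MMR model the paper argues beyond can be sketched as an ordinary least squares fit with a product term, where the interaction coefficient estimates how much the slope changes across moderator groups. The data and effect sizes here are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Moderation: the education slope on the outcome depends on moderator m.
n = 1000
educ = rng.normal(size=n)
m = (rng.random(n) < 0.5).astype(float)          # binary moderator
y = 1.0 + (0.5 + 0.8 * m) * educ + 0.3 * m + rng.normal(0, 1, n)

# Moderated multiple regression: include the product term educ * m.
X = np.column_stack([np.ones(n), educ, m, educ * m])
b = np.linalg.lstsq(X, y, rcond=None)[0]

interaction = b[3]   # estimated slope difference between moderator groups
```

With homoscedastic errors this least squares fit is the standard MMR analysis; the paper's point is that a two-level formulation with normal-theory maximum likelihood does better when error variances differ across the moderator.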

  15. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
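
    The standard regression calibration step the authors compare against can be caricatured in a few lines. A linear-model sketch with simulated surrogates (the logistic case and the two-stage estimator are beyond a snippet; all quantities here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                     # true exposure (unobserved in the main study)
w1 = x + rng.normal(scale=0.8, size=n)     # surrogate 1
w2 = x + rng.normal(scale=0.8, size=n)     # surrogate 2
y = 0.5 + 2.0 * x + rng.normal(scale=0.5, size=n)

# naive fit on a single error-prone surrogate is attenuated toward zero
A = np.column_stack([np.ones(n), w1])
naive = np.linalg.lstsq(A, y, rcond=None)[0][1]

# regression calibration: estimate E[x | w1, w2] from a calibration
# subsample where x is observed, then substitute the fitted values
cal = slice(0, 1000)
W = np.column_stack([np.ones(n), w1, w2])
gamma = np.linalg.lstsq(W[cal], x[cal], rcond=None)[0]
x_hat = W @ gamma
B = np.column_stack([np.ones(n), x_hat])
calibrated = np.linalg.lstsq(B, y, rcond=None)[0][1]
print(round(naive, 2), round(calibrated, 2))  # naive shrunk; calibrated near 2.0
```

    Substituting the conditional expectation restores a consistent slope in the linear case; the debate in the paper is over how best to combine multiple surrogates when the outcome model is logistic.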

  17. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
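
    The paper's EM-type joint estimating equations are beyond a short example, but the underlying objective, quantile (here median) regression, can be sketched with a generic iteratively reweighted least squares routine on simulated data with gross outliers (a standard device, not the authors' algorithm):

```python
import numpy as np

def lad_fit(X, y, iters=50, eps=1e-6):
    """Median (0.5-quantile) regression via iteratively reweighted least squares:
    weights 1/|residual| turn the absolute-error loss into successive WLS fits."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
y[:100] += 20.0           # 5% gross outliers pull the OLS intercept off target
X = np.column_stack([np.ones(n), x])

ols = np.linalg.lstsq(X, y, rcond=None)[0]
lad = lad_fit(X, y)
print(np.round(ols, 2), np.round(lad, 2))  # LAD stays near [1.0, 2.0]
```

    The median fit shrugs off the contamination that distorts least squares; the measurement-error problem the paper addresses is a separate bias on top of this.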

  18. Estimating the prevalence of 26 health-related indicators at neighbourhood level in the Netherlands using structured additive regression.

    Science.gov (United States)

    van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien

    2017-07-01

    Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. 
The estimated prevalences agree reasonably well with the
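
    The penalized-spline idea underlying such structured additive terms can be sketched in its simplest form, a truncated-line basis with a ridge penalty on the knot coefficients (a generic Gaussian-response illustration; the paper's model adds a spatial term and binary outcomes):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 400
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# truncated-line spline basis: [1, x, (x - k1)_+, ..., (x - kK)_+]
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x, np.maximum(x[:, None] - knots, 0.0)])

# ridge penalty on the knot coefficients only (the penalized-spline idea)
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fit = B @ coef
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
print(round(rmse, 3))  # the smooth trend is recovered from noisy data
```

    The penalty keeps the many knot coefficients from overfitting while the unpenalized linear part is left free, which is the same trade-off the structured additive model makes for each non-linear predictor effect.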

  19. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R², although adjusted R² estimates behaved well. The bias in estimating the model R² statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
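
    The role of SPV is easy to probe with a small Monte Carlo in the spirit of the paper (a simplified sketch, not the authors' simulation design): at two subjects per variable, coefficients come out nearly unbiased while the raw R² is badly inflated and the adjusted R² is not.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, reps = 20, 10, 500                  # 2 subjects per variable (SPV)
b = np.full(k, 0.1)
r2_pop = (k * 0.01) / (k * 0.01 + 1.0)    # population R² ≈ 0.091

coefs, r2, adj = [], [], []
for _ in range(reps):
    X = rng.normal(size=(n, k))
    y = X @ b + rng.normal(size=n)
    Xd = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ beta
    r = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    coefs.append(beta[1:])
    r2.append(r)
    adj.append(1 - (1 - r) * (n - 1) / (n - k - 1))

print(np.mean(coefs))   # ≈ 0.1: coefficients nearly unbiased even at SPV = 2
print(np.mean(r2))      # far above 0.091: raw R² is badly inflated
print(np.mean(adj))     # near 0.091: adjusted R² behaves well
```

    Least-squares coefficients are unbiased at any sample size, which is why the SPV requirement for coefficient estimation is so mild; R² is where small samples hurt.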

  20. Metronomic capecitabine as second-line treatment for hepatocellular carcinoma after sorafenib discontinuation.

    Science.gov (United States)

    Trevisani, Franco; Brandi, Giovanni; Garuti, Francesca; Barbera, Maria Aurelia; Tortora, Raffaella; Casadei Gardini, Andrea; Granito, Alessandro; Tovoli, Francesco; De Lorenzo, Stefania; Inghilesi, Andrea Lorenzo; Foschi, Francesco Giuseppe; Bernardi, Mauro; Marra, Fabio; Sacco, Rodolfo; Di Costanzo, Giovan Giuseppe

    2018-02-01

    Metronomic capecitabine (MC) is a well-tolerated systemic treatment showing promising results in one retrospective study, as second-line therapy after sorafenib failure, in patients with hepatocellular carcinoma (HCC). 117 patients undergoing MC were compared to 112 patients, eligible for this treatment, but undergoing best supportive care (BSC) after sorafenib discontinuation for toxicity or HCC progression. The two groups were compared for demographic and clinical features. A multivariate regression analysis was conducted to detect independent prognostic factors. To balance confounding factors between the two groups, a propensity score model based on independent prognosticators (performance status, neoplastic thrombosis, causes of sorafenib discontinuation and pre-sorafenib treatment) was performed. Patients undergoing MC showed better performance status, lower tumor burden, lower prevalence of portal vein thrombosis, and better cancer stage. Median (95% CI) post-sorafenib survival (PSS) was longer in MC than in BSC patients [9.5 (7.5-11.6) vs 5.0 (4.2-5.7) months (p < 0.001)]. Neoplastic thrombosis, cause of sorafenib discontinuation, pre-sorafenib treatment and MC were independent prognosticators. The benefit of capecitabine was confirmed in patients after matching with propensity score [PSS: 9.9 (6.8-12.9) vs. 5.8 (4.8-6.8) months, (p = 0.001)]. MC lowered the mortality risk by about 40%. MC achieved better results in patients who stopped sorafenib for adverse events than in those who progressed during it [PSS: 17.3 (10.5-24.1) vs. 7.8 (5.2-10.1) months, (p = 0.035)]. Treatment toxicity was low and easily manageable with dose modulation. MC may be an efficient and safe second-line systemic therapy for HCC patients who discontinued sorafenib for toxicity or tumor progression.

  1. Multiple linear regression to estimate time-frequency electrophysiological responses in single trials.

    Science.gov (United States)

    Hu, L; Zhang, Z G; Mouraux, A; Iannetti, G D

    2015-05-01

    Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decomposition of single trials. However, their trial-to-trial variability that can reflect physiologically-important information is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at single-trial level (e.g., perceived stimulus intensity and ERP magnitude). The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical

  2. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    Science.gov (United States)

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
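
    The IPTW ingredient can be illustrated in a deliberately simplified single-time-point setting with a known propensity score (a hypothetical sketch, far from the full SNMM machinery): weighting by the inverse probability of the received treatment removes the confounding that biases the naive comparison.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 20000
x = rng.normal(size=n)                        # confounder (one time point for brevity)
p = 1.0 / (1.0 + np.exp(-x))                  # treatment propensity depends on x
t = (rng.uniform(size=n) < p).astype(float)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)    # true causal effect of t is 1.0

naive = y[t == 1].mean() - y[t == 0].mean()   # confounded comparison

# truncated weights for numerical stability (standard practice)
pc = np.clip(p, 0.05, 0.95)
ipw = np.mean(t * y / pc) - np.mean((1 - t) * y / (1 - pc))
print(round(naive, 2), round(ipw, 2))  # naive biased upward; IPTW close to 1.0
```

    In the article the propensities are themselves estimated from time-varying covariates and combined with the regression-with-residuals step, but the reweighting logic is the same.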

  3. Finite element and discontinuous Galerkin methods for transient wave equations

    CERN Document Server

    Cohen, Gary

    2017-01-01

    This monograph presents numerical methods for solving transient wave equations (i.e. in time domain). More precisely, it provides an overview of continuous and discontinuous finite element methods for these equations, including their implementation in physical models, an extensive description of 2D and 3D elements with different shapes, such as prisms or pyramids, an analysis of the accuracy of the methods and the study of Maxwell's system and the important problem of its spurious-free approximations. After recalling the classical models, i.e. acoustics, linear elastodynamics and electromagnetism and their variational formulations, the authors present a wide variety of finite elements of different shapes useful for the numerical resolution of wave equations. Then, they focus on the construction of efficient continuous and discontinuous Galerkin methods and study their accuracy by plane wave techniques and a priori error estimates. A chapter is devoted to Maxwell's system and the important problem ...

  4. Trapped particles at a magnetic discontinuity

    Science.gov (United States)

    Stern, D. P.

    1972-01-01

    At a tangential discontinuity between two constant magnetic fields a layer of trapped particles can exist; this work examines the conditions under which the current carried by such particles tends to maintain the discontinuity. Three cases are examined. If the discontinuity separates aligned vacuum fields, the only requirement is that they be antiparallel. With arbitrary relative orientations, the field must have equal intensities on both sides. Finally, with a guiding center plasma on both sides, the condition reduces to a relation which is also derivable from hydromagnetic theory. Arguments are presented for the occurrence of such trapped modes in the magnetopause and for the non-existence of specular particle reflection.

  5. An integrated fuzzy regression algorithm for energy consumption estimation with non-stationary data: A case study of Iran

    Energy Technology Data Exchange (ETDEWEB)

    Azadeh, A; Seraj, O [Department of Industrial Engineering and Research Institute of Energy Management and Planning, Center of Excellence for Intelligent-Based Experimental Mechanics, College of Engineering, University of Tehran, P.O. Box 11365-4563 (Iran); Saberi, M [Department of Industrial Engineering, University of Tafresh (Iran); Institute for Digital Ecosystems and Business Intelligence, Curtin University of Technology, Perth (Australia)

    2010-06-15

    This study presents an integrated fuzzy regression and time series framework to estimate and predict electricity demand for seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. Furthermore, it is difficult to model the uncertain behavior of energy consumption with only conventional fuzzy regression (FR) or time series, and the integrated algorithm could be an ideal substitute for such cases. First, the preferred time series model is selected from linear or nonlinear models: after selecting the preferred Auto Regression Moving Average (ARMA) model, the McLeod-Li test is applied to determine the nonlinearity condition. When the nonlinearity condition is satisfied, the preferred nonlinear model is selected and defined as the preferred time series model. Finally, the preferred model from the fuzzy regression and time series models is selected by the Granger-Newbold test. Also, the impact of data preprocessing on the fuzzy regression performance is considered. Monthly electricity consumption of Iran from March 1994 to January 2005 is considered as the case of this study. The superiority of the proposed algorithm is shown by comparing its results with other intelligent tools such as Genetic Algorithm (GA) and Artificial Neural Network (ANN). (author)
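
    The time-series leg of such a framework rests on standard ARMA machinery; for instance, an AR(1) coefficient can be estimated from the lag-1 autocorrelation (a generic Yule-Walker sketch on simulated data, unrelated to the Iranian consumption series):

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 5000, 0.7
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):              # simulate AR(1): x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + e[t]

xc = x - x.mean()
phi_hat = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)   # Yule-Walker: lag-1 autocorrelation
print(round(phi_hat, 2))  # ≈ 0.7
```

    Model-selection tests such as McLeod-Li then decide whether a linear fit of this kind suffices or a nonlinear specification is warranted.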

  6. Increased incidence of antiretroviral drug discontinuation among patients with viremic hepatitis C virus coinfection and high hyaluronic acid, a marker of liver fibrosis

    DEFF Research Database (Denmark)

    Grint, D.; Peters, L.; Rockstroh, J. K.

    2014-01-01

    HCV/HIV coinfected patients. Methods: EuroSIDA patients taking combination antiretroviral therapy were included. Poisson regression identified factors associated with antiretroviral treatment discontinuation. Results: A total of 9535 HIV-positive patients with known HCV status were included (6939...

  7. Discontinuity effects in dynamically loaded tilting pad journal bearings

    DEFF Research Database (Denmark)

    Thomsen, Kim; Klit, Peder; Vølund, Anders

    2011-01-01

    This paper describes two discontinuity effects that can occur when modelling radial tilting pad bearings subjected to high dynamic loads. The first effect to be treated is a pressure build-up discontinuity effect. The second effect is a contact-related discontinuity that disappears when a contact force is included in the theoretical model. Methods for avoiding the pressure build-up discontinuity effect are proposed.

  8. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    Science.gov (United States)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure), and implementing more flexible regression algorithms are among the main strategies of PTFs improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in literature, could be enduringly captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM), and k-Nearest Neighbors (kNN), which have been recently introduced as promising tools for PTF development, were utilized to test if the incorporation of soil structure will improve PTF's accuracy under a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as grouping criterion can improve the accuracy of PTFs derived by SVM approach in the range of matric potential of -6 to -33 kPa (average RMSE decreased up to 0.005 m3 m-3 after grouping, depending on matric potentials). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with kNN technique, at least not in our study in which the data set became limited in size after grouping. Since there is an impact of regression techniques on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
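
    Of the two data mining techniques, kNN is simple enough to sketch from scratch (a generic one-dimensional illustration on synthetic data, not a pedotransfer function):

```python
import numpy as np

def knn_predict(x_train, y_train, x_new, k=5):
    """k-nearest-neighbour regression: average the targets of the k closest points."""
    d = np.abs(x_new[:, None] - x_train[None, :])      # pairwise 1-D distances
    idx = np.argsort(d, axis=1)[:, :k]                 # indices of the k neighbours
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=500)

x_new = np.linspace(0.05, 0.95, 50)
pred = knn_predict(x, y, x_new, k=10)
rmse = np.sqrt(np.mean((pred - np.sin(2 * np.pi * x_new)) ** 2))
print(round(rmse, 3))  # small: the smooth signal is recovered
```

    The study's caveat shows up directly in this construction: once the training set is split by structure class, each class may leave too few neighbours for the local averages to be reliable.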

  9. Twelve-month discontinuation rates of levonorgestrel intrauterine system 13.5 mg and subdermal etonogestrel implant in women aged 18-44: A retrospective claims database analysis.

    Science.gov (United States)

    Law, Amy; Liao, Laura; Lin, Jay; Yaldo, Avin; Lynen, Richard

    2018-04-21

    To investigate the 12-month discontinuation rates of levonorgestrel intrauterine system 13.5 mg (LNG-IUS 13.5) and subdermal etonogestrel (ENG) implant in the US. We identified women aged 18-44 who had an insertion of LNG-IUS 13.5 or ENG implant from the MarketScan Commercial claims database (7/1/2013-9/30/2014). Women were required to have 12 months of continuous insurance coverage prior to the insertion (baseline) and at least 12-months after (follow-up). Discontinuation was defined as presence of an insurance claim for pregnancy-related services, hysterectomy, female sterilization, a claim for another contraceptive method, or removal of the index contraceptive without re-insertion within 30 days. Using Cox regression we examined the potential impact of ENG implant vs. LNG-IUS 13.5 on the likelihood for discontinuation after controlling for patient characteristics. A total of 3680 (mean age: 25.4 years) LNG-IUS 13.5 and 23,770 (mean age: 24.6 years) ENG implant users met the selection criteria. Prior to insertion, 56.6% of LNG-IUS 13.5 and 42.1% of ENG implant users had used contraceptives, with oral contraceptives being most common (LNG-IUS 13.5: 42.1%; ENG implant: 28.5%). Among users of LNG-IUS 13.5 and ENG implant, rates of discontinuation were similar during the 12-month follow-up (LNG-IUS 13.5: 24.9%; ENG implant: 24.0%). Regression results showed that women using LNG-IUS 13.5 vs. ENG implant had similar likelihood for discontinuation (hazard ratio: 0.97, 95% confidence interval: 0.90-1.05, p=.41). In the real-world US setting, women aged 18-44 using LNG-IUS 13.5 and ENG implant have similar discontinuation rates after 12 months. In the United States, women aged 18-44 using levonorgestrel intrauterine system (13.5 mg) and subdermal etonogestrel implant have similar discontinuation rates after 12 months. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Discontinuities during UV writing of waveguides

    DEFF Research Database (Denmark)

    Svalgaard, Mikael; Harpøth, Anders; Andersen, Marc

    2005-01-01

    UV writing of waveguides can be hampered by discontinuities where the index change process suddenly shuts down. We show that thermal effects may account for this behaviour.

  11. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.
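
    SSRSeq itself is not reproduced here, but generic sparse regression, the building block it extends with structure, can be sketched with proximal gradient descent (ISTA) and soft thresholding on simulated data:

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """Sparse linear regression (lasso) via proximal gradient descent (ISTA)."""
    L = np.linalg.eigvalsh(X.T @ X).max()     # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ beta - y)              # gradient of 0.5 * ||y - X beta||^2
        z = beta - g / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return beta

rng = np.random.default_rng(6)
n, p = 200, 50
X = rng.normal(size=(n, p))
true = np.zeros(p)
true[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]        # only 5 of 50 coefficients are active
y = X @ true + rng.normal(scale=0.5, size=n)

beta = lasso_ista(X, y, lam=20.0)
print(np.count_nonzero(np.abs(beta) > 0.3))   # the 5 true signals survive the threshold
```

    The L1 penalty drives the inactive coefficients to (near) zero, the same sparsity mechanism SSRSeq applies across a gene's isoforms, with an added structured term tying the groups together.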

  12. Trend Estimation and Regression Analysis in Climatological Time Series: An Application of Structural Time Series Models and the Kalman Filter.

    Science.gov (United States)

    Visser, H.; Molenaar, J.

    1995-05-01

    The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear. However, the regression coefficients corresponding to the explanatory variables may be time dependent to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal, induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases.
The prediction performance of
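
    A minimal special case of such a structural model is the local level (random-walk-plus-noise) model, whose Kalman filter fits in a few lines (a generic sketch with assumed variances, not the authors' full trend-regression model):

```python
import numpy as np

def kalman_local_level(y, q, r):
    """Kalman filter for the local level model:
       level_t = level_{t-1} + w_t (var q);  y_t = level_t + v_t (var r)."""
    m, p = y[0], r                      # initial state estimate and variance
    levels = []
    for obs in y:
        p = p + q                       # predict: the level may have drifted
        k = p / (p + r)                 # Kalman gain
        m = m + k * (obs - m)           # update with the innovation
        p = (1 - k) * p
        levels.append(m)
    return np.array(levels)

rng = np.random.default_rng(7)
n = 1000
level = np.cumsum(rng.normal(scale=0.1, size=n))   # true stochastic trend
y = level + rng.normal(scale=1.0, size=n)          # noisy observations

est = kalman_local_level(y, q=0.01, r=1.0)
raw_rmse = np.sqrt(np.mean((y - level) ** 2))      # ≈ 1.0 by construction
kf_rmse = np.sqrt(np.mean((est - level) ** 2))     # much smaller after filtering
print(round(raw_rmse, 2), round(kf_rmse, 2))
```

    Adding regression terms and ARIMA trend dynamics to the state vector turns this recursion into the trend-regression model described in the abstract.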

  13. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes into consideration of mislabeled responses. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
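
    The baseline that robust proposals are measured against, maximum-likelihood logistic regression, can be sketched via Newton-Raphson, along with the attenuation that mislabeled responses induce (a simulated illustration, not the γ-divergence estimator):

```python
import numpy as np

def logistic_fit(X, y, iters=30):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        w = p * (1 - p) + 1e-10
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta = beta + np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(8)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1.0 / (1.0 + np.exp(-2.0 * x))
y = (rng.uniform(size=n) < p_true).astype(float)

clean = logistic_fit(X, y)

flip = rng.uniform(size=n) < 0.10       # 10% of responses recorded incorrectly
y_mis = np.where(flip, 1 - y, y)
naive = logistic_fit(X, y_mis)
print(np.round(clean, 2), np.round(naive, 2))  # mislabels attenuate the slope
```

    The shrunken slope under mislabeling is exactly the bias that mislabel models and the proposed γ-logistic weighting aim to remove.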

  14. Motion of Charged Particles near Magnetic Field Discontinuities

    International Nuclear Information System (INIS)

    Dodin, I.Y.; Fisch, N.J.

    2000-01-01

    The motion of charged particles in slowly changing magnetic fields exhibits adiabatic invariance even in the presence of abrupt magnetic discontinuities. Particles near discontinuities in magnetic fields, which we call "boundary particles", are constrained to remain near an arbitrarily fractured boundary even as the particle drifts along the discontinuity. A new adiabatic invariant applies to the motion of these particles.

  15. Determination of Mantle Discontinuity Depths beneath the South Pacific Superswell As Inferred Using Data From Broadband OBS Array

    Science.gov (United States)

    Suetsugu, D.; Shiobara, H.; Sugioka, H.; Kanazawa, T.; Fukao, Y.

    2005-12-01

    discontinuity depths were estimated to be 403-431 km over the Superswell region, which are not substantially different from the global average considering the estimation error of 10 km. The 660-km discontinuity depths were also determined to be 654-674 km, close to the global average, at most of the stations. Data from a station near the Society hot spot, however, provide an anomalously shallow depth of 623 km, indicating the presence of a local hot anomaly at the bottom of the mantle transition zone beneath the Society hot spot. Taking into consideration a possible effect of velocity anomalies on the depth estimation, the shallow anomaly is significant. The present result suggests that the thermal anomalies are not obvious at the Superswell scale, but are present locally beneath the Society hot spot.

  16. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues concerning the validity of the model and diagnostic tools. Each methodological aspect is explored and
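
    The core of quantile regression is the check (pinball) loss: the τ-th quantile is whatever value minimizes its expected value. A minimal stdlib sketch of that defining property (the scan over the data points is an illustration, not a production fitting routine):

```python
def pinball(u, tau):
    """Check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau if u >= 0 else tau - 1.0)

def sample_quantile(data, tau):
    """The tau-th sample quantile minimizes the total pinball loss;
    for a finite sample it suffices to scan the data points themselves."""
    return min(sorted(data), key=lambda q: sum(pinball(x - q, tau) for x in data))

data = [3, 1, 4, 1, 5, 9, 2, 6, 5]
```

With τ = 0.5 the minimizer is the sample median; varying τ traces out the rest of the distribution, which is what a quantile regression model does conditionally on covariates.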

  17. Estimation of a Reactor Core Power Peaking Factor Using Support Vector Regression and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Bae, In Ho; Naa, Man Gyun; Lee, Yoon Joon; Park, Goon Cherl

    2009-01-01

    The monitoring of the detailed 3-dimensional (3D) reactor core power distribution is a prerequisite in the operation of nuclear power reactors to ensure that various safety limits imposed on the local power density (LPD) and the departure from nucleate boiling ratio (DNBR) are not violated during nuclear power reactor operation. The LPD and DNBR should be calculated in order to perform the two major functions of the core protection calculator system (CPCS) and the core operation limit supervisory system (COLSS). The LPD at the hottest part of a hot fuel rod, which is related to the power peaking factor (PPF, F q ), is more important than the LPD at any other position in a reactor core. The LPD needs to be estimated accurately to prevent nuclear fuel rods from melting. In this study, support vector regression (SVR) and uncertainty analysis have been applied to the estimation of the reactor core power peaking factor.
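
    What distinguishes SVR from least-squares fitting is its ε-insensitive loss: residuals inside an ε-tube cost nothing, so the fitted function depends only on the points outside the tube (the support vectors). A one-function sketch of that loss (the numeric values below are purely illustrative):

```python
def eps_insensitive(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: zero inside the tube, linear outside."""
    return max(0.0, abs(y_true - y_pred) - eps)
```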

  18. Using Multiple and Logistic Regression to Estimate the Median Will-Cost and Probability of Cost and Schedule Overrun for Program Managers

    Science.gov (United States)

    2017-03-23

    Logistic Regression to Estimate the Median Will-Cost and Probability of Cost and Schedule Overrun for Program Managers Ryan C. Trudelle, B.S...not the other. We are able to give logistic regression models to program managers that identify several program characteristics for either...considered acceptable. We recommend the use of our logistic models as a tool to manage a portfolio of programs in order to gain potential elusive

  19. On the use of a regression model for trend estimates from ground-based atmospheric observations in the Southern hemisphere

    CSIR Research Space (South Africa)

    Bencherif, H

    2010-09-01

    Full Text Available The present paper reports on the use of a multi-regression model adapted at Reunion University for temperature and ozone trend estimates. Depending on the location of the observing site, the studied geophysical signal is broken down in the form of a sum...

  20. Signal integrity analysis on discontinuous microstrip line

    International Nuclear Information System (INIS)

    Qiao, Qingyang; Dai, Yawen; Chen, Zipeng

    2013-01-01

    In high-speed PCB design, microstrip lines are used to control the impedance; however, discontinuous microstrip lines can cause signal integrity problems. In this paper, we use transmission line theory to study the characteristics of microstrip lines. Research results indicate that discontinuities such as truncations, gaps, and size changes result in problems such as radiation, reflection, delay, and ground bounce. We model the discontinuities as distributed-parameter circuits and analyse the steady-state response, the transient response, and the phase delay. The transient response causes radiation and voltage jumps.
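
    The reflection mechanism behind these problems follows directly from transmission line theory: at a step in characteristic impedance, part of the incident wave is reflected. A minimal sketch (the 50 Ω to 75 Ω example is illustrative):

```python
def reflection_coefficient(z1, z2):
    """Voltage reflection coefficient at a step from impedance z1 to z2.
    A matched line (z1 == z2) reflects nothing; any impedance discontinuity
    reflects part of the incident wave and degrades signal integrity."""
    return (z2 - z1) / (z2 + z1)
```

For example, a 50-ohm trace meeting a 75-ohm section reflects 20% of the incident voltage.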

  1. Comparison of Regression Analysis and Transfer Function in Estimating the Parameters of Central Pulse Waves from Brachial Pulse Wave.

    Science.gov (United States)

    Chai, Rui; Xu, Li-Sheng; Yao, Yang; Hao, Li-Ling; Qi, Lin

    2017-01-01

    This study analyzed the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of central pulse waves measured invasively and non-invasively. Invasively measured parameters were compared with parameters estimated from brachial pulse waves by a regression model and by a transfer function model, and the accuracies of the parameters estimated by the two models were compared as well. Findings showed that the k value and the invasively measured central and brachial pulse wave parameters correlated positively. Regression model parameters, including A_slope, DBP, and SEVR, and transfer function model parameters showed equally good consistency with the invasively measured parameters. SBP, PP, SV, and CO could be calculated through the regression model, but their accuracies were worse than those of the transfer function model.

  2. Accountability Accentuates Interindividual-Intergroup Discontinuity by Enforcing Parochialism

    OpenAIRE

    Wildschut, T.; Van Horen, F.; Hart, C.

    2015-01-01

    Interindividual-intergroup discontinuity is the tendency for relations between groups to be more competitive than relations between individuals. We examined whether the discontinuity effect arises in part because group members experience normative pressure to favor the ingroup (parochialism). Building on the notion that accountability enhances normative pressure, we hypothesized that the discontinuity effect would be larger when accountability is present (compared to absent). A prisoner’s dil...

  3. Discontinuities and the magnetospheric phenomena

    International Nuclear Information System (INIS)

    Rajaram, R.; Kalra, G.L.; Tandon, J.N.

    1978-01-01

    Wave coupling at contact discontinuities has an important bearing on the transmission of waves from the solar wind into the magnetosphere across the cusp region of the solar wind-magnetosphere boundary and on the propagation of geomagnetic pulsations in the polar exosphere. Keeping this in view, the problem of wave coupling across a contact discontinuity in a collisionless plasma, described by a set of double adiabatic fluid equations, is examined. The magnetic field is taken normal to the interface, and it is shown that total reflection is not possible for any angle of incidence. The Alfven and the magneto-acoustic waves are not coupled. The transmission is most efficient for small density discontinuities. Inhibition of the transmission of the Alfven wave by the sharp density gradients above the F2-peak in the polar exosphere appears to account for the decrease in the pulsation amplitude, on the ground, as the poles are approached from the auroral zone. (author)

  4. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residuals and leverage measures are defined. Local influence analyses based on the case-weights perturbation scheme, the responses perturbation scheme, the covariate perturbation scheme, and the within-variance perturbation scheme are explored. We introduce a method of simultaneously perturbing the responses, covariate, and within-variance to obtain the local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
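
    As a concrete reference point for case-deletion diagnostics, the leverage (hat-matrix diagonal) of ordinary simple linear regression has a closed form; in meta-regression the analogous quantities weight studies by their within-study variances, which the sketch below omits for simplicity:

```python
def leverages(x):
    """Hat-matrix diagonals for simple linear regression with an intercept:
    h_i = 1/n + (x_i - xbar)**2 / Sxx. Large h_i flags the observations most
    worth examining with case-deletion diagnostics."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    return [1.0 / n + (v - xbar) ** 2 / sxx for v in x]

h = leverages([1.0, 2.0, 3.0, 4.0, 10.0])
```

The leverages always sum to the number of regression parameters (here 2), and the outlying x = 10 carries the most.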

  5. Strategies for discontinuation of proton pump inhibitors

    DEFF Research Database (Denmark)

    Haastrup, Peter; Paulsen, Maja S; Begtrup, Luise M

    2014-01-01

    PURPOSE: Proton pump inhibitors (PPIs) are considered to be overprescribed. Consensus on how to attempt discontinuation is, however, lacking. We therefore conducted a systematic review of clinical studies on discontinuation of PPIs. METHODS: Systematic review based on clinical studies investigating...

  6. On the stability of rotational discontinuities and intermediate shocks

    International Nuclear Information System (INIS)

    Lee, L.C.; Huang, L.; Chao, J.K.

    1989-01-01

    The stability of rotational discontinuities and intermediate shocks is studied based on a hybrid simulation code. The simulation results show that rotational discontinuities are stable and intermediate shocks are not stationary. Intermediate shocks tend to evolve to rotational discontinuities and waves. The authors employ several different initial profiles for the magnetic field in the transition region and find that the final structure of the discontinuities or shocks is not sensitive to the initial magnetic field profile. The present results are different from those obtained from the resistive MHD simulations. Furthermore, their study indicates that the kinetic effect of particles plays an important role in the structure and stability of rotational discontinuities and intermediate shocks.

  7. Optimal age of commencing and discontinuing thiazide therapy to protect against fractures

    DEFF Research Database (Denmark)

    Kruse, C; Eiken, P; Vestergaard, P

    2016-01-01

    INTRODUCTION: The purpose of this study was to retrospectively examine the optimal age for commencing and discontinuing thiazide therapy to protect from osteoporotic fractures. METHODS: A population-based, retrospective matched cohort study was done using national data of 2.93 million Danish subjects. Ten-year crude and adjusted age-grouped hazard ratios (HRs) of fracture occurrence were stratified by age of commencing thiazides compared to non-exposure. Separate analyses were done on Anatomical Therapeutic Chemical Classification System (ATC) codes C03AA and C03AA + C03AB compiled. Ten-year crude HRs of fracture occurrence for discontinuing vs. continuing thiazides were estimated and stratified by age for the two groups. RESULTS: For C03AB alone (97.1 % of thiazide prescriptions), adjusted 10-year HRs of fracture occurrence were significantly increased for thiazide commencement after age...

  8. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks, a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory-optimized C++ functions with an R interface for predicting the covariate-specific absolute risks, their confidence intervals, and their confidence bands based on right-censored time-to-event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product ... functionals. The software presented here is implemented in the riskRegression package.

  9. Discontinuation and non-publication of randomised clinical trials supported by the main public funding body in Switzerland: a retrospective cohort study.

    Science.gov (United States)

    Amstutz, Alain; Schandelmaier, Stefan; Frei, Roy; Surina, Jakub; Agarwal, Arnav; Olu, Kelechi Kalu; Alturki, Reem; Von Niederhäusern, Belinda; Von Elm, Erik; Briel, Matthias

    2017-08-01

    The Swiss National Science Foundation (SNSF) promotes academic excellence through competitive selection of study proposals and rigorous evaluation of feasibility, but completion status and publication history of SNSF-supported randomised clinical trials (RCTs) remain unclear. The main objectives were to review all healthcare RCTs supported by the SNSF for trial discontinuation and non-publication, to investigate potential risk factors for trial discontinuation due to poor recruitment and non-publication, and to compare findings to other Swiss RCTs not supported by the SNSF. We established a retrospective cohort of all SNSF-supported RCTs for which recruitment and funding had ended in 2015 or earlier. For each RCT, two investigators independently searched corresponding publications in electronic databases. In addition, we approached all principal investigators to ask for additional publications and information about trial discontinuation. Teams of two investigators independently extracted details about study design, recruitment of participants, outcomes, analysis and sample size from the original proposal and, if available, from trial registries and publications. We used multivariable regression analysis to explore potential risk factors associated with discontinuation due to poor recruitment and with non-publication, and to compare our results with data from a previous cohort of Swiss RCTs not supported by the SNSF. We included 101 RCTs supported by the SNSF between 1986 and 2015. Eighty-seven (86%) principal investigators responded to our survey. Overall, 69 (68%) RCTs were completed, 26 (26%) RCTs were prematurely discontinued (all due to slow recruitment) and the completion status remained unclear for 6 (6%) RCTs. For analysing publication status, we excluded 4 RCTs for which follow-up was still ongoing and 9 for which manuscripts were still in preparation. Of the remaining 88 RCTs, 53 (60%) were published as full articles in peer-reviewed journals

  10. Immunosuppressive Drug Discontinuation in Noninfectious Uveitis From Real-Life Clinical Practice: A Survival Analysis.

    Science.gov (United States)

    Abásolo, Lydia; Rosales, Zulema; Díaz-Valle, David; Gómez-Gómez, Alejandro; Peña-Blanco, Rayma C; Prieto-García, Ángela; Benítez-Del-Castillo, José Manuel; Pato, Esperanza; García-Feijoo, Julián; Fernández-Gutiérrez, Benjamín; Rodriguez-Rodriguez, Luis

    2016-09-01

    To assess in uveitis patients the rate of immunosuppressive drug (ISD) discontinuation in real-life clinical practice, comparing this rate among ISDs. Longitudinal retrospective cohort study. We included uveitis patients attending a tertiary eye referral center from Madrid (Spain) between 1989 and 2015, prescribed any ISDs (cyclosporine, methotrexate, azathioprine, anti-TNF drugs, or others). Our main outcome was discontinuation of all ISDs owing to clinical efficacy, inefficacy, adverse drug reaction (ADR), and other medical causes. Discontinuation rates (DRs) per 100 patient-years were estimated. Variables associated with specific-cause discontinuations were analyzed using Cox bivariate and multivariate models. We analyzed 110 patients with 263 treatment courses and 665.2 patient-years of observation. Cyclosporine (66.4%), methotrexate (47.3%), azathioprine (30.9%), and anti-TNFs (30.9%) were the most frequently used ISDs. Treatment was suspended in 136 cases (mostly owing to clinical efficacy [38.2%], inefficacy [26.5%], and ADRs [22.8%]). All-cause DR with 95% confidence interval was 20.4 [17.3-24.2]. Retention rates at 1 and 10 years were 74% and 16%, respectively. In the multivariate analysis, combined treatment exhibited higher DRs owing to clinical efficacy than other ISDs in monotherapy. Conversely, nonbiologic combination therapy with azathioprine exhibited the highest DR owing to ADRs. Clinical efficacy was the most frequent cause for ISD discontinuation, followed by inefficacy and ADRs. DR owing to efficacy was higher for combination therapy. Furthermore, nonbiologic combination therapy with azathioprine was associated with a higher DR owing to ADRs. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
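
    The 'nuisance'-variable trick can be sketched in a few lines: for a fixed log rate ratio β, the maximum likelihood estimate of each stratum's baseline rate has a closed form, so the stratum intercepts can be profiled out and β maximized alone. This is an illustrative reconstruction of the idea, not the authors' implementation; the data layout (events, person-years, binary exposure per cell) and the grid search are assumptions for the sketch.

```python
import math

def profile_loglik(beta, strata):
    """Poisson log-likelihood for a log-linear rate model with one intercept
    per background stratum, with the intercepts profiled out.
    strata: list of strata; each stratum is a list of (events, person_years, x)."""
    ll = 0.0
    for cells in strata:
        obs = sum(d for d, _, _ in cells)
        denom = sum(py * math.exp(beta * x) for _, py, x in cells)
        lam = obs / denom                      # closed-form stratum baseline rate
        for d, py, x in cells:
            mu = lam * py * math.exp(beta * x)
            ll += d * math.log(mu) - mu
    return ll

def fit_beta(strata, lo=-2.0, hi=2.0, steps=4001):
    """Grid search for the log rate ratio; no stratum coefficient is ever
    estimated explicitly."""
    step = (hi - lo) / (steps - 1)
    return max((lo + i * step for i in range(steps)),
               key=lambda b: profile_loglik(b, strata))

# Two background strata whose counts exactly follow a rate ratio of exp(0.5):
rr = math.exp(0.5)
strata = [
    [(10.0, 1000.0, 0), (5.0 * rr, 500.0, 1)],
    [(40.0, 2000.0, 0), (20.0 * rr, 1000.0, 1)],
]
beta_hat = fit_beta(strata)
```

Because the data match the model exactly, the profiled fit recovers β = 0.5, the same value unconditional Poisson regression with explicit stratum indicators would give.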

  12. Study Heterogeneity and Estimation of Prevalence of Primary Aldosteronism: A Systematic Review and Meta-Regression Analysis.

    Science.gov (United States)

    Käyser, Sabine C; Dekkers, Tanja; Groenewoud, Hans J; van der Wilt, Gert Jan; Carel Bakx, J; van der Wel, Mark C; Hermus, Ad R; Lenders, Jacques W; Deinum, Jaap

    2016-07-01

    For health care planning and allocation of resources, realistic estimation of the prevalence of primary aldosteronism is necessary. Reported prevalences of primary aldosteronism are highly variable, possibly due to study heterogeneity. Our objective was to identify and explain heterogeneity in studies that aimed to establish the prevalence of primary aldosteronism in hypertensive patients. PubMed, EMBASE, Web of Science, Cochrane Library, and reference lists from January 1, 1990, to January 31, 2015, were used as data sources. Description of an adult hypertensive patient population with confirmed diagnosis of primary aldosteronism was included in this study. Dual extraction and quality assessment were the forms of data extraction. Thirty-nine studies provided data on 42 510 patients (nine studies, 5896 patients from primary care). Prevalence estimates varied from 3.2% to 12.7% in primary care and from 1% to 29.8% in referral centers. Heterogeneity was too high to establish point estimates (I(2) = 57.6% in primary care; 97.1% in referral centers). Meta-regression analysis showed higher prevalences in studies 1) published after 2000, 2) from Australia, 3) aimed at assessing prevalence of secondary hypertension, 4) that were retrospective, 5) that selected consecutive patients, and 6) not using a screening test. All studies had minor or major flaws. This study demonstrates that it is pointless to claim low or high prevalence of primary aldosteronism based on published reports. Because of the significant impact of a diagnosis of primary aldosteronism on health care resources and the necessary facilities, our findings urge for a prevalence study whose design takes into account the factors identified in the meta-regression analysis.
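
    The I² statistic quoted above (57.6% and 97.1%) has a simple closed form based on Cochran's Q; a small sketch with made-up prevalence estimates and variances:

```python
def i_squared(estimates, variances):
    """I^2 = max(0, (Q - df) / Q) * 100, where Q is the inverse-variance
    weighted sum of squared deviations from the pooled estimate."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

Identical estimates give I² = 0, while widely spread, precisely measured estimates (as in the referral-center studies) push I² toward 100%.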

  13. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
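
    The stabilizing role of the ridge term is visible in the closed-form estimator (X'X + λI)⁻¹X'y: the λI term bounds the normal equations away from singularity, which is exactly what poor geometry threatens. A hand-rolled two-predictor sketch (the data are illustrative):

```python
def ridge_2d(X, y, lam):
    """Closed-form ridge estimate (X'X + lam*I)^{-1} X'y for two predictors,
    with the 2x2 inverse written out by hand."""
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    u = sum(x[0] * yi for x, yi in zip(X, y))
    v = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return [(d * u - b * v) / det, (a * v - b * u) / det]

X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
y = [2.0, 3.0, 5.0, 7.0]           # exactly y = 2*x0 + 3*x1
ols = ridge_2d(X, y, 0.0)          # lam = 0 recovers ordinary least squares
shrunk = ridge_2d(X, y, 5.0)       # larger lam shrinks the coefficients
```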

  14. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
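
    The link between the count model and the dichotomized outcome is just P(Y > 0) = 1 − P(Y = 0); under a Poisson model this is 1 − exp(−μ). A tiny sketch of the identity the logistic part of a hurdle model exploits:

```python
import math

def prob_positive(mu):
    """P(Y > 0) = 1 - exp(-mu) for Y ~ Poisson(mu): the success probability
    targeted when a count is dichotomized into zero versus positive."""
    return 1.0 - math.exp(-mu)

def log_odds_positive(mu):
    p = prob_positive(mu)
    return math.log(p / (1.0 - p))
```

For instance, a mean count of μ = ln 2 gives P(Y = 0) = 1/2, i.e. log-odds of a positive count equal to zero.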

  15. Financially troubled El Paso discontinues more nonutility operations

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    As part of a plan to phase out its nonutility businesses, El Paso Electric Company will discontinue its remaining PascoTex Corporation operations, consisting of the manufacture of specialty steel products, and company management has released revised estimates of losses to be incurred during the disposal period. Recently, El Paso announced it would also bow out of most of its nonutility real estate operations. Increased operating expenses, principally at El Paso's Palo Verde nuclear station, have also impacted the bottom line. All three units at Palo Verde were out of service at the time of this writing. The good news was that energy sales have increased.

  16. Estimation of the laser cutting operating cost by support vector regression methodology

    Science.gov (United States)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by the laser power, cutting speed, assist gas pressure, nozzle diameter, and focus point position, as well as by the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure, and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position, and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with those of an artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.

  17. Regional regression equations for the estimation of selected monthly low-flow duration and frequency statistics at ungaged sites on streams in New Jersey

    Science.gov (United States)

    Watson, Kara M.; McHugh, Amy R.

    2014-01-01

    Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. The current period is
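
    Regional regression equations of this kind are typically fit as power laws, Q = a·Aᵇ·…, estimated by ordinary least squares after log transformation. A one-variable sketch with synthetic data (drainage area only; the real equations combine several basin characteristics):

```python
import math

def fit_power_law(area, q):
    """Fit Q = a * A**b by least squares on log(Q) = log(a) + b*log(A)."""
    lx = [math.log(v) for v in area]
    ly = [math.log(v) for v in q]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (yv - my) for x, yv in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

area = [10.0, 50.0, 200.0, 1000.0]           # synthetic drainage areas
q = [0.5 * A ** 0.8 for A in area]           # exact power-law low flows
a_hat, b_hat = fit_power_law(area, q)
```

On data that follow the power law exactly, the log-space regression recovers both coefficients.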

  18. Socio-Economic Differentials in Contraceptive Discontinuation in India

    Directory of Open Access Journals (Sweden)

    Kiran Agrahari

    2016-05-01

    Full Text Available Fertility divergence amid declining use of modern contraception in many states of India needs urgent research and programmatic attention. Although utilization of antenatal, natal, and post-natal care has shown a spectacular increase in the post National Rural Health Mission (NRHM) period, contraceptive use has shown a declining trend. Using the calendar data from the National Family Health Survey–3, this article examines the reasons for contraceptive discontinuation among spacing method users by socio-economic group in India. Bivariate and multivariate analyses and life table discontinuation rates are used in the analyses. Results suggest that about half of pill users, two fifths of condom users, one third of traditional method users, and one fifth of IUD users discontinue a method in the first 12 months of use. However, the discontinuation of all three modern spacing methods declines in the subsequent period (within 12-36 months). The probability of method failure was highest among traditional method users and higher among the poor and less educated, which may lead to unwanted or mistimed births. Although discontinuation of the condom declines with economic status, discontinuation does not show any large variation for pill users. Contraceptive discontinuation was significantly associated with duration of use, age, parity, contraceptive method, religion, and contraceptive intention. Based on these findings, it is suggested that follow-up services for modern spacing method users, increased counseling for spacing method users, motivating traditional method users to adopt modern spacing methods, and improving the overall quality of family planning services can reduce the discontinuation of spacing methods.
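
    A life table discontinuation rate of the sort used here accumulates interval hazards into the probability of discontinuing within a window. A minimal sketch (the constant 5% monthly hazard is illustrative):

```python
def cumulative_discontinuation(monthly_hazards):
    """Life-table cumulative probability of discontinuing by the end of the
    window: one minus the product of the monthly continuation probabilities."""
    cont = 1.0
    for h in monthly_hazards:
        cont *= 1.0 - h
    return 1.0 - cont

# With a constant 5% monthly discontinuation hazard, roughly 46% of users
# have discontinued within 12 months.
p12 = cumulative_discontinuation([0.05] * 12)
```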

  19. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

    The weather and precipitation patterns in Missouri vary considerably from year to year. In 2008, the statewide average rainfall was 57.34 inches and in 2012, the statewide average rainfall was 30.64 inches. This variability in precipitation and resulting streamflow in Missouri underlies the necessity for water managers and users to have reliable streamflow statistics and a means to compute select statistics at ungaged locations for a better understanding of water availability. Knowledge of surface-water availability is dependent on the streamflow data that have been collected and analyzed by the U.S. Geological Survey for more than 100 years at approximately 350 streamgages throughout Missouri. The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, computed streamflow statistics at streamgages through the 2010 water year, defined periods of drought and defined methods to estimate streamflow statistics at ungaged locations, and developed regional regression equations to compute selected streamflow statistics at ungaged locations. Streamflow statistics and flow durations were computed for 532 streamgages in Missouri and in neighboring States. For streamgages with more than 10 years of record, Kendall’s tau was computed to evaluate for trends in streamflow data. If trends were detected, the variable length method was used to define the period of no trend. Water years were removed from the dataset from the beginning of the record for a streamgage until no trend was detected. Low-flow frequency statistics were then computed for the entire period of record and for the period of no trend if 10 or more years of record were available for each analysis. Three methods are presented for computing selected streamflow statistics at ungaged locations. The first method uses power curve equations developed for 28 selected streams in Missouri and neighboring States that have multiple streamgages on the same streams. Statistical
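
    Kendall's tau, as used here for trend screening, needs only the signs of pairwise comparisons of the record against time order. A stdlib sketch:

```python
def kendalls_tau(x):
    """Kendall's tau of a series against its time order: the normalized excess
    of concordant over discordant pairs, used to screen for monotonic trends."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] > x[i]:
                s += 1                 # concordant with time order
            elif x[j] < x[i]:
                s -= 1                 # discordant
    return 2.0 * s / (n * (n - 1))
```

A strictly increasing record gives tau = 1, a strictly decreasing one gives -1, and values near 0 indicate no monotonic trend.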

  20. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating the precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
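
    For the linear-regression part, the prediction ("precision") interval has a textbook closed form: its half-width at x0 is proportional to sqrt(1 + 1/n + (x0 − x̄)²/Sxx), so the interval widens away from the center of the data. A simple-regression sketch (the data are illustrative; the paper's response surfaces are multivariate):

```python
import math

def prediction_se(x, x0, sigma):
    """Standard error of a new observation's prediction at x0 in simple
    linear regression: sigma * sqrt(1 + 1/n + (x0 - xbar)**2 / Sxx)."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    return sigma * math.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
```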

  1. Penalized estimation for competing risks regression with applications to high-dimensional covariates

    DEFF Research Database (Denmark)

    Ambrogi, Federico; Scheike, Thomas H.

    2016-01-01

    ... Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according...

  2. Automatic yield-line analysis of slabs using discontinuity layout optimization.

    Science.gov (United States)

    Gilbert, Matthew; He, Linwei; Smith, Colin C; Le, Canh V

    2014-08-08

    The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented.

  3. Fluid flow in a porous medium with transverse permeability discontinuity

    Science.gov (United States)

    Pavlovskaya, Galina E.; Meersmann, Thomas; Jin, Chunyu; Rigby, Sean P.

    2018-04-01

    Magnetic resonance imaging (MRI) velocimetry methods are used to study fully developed axially symmetric fluid flow in a model porous medium of cylindrical symmetry with a transverse permeability discontinuity. Spatial mapping of fluid flow results in radial velocity profiles. High spatial resolution of these profiles allows estimating the slip in velocities at the boundary with a permeability discontinuity zone in a sample. The profiles are compared to theoretical velocity fields for a fully developed axially symmetric flow in a cylinder derived from the Beavers-Joseph [G. S. Beavers and D. D. Joseph, J. Fluid Mech. 30, 197 (1967), 10.1017/S0022112067001375] and Brinkman [H. C. Brinkman, Appl. Sci. Res. A 1, 27 (1947), 10.1007/BF02120313] models. Velocity fields are also computed using pore-scale lattice Boltzmann modeling (LBM) where the assumption about the boundary could be omitted. Both approaches give good agreement between theory and experiment, though LBM velocity fields follow the experiment more closely. This work shows great promise for MRI velocimetry methods in addressing the boundary behavior of fluids in opaque heterogeneous porous media.

  4. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
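
The profiling idea above rests on a standard fact: conditional on the stratum totals, Poisson counts are multinomial, so the stratum intercepts cancel and only the dose coefficient remains to be fitted. A sketch on simulated cohort data, assuming a log-linear rate ratio (the cohort sizes and rates are invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n_strata, per = 30, 8
stratum = np.repeat(np.arange(n_strata), per)
pyr = rng.uniform(100, 1000, stratum.size)          # person-years
dose = rng.uniform(0, 2, stratum.size)
base = rng.uniform(0.001, 0.01, n_strata)[stratum]  # stratum background rate
beta_true = 0.7
counts = rng.poisson(pyr * base * np.exp(beta_true * dose))

def neg_cond_loglik(beta):
    # Expected counts up to the (cancelled) stratum factor.
    mu = pyr * np.exp(beta * dose)
    ll = 0.0
    for s in range(n_strata):
        m = stratum == s
        if counts[m].sum() == 0:
            continue
        # Multinomial log-likelihood of the within-stratum split.
        ll += counts[m] @ (np.log(mu[m]) - np.log(mu[m].sum()))
    return -ll

beta_hat = minimize_scalar(neg_cond_loglik, bounds=(-3, 3),
                           method="bounded").x
```

As the abstract notes, this conditional fit reproduces the estimate from unconditional Poisson regression with one indicator per stratum, without ever estimating those indicators.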

  5. Random regression models to estimate genetic parameters for milk production of Guzerat cows using orthogonal Legendre polynomials

    Directory of Open Access Journals (Sweden)

    Maria Gabriela Campolina Diniz Peixoto

    2014-05-01

    Full Text Available The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for additive genetic effect and for permanent environmental and residual effects (random effects), whereas contemporary group, calving age (linear and quadratic effects), and the mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate according to the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
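
The random-regression trajectories above are built from Legendre polynomials on a time axis standardized to [-1, 1]. A minimal sketch of fitting a mean lactation-like curve with NumPy's Legendre tools; the synthetic test-day values and the degree (4, i.e. a five-coefficient basis) are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import legendre

months = np.arange(1, 11)  # 10 monthly test-day classes
# Standardize months to the Legendre domain [-1, 1].
t = 2 * (months - months.min()) / (months.max() - months.min()) - 1

# Synthetic test-day yields: early peak, then a steady decline.
rng = np.random.default_rng(3)
yield_kg = (18 + 4 * np.exp(-0.5 * (months - 2) ** 2)
            - 0.8 * months + rng.normal(0, 0.3, months.size))

coef = legendre.legfit(t, yield_kg, deg=4)  # Legendre coefficients
fitted = legendre.legval(t, coef)           # fitted lactation curve
```

In the actual models, such bases parameterize the additive genetic and permanent environmental covariance functions rather than a single mean curve.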

  6. Evaluation of Ordinary Least Square (OLS) and Geographically Weighted Regression (GWR) for Water Quality Monitoring: A Case Study for the Estimation of Salinity

    Science.gov (United States)

    Nazeer, Majid; Bilal, Muhammad

    2018-04-01

    A Landsat-5 Thematic Mapper (TM) dataset was used to estimate salinity in the coastal area of Hong Kong. Four adjacent Landsat TM images were used in this study, which were atmospherically corrected using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer code. The atmospherically corrected images were further used to develop models for salinity using Ordinary Least Square (OLS) regression and Geographically Weighted Regression (GWR) based on in situ data from October 2009. Results show that the coefficient of determination (R²) of 0.42 between the OLS-estimated and in situ measured salinity is much lower than that of the GWR model (R² = 0.86), more than twice as high. This indicates that the GWR model predicts salinity better than the OLS regression model and better captures its spatial heterogeneity. It was observed that salinity was high in Deep Bay (north-western part of Hong Kong), which might be due to industrial waste disposal, whereas salinity was estimated to be constant (32 practical salinity units) towards the open sea.
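
The OLS/GWR contrast above can be illustrated in a few lines: GWR refits the regression at every location with Gaussian distance weights, so a spatially varying slope that defeats global OLS is recovered locally. The 1-D "coastline", reflectance predictor, and bandwidth below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
coords = np.linspace(0, 1, 80)        # station positions
reflect = rng.uniform(0, 1, 80)       # e.g. a band reflectance
# Spatially varying slope: the single-slope OLS assumption fails here.
slope = 1.0 + 3.0 * coords
salinity = 30 + slope * reflect + rng.normal(0, 0.05, 80)

X = np.column_stack([np.ones(80), reflect])

def wls(w):
    """Weighted least squares with per-observation weights w."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ salinity)

beta_ols = wls(np.ones(80))           # global OLS fit
pred_ols = X @ beta_ols

def gwr_predict(i, bandwidth=0.1):
    """Local fit at station i using Gaussian distance weights."""
    w = np.exp(-0.5 * ((coords - coords[i]) / bandwidth) ** 2)
    return X[i] @ wls(w)

pred_gwr = np.array([gwr_predict(i) for i in range(80)])

r2 = lambda pred: 1 - np.sum((salinity - pred) ** 2) / \
                      np.sum((salinity - salinity.mean()) ** 2)
```

With a spatially drifting slope, the local fits track the data far more closely than the single global line, mirroring the 0.86-versus-0.42 gap reported above.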

  7. Fragility estimation for seismically isolated nuclear structures by high confidence low probability of failure values and bi-linear regression

    International Nuclear Information System (INIS)

    Carausu, A.

    1996-01-01

    A method for the fragility estimation of seismically isolated nuclear power plant structures is proposed. The relationship between the ground motion intensity parameter (e.g. peak ground velocity or peak ground acceleration) and the response of isolated structures is expressed in terms of a bi-linear regression line, whose coefficients are estimated by the least-squares method from available data on seismic input and structural response. The notion of a high confidence low probability of failure (HCLPF) value is also used for deriving compound fragility curves for coupled subsystems. (orig.)
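
One common reading of a "bi-linear regression line" is a continuous two-segment least-squares fit whose breakpoint is chosen by scanning candidate knots. The sketch below is an illustrative reconstruction under that assumption, not the paper's procedure; the synthetic intensity/response data and knot grid are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
pga = np.linspace(0.05, 1.0, 60)      # peak ground acceleration
# Response stiffens past 0.4 g: slope 2 below the knot, 6 above it.
resp = np.where(pga < 0.4, 2.0 * pga, 0.8 + 6.0 * (pga - 0.4))
resp = resp + rng.normal(0, 0.05, 60)

def bilinear_sse(knot):
    """Continuous two-segment fit via the basis {1, x, max(x-knot, 0)}."""
    X = np.column_stack([np.ones_like(pga), pga,
                         np.clip(pga - knot, 0, None)])
    b, *_ = np.linalg.lstsq(X, resp, rcond=None)
    r = resp - X @ b
    return r @ r, b

knots = np.linspace(0.1, 0.9, 81)
sse = [bilinear_sse(k)[0] for k in knots]
best_knot = knots[int(np.argmin(sse))]
```

The hinge basis keeps the two segments joined at the knot, so only the slope changes across it.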

  8. Continuous and discontinuous transitions to synchronization.

    Science.gov (United States)

    Wang, Chaoqing; Garnier, Nicolas B

    2016-11-01

    We describe how the transition to synchronization in a system of globally coupled Stuart-Landau oscillators changes from continuous to discontinuous when the nature of the coupling is moved from diffusive to reactive. We explain this drastic qualitative change as resulting from the co-existence of a particular synchronized macrostate together with the trivial incoherent macrostate, in a range of parameter values for which the latter is linearly stable. In contrast to the paradigmatic Kuramoto model, this particular state observed at the synchronization transition contains a finite, non-vanishing number of synchronized oscillators, which results in a discontinuous transition. We consider successively two situations where either a fully synchronized state or a partially synchronized state exists at the transition. Thermodynamic limit and finite size effects are briefly discussed, as well as connections with recently observed discontinuous transitions.

  9. Actor Bonds in Situations of Discontinuous Business Activities

    DEFF Research Database (Denmark)

    Skaates, Maria Anne

    2000-01-01

    Demand in many industrial buying situations, e.g. project purchases or procurement related to virtual organizations, is discontinuous. In situations of discontinuity, networks are often more of an ad hoc informational and social nature, as strong activity and resource links are not present....... Furthermore the governance structure of markets characterized by discontinuous business activities is either that of the "socially constructed market" (Skaates, 2000) or that of the (socially constructed) network (Håkansson and Johanson, 1993). Additionally relationships and actor bonds vary substantially...

  10. A gentle introduction to quantile regression for ecologists

    Science.gov (United States)

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
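
The estimator behind this primer minimizes the asymmetric "pinball" loss: the tau-th conditional quantile is the linear fit that penalizes under-predictions with weight tau and over-predictions with weight 1-tau. A hedged sketch on heteroscedastic toy data, using a generic optimizer rather than the linear-programming formulation production packages use:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 300)
# Spread grows with x, so upper and lower quantiles have different slopes.
y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.2 * x)

def pinball_fit(tau):
    """Fit intercept and slope of the tau-th conditional quantile."""
    def loss(b):
        r = y - (b[0] + b[1] * x)
        return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x

b10 = pinball_fit(0.1)   # 10th-percentile line
b90 = pinball_fit(0.9)   # 90th-percentile line
```

With heteroscedastic errors the 90th-percentile slope exceeds the 10th-percentile slope, the kind of signal a mean regression would miss entirely.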

  11. Rotational discontinuities and the structure of the magnetopause

    International Nuclear Information System (INIS)

    Swift, D.W.; Lee, L.C.

    1983-01-01

    Symmetric and asymmetric rotational discontinuities are studied by means of a one-dimensional computer simulation and by single-particle trajectory calculations. The numerical simulations show the symmetric rotation to be stable for both ion and electron senses of rotation with a thickness of the order of a few ion gyroradii when the rotation angle of the tangential field is 180° or less. Larger rotation angles tend to be unstable. In an expansive discontinuity, when the magnetic field on the downstream side of the discontinuity is larger, an expanding transition layer separating the high-field from a low-field region develops on the downstream side, and a symmetric rotational discontinuity forms at the upstream edge. The implication of these results for magnetopause structure and energy flow through the magnetopause is described.

  12. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    Science.gov (United States)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To that end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost: no multidimensional numerical integration or excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution; incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases, the direct and the reflected wave, corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
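
The regression machinery above reduces, in the conjugate Gaussian case, to the closed-form posterior mean and covariance under a chosen kernel. A bare-bones 1-D illustration with the squared exponential kernel mentioned in the abstract (this is generic GP regression, not a travel-time operator; the data and hyperparameters are assumptions):

```python
import numpy as np

def sq_exp(a, b, length=1.0, sigma=1.0):
    """Squared exponential (RBF) kernel between point sets a and b."""
    d = a[:, None] - b[None, :]
    return sigma ** 2 * np.exp(-0.5 * (d / length) ** 2)

x_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.sin(x_obs)
noise = 1e-4                              # observation noise variance

x_new = np.linspace(0, 4, 41)
K = sq_exp(x_obs, x_obs) + noise * np.eye(x_obs.size)
K_s = sq_exp(x_new, x_obs)
K_ss = sq_exp(x_new, x_new)

alpha = np.linalg.solve(K, y_obs)
post_mean = K_s @ alpha                                # posterior mean
post_cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)      # posterior covariance
```

Updating with one observation at a time, as the abstract suggests, simply applies these formulas repeatedly with the previous posterior as the new prior.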

  13. [Discontinuation of depression treatment from the perspective of suicide prevention].

    Science.gov (United States)

    Cho, Yoshinori

    2012-01-01

    It is assumed that discontinuation of treatment for depression may increase the risk of suicide. A population-based register study in Denmark did not find a lower risk among people over age 50 who continued antidepressant treatment in comparison with those who discontinued it at an early stage. This result, however, should not be taken at face value to mean that early discontinuation of treatment does not increase the risk of suicide, because the study lacked information such as psychiatric diagnoses, severity of the depressive state, and reasons for discontinuation. It is safest for clinicians to aim at preventing discontinuation of treatment. Particularly in Japan and South Korea, where there is a sociocultural climate of tolerance for suicide, suicide can occur in milder depressive states, and discontinuation of treatment should be taken more seriously than in Western countries.

  14. SEPARATION PHENOMENA LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

    Full Text Available This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon. Separation generates bias in the estimation, yields different interpretations of the estimates under the different statistical tests (Wald, likelihood ratio, and score), and produces different estimates under the different iterative methods (Newton-Raphson and Fisher scoring). The paper also presents an example that demonstrates the direct implications for validation of the model and of the variables, and for estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we briefly present the Firth correction to circumvent the phenomenon of separation.
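
Separation is easy to reproduce: when a predictor splits the outcomes perfectly, the logistic likelihood has no finite maximizer and Newton-Raphson pushes the slope toward infinity. A compact demonstration on a six-point dataset (no Firth correction applied; the data are invented):

```python
import numpy as np

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])   # x = 0 separates the classes perfectly
X = np.column_stack([np.ones_like(x), x])

def newton_logistic(X, y, iters):
    """Plain Newton-Raphson for logistic regression (no penalty)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)                    # IRLS weights
        grad = X.T @ (y - p)
        H = X.T @ (X * W[:, None])         # observed information
        b = b + np.linalg.solve(H, grad)
    return b

b5 = newton_logistic(X, y, 5)
b15 = newton_logistic(X, y, 15)   # slope keeps growing without bound
```

Standard software reports "convergence" here only because step sizes shrink; the Wald statistic collapses as the standard error diverges, which is exactly the pathology the Firth correction addresses.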

  15. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression... and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric...

  16. Time to discontinuation of atypical versus typical antipsychotics in the naturalistic treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Swartz Marvin

    2006-02-01

    Full Text Available Abstract Background There is an ongoing debate over whether atypical antipsychotics are more effective than typical antipsychotics in the treatment of schizophrenia. This naturalistic study compares atypical and typical antipsychotics on time to all-cause medication discontinuation, a recognized index of medication effectiveness in the treatment of schizophrenia. Methods We used data from a large, 3-year, observational, non-randomized, multisite study of schizophrenia, conducted in the U.S. between 7/1997 and 9/2003. Patients who were initiated on oral atypical antipsychotics (clozapine, olanzapine, risperidone, quetiapine, or ziprasidone) or oral typical antipsychotics (low, medium, or high potency) were compared on time to all-cause medication discontinuation for 1 year following initiation. Treatment group comparisons were based on treatment episodes using 3 statistical approaches (Kaplan-Meier survival analysis, Cox proportional hazards regression, and propensity score-adjusted bootstrap resampling methods). To further assess the robustness of the findings, sensitivity analyses were performed, including the use of (a) only 1 medication episode for each patient, the one with which the patient was treated first, and (b) all medication episodes, including those simultaneously initiated on more than 1 antipsychotic. Results Mean time to all-cause medication discontinuation was longer on atypical (N = 1132, 256.3 days) compared to typical antipsychotics (N = 534, 197.2 days; p ... Conclusion In the usual care of schizophrenia patients, time to medication discontinuation for any cause appears significantly longer for atypical than typical antipsychotics regardless of the typical antipsychotic potency level. Findings were primarily driven by clozapine and olanzapine, and to a lesser extent by risperidone. Furthermore, only clozapine and olanzapine therapy showed consistently and significantly longer treatment duration compared to perphenazine, a medium

  17. Estimation of residual stress in welding of dissimilar metals at nuclear power plants using cascaded support vector regression

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Young Do; Yoo, Kwae Hwan; Na, Man Gyun [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-06-15

    Residual stress is a critical element in determining the integrity of parts and the lifetime of welded structures. It is necessary to estimate the residual stress of a welding zone because residual stress is a major reason for the generation of primary water stress corrosion cracking in nuclear power plants. That is, it is necessary to estimate the distribution of the residual stress in welding of dissimilar metals under a variety of welding conditions. In this study, a cascaded support vector regression (CSVR) model was presented to estimate the residual stress of a welding zone. The CSVR model was serially and consecutively structured in terms of SVR modules. Using numerical data obtained from finite element analysis by a subtractive clustering method, learning data that explained the characteristic behavior of the residual stress of a welding zone were selected to optimize the proposed model. The results suggest that the CSVR model yielded a better estimation performance when compared with a classic SVR model.

  18. Risk of discontinuation of Advanced Therapy Medicinal Products clinical trials.

    Science.gov (United States)

    Hanna, Eve; Rémuzat, Cecile; Auquier, Pascal; Toumi, Mondher

    2016-01-01

    Advanced therapy medicinal products (ATMPs) constitute a class of innovative products that encompasses gene therapy, somatic cell therapy, and tissue-engineered products (TEP). There is increased investment by commercial and non-commercial sponsors in this field and a growing number of ATMP randomized clinical trials (RCTs) and patients enrolled in such trials. RCTs generate data to prove the efficacy of a new therapy, but the discontinuation of RCTs wastes scarce resources. Our objective is to identify the number and characteristics of discontinued ATMP trials in order to evaluate the rate of discontinuation. We searched for ATMP trials conducted between 1999 and June 2015 using three databases: Clinicaltrials.gov, the International Clinical Trials Registry Platform (ICTRP), and the EU Drug Regulating Authorities Clinical Trials (EudraCT). We selected the ATMP trials after elimination of duplicates. We identified the disease areas and classified the sponsors as commercial or non-commercial organizations. We classified ATMPs by type and trial status, that is, ongoing, completed, terminated, discontinued, and prematurely ended. Then, we calculated the rate of discontinuation. Between 1999 and June 2015, 143 withdrawn, terminated, or prematurely ended ATMP clinical trials were identified. Between 1999 and June 2013, 474 ongoing and completed clinical trials were identified. Therefore, the rate of discontinuation of ATMP trials is 23.18%, similar to that for non-ATMP drugs in development. The probability of discontinuation is, respectively, 27.35, 16.28, and 16.34% for cell therapies, gene therapies, and TEP. The highest discontinuation rate is for oncology (43%), followed by cardiology (19.2%). It is almost the same for commercial and non-commercial sponsors; therefore, the discontinuation reason may not be financially driven. No failure risk rate per development phase is available for ATMPs. The discontinuation rate may prove helpful when assessing the

  19. 41 CFR 109-39.105 - Discontinuance or curtailment of service.

    Science.gov (United States)

    2010-07-01

    AVIATION, TRANSPORTATION, AND MOTOR VEHICLES; 39—INTERAGENCY FLEET MANAGEMENT SYSTEMS; 39.1—Establishment, Modification, and Discontinuance of Interagency Fleet Management Systems; § 109-39.105 Discontinuance or curtailment of service. 41 Public Contracts and Property Management 3, 2010-07-01 edition.

  20. What happens when people discontinue taking medications? Lessons from COMBINE.

    Science.gov (United States)

    Stout, Robert L; Braciszewski, Jordan M; Subbaraman, Meenakshi Sabina; Kranzler, Henry R; O'Malley, Stephanie S; Falk, Daniel

    2014-12-01

    We use intensive longitudinal data methods to illuminate processes affecting patients' drinking in relation to the discontinuation of medications within an alcohol treatment study. Although previous work has focused on broad measures of medication adherence, we focus on dynamic changes in drinking both before and after patients discontinue. We conducted secondary data analyses using the COMBINE (Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence) study, focused on participants who discontinued medications prior to the planned end of treatment. Using an interrupted time-series analysis, we analysed drinking in the weeks before and after discontinuation and also studied outcomes at the end of the COMBINE follow-up. United States of America. We describe the subsample of COMBINE participants who discontinued medications (n = 450), and compare them with those who were medication-adherent (n = 559) and with those who discontinued but had substantial missing data (n = 217). The primary outcomes were percentage of days abstinent (PDA) and percentage of heavy drinking days (PHDD). Medication adherence data were used to approximate the date of discontinuation. For many patients, an increase in drinking began weeks before discontinuation (PDA: F(1,4803) = 19.07, P < 0.001; PHDD: F(1,4804) = 8.58, P = 0.003) then escalated at discontinuation (PDA: F(1,446) = 5.05, P = 0.025; PHDD: F(1,446) = 4.52, P = 0.034). Among other effects, the amount of change was moderated by the reason for discontinuation (e.g. adverse event; PDA: F(2,4803) = 3.85, P = 0.021; PHDD: F(2,4804) = 5.36, P = 0.005) and also whether it occurred in the first or second half of treatment (PDA: F(1,4803) = 5.23, P = 0.022; PHDD: F(1,4804) = 8.79, P = 0.003). A patient's decision to stop taking medications during alcohol treatment appears to take place during a weeks-long process of disengagement from treatment. Patients who discontinue medications early in treatment or without
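
The interrupted time-series idea above is usually implemented as a segmented regression: outcome on time, a post-event indicator (level change), and time since the event (slope change). A sketch on simulated weekly data, not COMBINE; the week of discontinuation and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
weeks = np.arange(1, 17)
disc = 8                                     # week of discontinuation (assumed)
post = (weeks > disc).astype(float)          # level-change indicator
since = np.clip(weeks - disc, 0, None)       # slope-change term

# Simulated % heavy drinking days: upward drift before discontinuation,
# a jump at discontinuation, and a steeper rise afterwards.
phdd = (10 + 0.5 * weeks + 8 * post + 1.5 * since
        + rng.normal(0, 1, weeks.size))

X = np.column_stack([np.ones_like(weeks), weeks, post, since]).astype(float)
beta, *_ = np.linalg.lstsq(X, phdd, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The pre-discontinuation drift term is what lets such a model show drinking escalating weeks before the medication actually stops, as reported above.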

  1. Discontinuous Petrov–Galerkin method with optimal test functions for thin-body problems in solid mechanics

    KAUST Repository

    Niemi, Antti H.

    2011-02-01

    We study the applicability of the discontinuous Petrov-Galerkin (DPG) variational framework for thin-body problems in structural mechanics. Our numerical approach is based on discontinuous piecewise polynomial finite element spaces for the trial functions and approximate, local computation of the corresponding 'optimal' test functions. In the Timoshenko beam problem, the proposed method is shown to provide the best approximation in an energy-type norm which is equivalent to the L2-norm for all the unknowns, uniformly with respect to the thickness parameter. The same formulation remains valid also for the asymptotic Euler-Bernoulli solution. As another one-dimensional model problem we consider the modelling of the so-called basic edge effect in shell deformations. In particular, we derive a special norm for the test space which leads to a robust method in terms of the shell thickness. Finally, we demonstrate how an a posteriori error estimator arising directly from the discontinuous variational framework can be utilized to generate an optimal hp-mesh for resolving the boundary layer. © 2010 Elsevier B.V.

  2. Area Regge calculus and discontinuous metrics

    International Nuclear Information System (INIS)

    Wainwright, Chris; Williams, Ruth M

    2004-01-01

    Taking the triangle areas as independent variables in the theory of Regge calculus can lead to ambiguities in the edge lengths, which can be interpreted as discontinuities in the metric. We construct solutions to area Regge calculus using a triangulated lattice and find that on a spacelike or timelike hypersurface no such discontinuity can arise. On a null hypersurface however, we can have such a situation and the resulting metric can be interpreted as a so-called refractive wave

  3. Estimated prevalence of halitosis: a systematic review and meta-regression analysis.

    Science.gov (United States)

    Silva, Manuela F; Leite, Fábio R M; Ferreira, Larissa B; Pola, Natália M; Scannapieco, Frank A; Demarco, Flávio F; Nascimento, Gustavo G

    2018-01-01

    This study aims to conduct a systematic review to determine the prevalence of halitosis in adolescents and adults. Electronic searches were performed using four different databases without restrictions: PubMed, Scopus, Web of Science, and SciELO. Population-based observational studies that provided data about the prevalence of halitosis in adolescents and adults were included. Additionally, meta-analyses, meta-regression, and sensitivity analyses were conducted to synthesize the evidence. A total of 584 articles were initially found and considered for title and abstract evaluation. Thirteen articles met inclusion criteria. The combined prevalence of halitosis was found to be 31.8% (95% CI 24.6-39.0%). Methodological aspects such as the year of publication and the socioeconomic status of the country where the study was conducted seemed to influence the prevalence of halitosis. Our results demonstrated that the estimated prevalence of halitosis was 31.8%, with high heterogeneity between studies. The results suggest a worldwide trend towards a rise in halitosis prevalence. Given the high prevalence of halitosis and its complex etiology, dental professionals should be aware of their roles in halitosis prevention and treatment.
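
The pooled 31.8% figure above comes from a random-effects meta-analysis of proportions. A minimal sketch of the usual pipeline (logit transform, DerSimonian-Laird between-study variance, inverse-variance weights, back-transform); the five study counts below are invented for illustration, not the thirteen included studies:

```python
import numpy as np

events = np.array([60, 150, 90, 40, 200])    # halitosis cases per study
n = np.array([200, 400, 300, 160, 700])      # study sample sizes
p = events / n
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (n - events)          # variance of each logit

w = 1 / var                                  # fixed-effect weights
mu_fe = np.sum(w * logit) / np.sum(w)
Q = np.sum(w * (logit - mu_fe) ** 2)         # Cochran's heterogeneity Q
k = len(p)
# DerSimonian-Laird estimate of between-study variance.
tau2 = max(0.0, (Q - (k - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)                      # random-effects weights
mu_re = np.sum(w_re * logit) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-mu_re))            # back to a proportion
```

The reported high heterogeneity corresponds to a large tau-squared, which widens the pooled confidence interval relative to a fixed-effect analysis.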

  4. Physics-based process model approach for detecting discontinuity during friction stir welding

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Amber; Pfefferkorn, Frank E.; Duffie, Neil A.; Ferrier, Nicola J.; Smith, Christopher B.; Malukhin, Kostya; Zinn, Michael

    2015-02-12

    The goal of this work is to develop a method for detecting the creation of discontinuities during friction stir welding. This in situ weld monitoring method could significantly reduce the need for post-process inspection. A process force model and a discontinuity force model were created based on the state-of-the-art understanding of flow around a friction stir welding (FSW) tool. These models are used to predict the FSW forces and size of discontinuities formed in the weld. Friction stir welds with discontinuities and welds without discontinuities were created, and the differences in force dynamics were observed. In this paper, discontinuities were generated by reducing the tool rotation frequency and increasing the tool traverse speed in order to create "cold" welds. Experimental force data for welds with discontinuities and welds without discontinuities compared favorably with the predicted forces. The model currently overpredicts the discontinuity size.

  5. Accountability Accentuates Interindividual-Intergroup Discontinuity by Enforcing Parochialism

    NARCIS (Netherlands)

    Wildschut, T.; Van Horen, F.; Hart, C.

    2015-01-01

    Interindividual-intergroup discontinuity is the tendency for relations between groups to be more competitive than relations between individuals. We examined whether the discontinuity effect arises in part because group members experience normative pressure to favor the ingroup (parochialism).

  6. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    Science.gov (United States)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  7. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    Science.gov (United States)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Moreover, a battery with a dynamic discharging profile is used to verify the robustness and reliability of the method.
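
    The idea of regressing SOH on charging-curve features rather than cycle number can be sketched as follows. This is a minimal illustration with synthetic data and generic feature names, not the paper's four specific parameters or its modified covariance function; a standard RBF-plus-noise kernel from scikit-learn stands in for the customized one.

```python
# Hedged sketch: GPR mapping charging-curve features -> SOH (%).
# Features and their relation to SOH are synthetic stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: 4 normalized charging-curve features vs. SOH.
n = 80
X = rng.uniform(0.0, 1.0, size=(n, 4))
soh = 100.0 - 25.0 * X[:, 0] - 10.0 * X[:, 1] ** 2 + rng.normal(0, 0.5, n)

# Anisotropic RBF kernel (one length scale per feature) plus noise term.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X, soh)

# GPR returns a point estimate and an uncertainty for each new battery state.
X_new = rng.uniform(0.0, 1.0, size=(10, 4))
mean, std = gpr.predict(X_new, return_std=True)
```

    The per-prediction standard deviation is the practical payoff of GPR here: it quantifies how reliable each SOH estimate is, which cycle-count regressions do not provide.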

  8. Accountability Accentuates Interindividual-Intergroup Discontinuity by Enforcing Parochialism.

    Science.gov (United States)

    Wildschut, Tim; van Horen, Femke; Hart, Claire

    2015-01-01

    Interindividual-intergroup discontinuity is the tendency for relations between groups to be more competitive than relations between individuals. We examined whether the discontinuity effect arises in part because group members experience normative pressure to favor the ingroup (parochialism). Building on the notion that accountability enhances normative pressure, we hypothesized that the discontinuity effect would be larger when accountability is present (compared to absent). A prisoner's dilemma game experiment supported this prediction. Specifically, intergroup (compared to interindividual) interaction activated an injunctive ingroup-favoring norm, and accountability enhanced the influence of this norm on competitive behavior.

  9. A Simple Stochastic Differential Equation with Discontinuous Drift

    DEFF Research Database (Denmark)

    Simonsen, Maria; Leth, John-Josef; Schiøler, Henrik

    2013-01-01

    In this paper we study solutions to stochastic differential equations (SDEs) with discontinuous drift. We apply two approaches, the Euler-Maruyama method and the Fokker-Planck equation, and show that a candidate density function based on the Euler-Maruyama method approximates a candidate density function based on the stationary Fokker-Planck equation. Furthermore, we introduce a smooth function which approximates the discontinuous drift and apply the Euler-Maruyama method and the Fokker-Planck equation with this input. The point of departure for this work is a particular SDE with discontinuous drift.
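
    The two ingredients of the abstract, an Euler-Maruyama scheme and a smooth approximation of a discontinuous drift, can be sketched generically. The drift -sign(x) and its tanh smoothing below are illustrative choices, not necessarily the particular SDE studied in the paper.

```python
# Euler-Maruyama for dX = b(X) dt + sigma dW with a discontinuous drift
# b(x) = -sign(x), and a smoothed drift -tanh(x/eps) for comparison.
import numpy as np

def euler_maruyama(drift, x0, sigma, dt, n_steps, rng):
    """Simulate one path X_0, X_dt, ..., X_{n_steps*dt}."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * dw
    return x

def discontinuous_drift(x):
    return -np.sign(x)                             # jump at x = 0

def smoothed_drift(x, eps=0.05):
    return -np.tanh(x / eps)                       # smooth approximation

rng = np.random.default_rng(1)
path_d = euler_maruyama(discontinuous_drift, 1.0, 0.3, 1e-3, 20000, rng)
path_s = euler_maruyama(smoothed_drift, 1.0, 0.3, 1e-3, 20000, rng)
```

    A histogram of the late-time samples of either path approximates the stationary density, which for this drift concentrates sharply around the discontinuity at zero.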

  10. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    Science.gov (United States)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

    The estimation of the Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources is necessary for an appropriate analysis of the system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE we need to know the spatial distribution of the snowpack and snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and airborne/spaceborne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate the snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points from 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study the snowpack distribution using different kinds of explicative variables: geographic, topographic, and climatic. Forty explicative variables were considered: the longitude, latitude, altitude, slope, eastness, northness, radiation, and maximum upwind slope, together with some mathematical transformations of each of them [Ln(v), (v)^-1, (v)^2, (v)^0.5]. Eight different regression model structures were tested (combining 1, 2, 3, or 4 explicative variables): Y=B0+B1Xi (1); Y=B0+B1XiXj (2); Y=B0+B1Xi+B2Xj (3); Y=B0+B1Xi+B2XjXl (4); Y=B0+B1XiXk+B2XjXl (5); Y=B0+B1Xi+B2Xj+B3Xl (6); Y=B0+B1Xi+B2Xj+B3XlXk (7); Y=B0+B1Xi+B2Xj+B3Xl+B4Xk (8), where Y is the snow depth, (Xi, Xj, Xl, Xk) are the prediction variables (any of the 40 variables), and (B0, B1, B2, B3, B4) are the coefficients to be estimated. The ground data are employed to calibrate the multiple regressions.
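
    The exhaustive search over candidate regression structures can be sketched as below. This is a simplified illustration covering only the additive structures (1), (3), and (6) with three made-up explanatory variables, fit by ordinary least squares and ranked by R^2; the paper's full search also includes product terms and 40 variables.

```python
# Sketch of a structure search: fit every combination of 1-3 explanatory
# variables by OLS and keep the structure with the best R^2.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 200
variables = {                                   # illustrative stand-ins
    "altitude": rng.uniform(1000, 3000, n),
    "radiation": rng.uniform(0, 1, n),
    "northness": rng.uniform(-1, 1, n),
}
# Synthetic snow depth driven mainly by altitude and aspect.
y = 0.002 * variables["altitude"] + 0.5 * variables["northness"] \
    + rng.normal(0, 0.1, n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])  # intercept B0
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

results = {}
names = list(variables)
for k in (1, 2, 3):                             # structures (1), (3), (6)
    for combo in itertools.combinations(names, k):
        X = np.column_stack([variables[v] for v in combo])
        results[combo] = r_squared(X, y)

best = max(results, key=results.get)            # best-fitting structure
```

    In practice one would rank structures by cross-validated error rather than in-sample R^2, since R^2 never decreases when variables are added.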

  11. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Optimal convergence of discontinuous Galerkin methods for continuum modeling of supply chain networks

    KAUST Repository

    Zhang, Shuhua; Sun, Shuyu; Yang, Hongtao

    2014-01-01

    A discontinuous Galerkin method is considered to simulate materials flow in a supply chain network problem which is governed by a system of conservation laws. By means of a novel interpolation and superclose analysis technique, the optimal and superconvergence error estimates are established under two physically meaningful assumptions on the connectivity matrix. Numerical examples are presented to validate the theoretical results. © 2014 Elsevier Ltd. All rights reserved.

  13. Optimal convergence of discontinuous Galerkin methods for continuum modeling of supply chain networks

    KAUST Repository

    Zhang, Shuhua

    2014-09-01

    A discontinuous Galerkin method is considered to simulate materials flow in a supply chain network problem which is governed by a system of conservation laws. By means of a novel interpolation and superclose analysis technique, the optimal and superconvergence error estimates are established under two physically meaningful assumptions on the connectivity matrix. Numerical examples are presented to validate the theoretical results. © 2014 Elsevier Ltd. All rights reserved.

  14. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
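
    Three of the five fits discussed can be computed directly from the sample moments. The bisector slope below uses the formula from Isobe et al. (1990); the data are synthetic.

```python
# OLS(Y|X), OLS(X|Y) expressed as dY/dX, and the OLS bisector slope.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 500)
y = 2.0 * x + rng.normal(0, 1, 500)   # true slope 2 with scatter in y

xm, ym = x - x.mean(), y - y.mean()
sxx, syy, sxy = (xm * xm).sum(), (ym * ym).sum(), (xm * ym).sum()

b1 = sxy / sxx    # OLS regression of Y on X (biased low with scatter)
b2 = syy / sxy    # OLS regression of X on Y, in Y-vs-X slope units
# Bisector of the two OLS lines (Isobe et al. 1990):
b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
```

    As the abstract notes, the bisector treats the two variables symmetrically: its line always lies between the two OLS lines, which is why it is preferred when neither variable is naturally the "independent" one.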

  15. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within the MIDAS regression framework put forward by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and the statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of application of MIDAS regression.
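
    The core MIDAS building block, a parsimonious weight function that aggregates a high-frequency regressor into one low-frequency term, can be sketched outside R. The exponential Almon weights below are one of the standard MIDAS weighting schemes; parameter values and the toy series are illustrative, and this is not the midasr package's implementation.

```python
# Exponential Almon lag weights and a MIDAS-style aggregation step.
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon weights w_j, j = 0..n_lags-1."""
    j = np.arange(n_lags)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

def midas_aggregate(x_high, weights):
    """Weighted sum of the most recent len(weights) high-frequency values."""
    n_lags = len(weights)
    return np.dot(weights, x_high[-n_lags:][::-1])  # lag 0 = most recent

w = exp_almon_weights(theta1=0.1, theta2=-0.05, n_lags=12)
x_monthly = np.linspace(1.0, 12.0, 12)              # toy monthly series
quarterly_term = midas_aggregate(x_monthly, w)
```

    Because the whole lag polynomial is governed by just two theta parameters, the low-frequency regression stays parsimonious no matter how many high-frequency lags enter, which is the point of the MIDAS framework.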

  16. Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications

    Directory of Open Access Journals (Sweden)

    Guoqi Qian

    2016-01-01

    Regression clustering is a combination of unsupervised and supervised statistical learning and data mining that is found in a wide range of applications, including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to the least squares and robust statistical methods. We also provide a model-selection-based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method.
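
    The iterative partition-and-regression loop can be sketched as a Lloyd-style "k-regressions" procedure: assign each point to the hyperplane with the smallest squared residual, then refit each hyperplane by least squares. This is a generic sketch with random restarts, not the paper's exact procedure or its robust variants.

```python
# Minimal iterative partition-and-regression (k hyperplanes).
import numpy as np

def fit_hyperplanes(X1, y, labels, betas):
    """Refit each cluster's regression hyperplane by least squares."""
    for c in range(len(betas)):
        mask = labels == c
        if mask.sum() >= X1.shape[1]:
            betas[c], *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
    return betas

def regression_clustering(X, y, k=2, n_iter=50, n_restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    X1 = np.column_stack([np.ones(len(y)), X])
    best = None
    for _ in range(n_restarts):
        # Initialize each hyperplane from a small random subset of points.
        idx = rng.choice(len(y), size=(k, X1.shape[1] + 1))
        betas = np.array([np.linalg.lstsq(X1[i], y[i], rcond=None)[0]
                          for i in idx])
        for _ in range(n_iter):
            resid2 = (y[:, None] - X1 @ betas.T) ** 2
            labels = resid2.argmin(axis=1)     # assign to closest hyperplane
            betas = fit_hyperplanes(X1, y, labels, betas)
        sse = resid2[np.arange(len(y)), labels].sum()
        if best is None or sse < best[0]:      # keep the best restart
            best = (sse, labels, betas.copy())
    return best[1], best[2]

# Two crossing lines with small noise.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 300)
group = rng.integers(0, 2, 300)
y = np.where(group == 0, 2.0 * x + 1.0, -2.0 * x - 1.0) \
    + rng.normal(0, 0.1, 300)
labels, betas = regression_clustering(x[:, None], y, k=2)
```

    Random restarts matter because the hard assignment step can get stuck in local optima; the number of clusters k would be chosen by a model-selection criterion as the abstract describes.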

  17. Cytogenetics observation and radiation influence evaluation of exposed persons in a discontinuous radiation exposure event

    International Nuclear Information System (INIS)

    Chen Ying; Liu Xiulin; Yang Guoshan; Ge Shili; Jin Cuizhen; Yao Bo

    2003-01-01

    The cytogenetics results and dose estimation for exposed and related persons in a discontinuous radiation exposure event are reported in this paper. According to the dicentric-plus-ring and micronucleus results combined with clinical data, a slight (moderate) degree of subacute radiation sickness was diagnosed in the victim. Some of the 52 examined persons had been exposed to radiation to a certain degree.

  18. Regression models for explaining and predicting concentrations of organochlorine pesticides in fish from streams in the United States

    Science.gov (United States)

    Nowell, Lisa H.; Crawford, Charles G.; Gilliom, Robert J.; Nakagaki, Naomi; Stone, Wesley W.; Thelin, Gail; Wolock, David M.

    2009-01-01

    Empirical regression models were developed for estimating concentrations of dieldrin, total chlordane, and total DDT in whole fish from U.S. streams. Models were based on pesticide concentrations measured in whole fish at 648 stream sites nationwide (1992-2001) as part of the U.S. Geological Survey's National Water Quality Assessment Program. Explanatory variables included fish lipid content, estimates (or surrogates) representing historical agricultural and urban sources, watershed characteristics, and geographic location. Models were developed using Tobit regression methods appropriate for data with censoring. Typically, the models explain approximately 50 to 70% of the variability in pesticide concentrations measured in whole fish. The models were used to predict pesticide concentrations in whole fish for streams nationwide using the U.S. Environmental Protection Agency's River Reach File 1 and to estimate the probability that whole-fish concentrations exceed benchmarks for protection of fish-eating wildlife. Predicted concentrations were highest for dieldrin in the Corn Belt, Texas, and scattered urban areas; for total chlordane in the Corn Belt, Texas, the Southeast, and urbanized Northeast; and for total DDT in the Southeast, Texas, California, and urban areas nationwide. The probability of exceeding wildlife benchmarks for dieldrin and chlordane was predicted to be low for most U.S. streams. The probability of exceeding wildlife benchmarks for total DDT is higher but varies depending on the fish taxon and on the benchmark used. Because the models in the present study are based on fish data collected during the 1990s and organochlorine pesticide residues in the environment continue to decline decades after their uses were discontinued, these models may overestimate present-day pesticide concentrations in fish. © 2009 SETAC.
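
    Tobit regression handles responses reported as "less than a detection threshold" by mixing the normal density (for detected values) with the normal CDF (for censored ones) in the likelihood. The sketch below fits such a model by maximum likelihood on synthetic data; it illustrates the estimation idea only, not the USGS models or their variables.

```python
# Censored-normal (Tobit-type) regression fit by maximum likelihood,
# for responses left-censored at a detection limit.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
n = 500
x = rng.normal(0, 1, n)
y_latent = 1.0 + 0.8 * x + rng.normal(0, 0.5, n)  # true log-concentration
limit = 1.0                                        # detection threshold
censored = y_latent < limit
y = np.where(censored, limit, y_latent)            # reported as "<limit"

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    mu = b0 + b1 * x
    ll_obs = stats.norm.logpdf(y[~censored], mu[~censored], sigma)
    ll_cen = stats.norm.logcdf((limit - mu[censored]) / sigma)
    return -(ll_obs.sum() + ll_cen.sum())

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

    Ignoring the censoring (e.g., substituting the detection limit and running OLS) would bias both the slope and the error variance; the likelihood above avoids that.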

  19. A longitudinal examination of alcohol pharmacotherapy adoption in substance use disorder treatment programs: patterns of sustainability and discontinuation.

    Science.gov (United States)

    Abraham, Amanda J; Knudsen, Hannah K; Roman, Paul M

    2011-07-01

    The objectives of this study were to (a) identify the patterns of disulfiram (Antabuse) and tablet naltrexone (Revia) adoption over a 48-month period in a nationally representative sample of privately funded programs that deliver substance use disorder treatment; (b) examine predictors of sustainability, later adoption, discontinuation, and nonadoption of disulfiram and tablet naltrexone; and (c) measure reasons for medication discontinuation. Two waves of data were collected via face-to-face structured interviews with 223 program administrators. These data demonstrated that adoption of medications for alcohol use disorders (AUDs) was a dynamic process. Although nonadoption was the most common pattern, approximately 20% of programs sustained use of the AUD medications and 30% experienced organizational change in adoption over the study period. Bivariate multinomial logistic regression models revealed that organizational characteristics were associated with sustainability including location in a hospital setting, program size, accreditation, revenues from private insurance, referrals from the criminal justice system, number of medical staff, and use of selective serotonin reuptake inhibitors at baseline. Two patterns of discontinuation were found: Programs either discontinued use of all substance use disorder medications or replaced disulfiram/tablet naltrexone with a newer AUD medication. These findings suggest that adoption of AUD medications may be positively affected by pressure from accreditation bodies, partnering with primary care physicians, medication-specific training for medical staff, greater availability of resources to cover the costs associated with prescribing AUD medications, and amending criminal justice contracts to include support for AUD medication use.

  20. Patient cost-sharing, socioeconomic status, and children’s health care utilization

    DEFF Research Database (Denmark)

    Paul, Alexander; Nilsson, Anton

    2018-01-01

    This paper estimates the effect of cost-sharing on the demand for children's and adolescents' use of medical care. We use a large population-wide registry dataset including detailed information on contacts with the health care system as well as family income. Two different estimation strategies are used: a regression discontinuity design exploiting age thresholds above which fees are charged, and difference-in-differences models exploiting policy changes. We also estimate combined regression discontinuity difference-in-differences models that take into account discontinuities around age thresholds caused by factors other than cost-sharing. We find that when care is free of charge, individuals increase their number of doctor visits by 5–10%. Effects are similar in middle childhood and adolescence, and are driven by those from low-income families. The differences across income groups cannot …
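
    The regression discontinuity part of such a design can be sketched in a few lines: fit separate local linear regressions on each side of the age cutoff and take the jump in the fitted values at the cutoff as the effect estimate. All numbers below (cutoff age, bandwidth, effect size) are synthetic illustrations, not the paper's data or results.

```python
# Sharp RD sketch: jump in doctor visits at an age cutoff where fees start.
import numpy as np

rng = np.random.default_rng(6)
n = 4000
age = rng.uniform(10, 26, n)
cutoff = 18.0                               # fees charged above this age
free_care = age < cutoff                    # "treatment": free care below
visits = 2.0 + 0.1 * age + 0.4 * free_care + rng.normal(0, 0.5, n)

h = 2.0                                     # bandwidth around the cutoff
left = (age >= cutoff - h) & (age < cutoff)
right = (age >= cutoff) & (age <= cutoff + h)

def fit_at_cutoff(mask):
    """Local linear fit; intercept = predicted outcome at the cutoff."""
    A = np.column_stack([np.ones(mask.sum()), age[mask] - cutoff])
    beta, *_ = np.linalg.lstsq(A, visits[mask], rcond=None)
    return beta[0]

rd_effect = fit_at_cutoff(left) - fit_at_cutoff(right)  # jump at age 18
```

    The combined RD-DiD idea in the abstract amounts to differencing this jump against the same jump in a period (or group) where no fee change occurs, so that discontinuities at the age threshold unrelated to cost-sharing cancel out.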

  1. Estimation of parameters of constant elasticity of substitution production functional model

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivative; (ii) circumventing large rounding errors by local linear approximations; (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
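
    Direct nonlinear least-squares estimation of a two-input CES function Q = A(d K^-r + (1-d) L^-r)^(-1/r) can be sketched as below with synthetic data. This illustrates why CES estimation is delicate rather than reproducing any estimator from the cited works: as r approaches 0 the expression has a removable discontinuity (the Cobb-Douglas limit), which is exactly the problem Henningen et al.'s limit-based fix addresses; the bounds below simply keep the optimizer away from r = 0.

```python
# CES production function fit by nonlinear least squares (scipy).
import numpy as np
from scipy.optimize import curve_fit

def ces(inputs, A, d, r):
    """Q = A * (d*K^-r + (1-d)*L^-r)^(-1/r); ill-conditioned near r = 0."""
    K, L = inputs
    return A * (d * K ** (-r) + (1.0 - d) * L ** (-r)) ** (-1.0 / r)

rng = np.random.default_rng(7)
K = rng.uniform(1.0, 10.0, 300)
L = rng.uniform(1.0, 10.0, 300)
# Synthetic output with multiplicative noise; true (A, d, r) = (2, 0.4, 0.5).
Q = ces((K, L), A=2.0, d=0.4, r=0.5) * np.exp(rng.normal(0, 0.02, 300))

popt, pcov = curve_fit(ces, (K, L), Q, p0=[1.0, 0.5, 0.3],
                       bounds=([0.1, 0.01, 0.01], [10.0, 0.99, 5.0]))
A_hat, d_hat, r_hat = popt
```

    In applied work one would typically estimate in logs and, following the cited remedies, switch to the Cobb-Douglas limit or a grid search when the optimizer drifts toward r = 0.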

  2. Accountability Accentuates Interindividual—Intergroup Discontinuity by Enforcing Parochialism

    Directory of Open Access Journals (Sweden)

    Tim eWildschut

    2015-11-01

    Interindividual-intergroup discontinuity is the tendency for relations between groups to be more competitive than relations between individuals. We examined whether the discontinuity effect arises in part because group members experience normative pressure to favor the ingroup (parochialism). Building on the notion that accountability enhances normative pressure, we hypothesized that the discontinuity effect would be larger when accountability is present (compared to absent). A prisoner's dilemma game experiment supported this prediction. Specifically, intergroup (compared to interindividual) interaction activated an injunctive ingroup-favoring norm, and accountability enhanced the influence of this norm on competitive behavior.

  3. Adaptive Finite Element Methods for Elliptic Problems with Discontinuous Coefficients

    KAUST Repository

    Bonito, Andrea; DeVore, Ronald A.; Nochetto, Ricardo H.

    2013-01-01

    Elliptic PDEs with discontinuous diffusion coefficients occur in application domains such as diffusions through porous media, electromagnetic field propagation on heterogeneous media, and diffusion processes on rough surfaces. The standard approach to numerically treating such problems using finite element methods is to assume that the discontinuities lie on the boundaries of the cells in the initial triangulation. However, this does not match applications where discontinuities occur on curves, surfaces, or manifolds, and could even be unknown beforehand. One of the obstacles to treating such discontinuity problems is that the usual perturbation theory for elliptic PDEs assumes bounds for the distortion of the coefficients in the L∞ norm and this in turn requires that the discontinuities are matched exactly when the coefficients are approximated. We present a new approach based on distortion of the coefficients in an Lq norm with q < ∞ which therefore does not require the exact matching of the discontinuities. We then use this new distortion theory to formulate new adaptive finite element methods (AFEMs) for such discontinuity problems. We show that such AFEMs are optimal in the sense of distortion versus number of computations, and report insightful numerical results supporting our analysis. © 2013 Society for Industrial and Applied Mathematics.

  4. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Background: Technological developments have increased the feasibility of large-scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies, and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high-dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results: We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions: The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
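
    A Wald-type approximation in the spirit of the p-value trace can be sketched as follows: compute the ridge coefficients and a sandwich-style variance over a grid of shrinkage parameters, then convert each coefficient to a p-value. This is a generic illustration, not the authors' exact test statistic.

```python
# Ridge coefficients and approximate Wald p-values along a shrinkage path.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, p = 200, 5
X = rng.normal(0, 1, (n, p))
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, n)   # two highly correlated predictors
beta_true = np.array([1.0, 0.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(0, 1.0, n)

def ridge_pvalues(X, y, lam):
    G = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))
    beta = G @ X.T @ y                       # ridge estimate
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * G @ X.T @ X @ G           # sandwich variance of beta(lam)
    z = beta / np.sqrt(np.diag(cov))
    return beta, 2.0 * stats.norm.sf(np.abs(z))

lambdas = [0.1, 1.0, 10.0, 100.0]
trace = {lam: ridge_pvalues(X, y, lam) for lam in lambdas}
```

    Plotting -log10 of these p-values against lambda gives a p-value trace: as shrinkage grows, the coefficients contract toward zero while their standard errors shrink too, so the trace shows which predictors stay significant under penalisation.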

  5. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    Science.gov (United States)

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems on its own. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  6. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2017-01-01

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  7. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit

    2017-04-05

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  8. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and may even bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. In contrast, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and may even bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
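
    The coverage argument in the commentary can be checked with a small simulation of the same flavor (this mirrors the general idea, not the authors' exact design): draw skewed, non-normal errors, fit OLS repeatedly, and count how often the nominal 95% confidence interval for the slope covers the truth.

```python
# Coverage of the 95% slope CI under skewed (exponential) errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
true_slope, n, n_sims = 0.5, 500, 1000
covered = 0
for _ in range(n_sims):
    x = rng.normal(0, 1, n)
    err = rng.exponential(1.0, n) - 1.0      # skewed, mean-zero errors
    y = 1.0 + true_slope * x + err
    res = stats.linregress(x, y)
    half = 1.96 * res.stderr                 # nominal 95% half-width
    covered += (res.slope - half) <= true_slope <= (res.slope + half)
coverage = covered / n_sims
```

    With n = 500 per fit, coverage lands close to the nominal 95% despite the strongly skewed errors, illustrating the commentary's point that normality of the errors matters little for inference at large sample sizes.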

  9. 39 CFR 241.3 - Discontinuance of post offices.

    Science.gov (United States)

    2010-07-01

    ... CLASSIFICATION, AND DISCONTINUANCE § 241.3 Discontinuance of post offices. (a) Introduction—(1) Coverage. This... justify in sufficient detail to Postal Service management and affected customers the proposed service... inspection during normal business hours at each post office where the Final Determination is posted for 30...

  10. Development and Application of Watershed Regressions for Pesticides (WARP) for Estimating Atrazine Concentration Distributions in Streams

    Science.gov (United States)

    Larson, Steven J.; Crawford, Charles G.; Gilliom, Robert J.

    2004-01-01

    Regression models were developed for predicting atrazine concentration distributions in rivers and streams, using the Watershed Regressions for Pesticides (WARP) methodology. Separate regression equations were derived for each of nine percentiles of the annual distribution of atrazine concentrations and for the annual time-weighted mean atrazine concentration. In addition, seasonal models were developed for two specific periods of the year--the high season, when the highest atrazine concentrations are expected in streams, and the low season, when concentrations are expected to be low or undetectable. Various nationally available watershed parameters were used as explanatory variables, including atrazine use intensity, soil characteristics, hydrologic parameters, climate and weather variables, land use, and agricultural management practices. Concentration data from 112 river and stream stations sampled as part of the U.S. Geological Survey's National Water-Quality Assessment and National Stream Quality Accounting Network Programs were used for computing the concentration percentiles and mean concentrations used as the response variables in regression models. Tobit regression methods, using maximum likelihood estimation, were used for developing the models because some of the concentration values used for the response variables were censored (reported as less than a detection threshold). Data from 26 stations not used for model development were used for model validation. The annual models accounted for 62 to 77 percent of the variability in concentrations among the 112 model development stations. Atrazine use intensity (the amount of atrazine used in the watershed divided by watershed area) was the most important explanatory variable in all models, but additional watershed parameters significantly increased the amount of variability explained by the models. 
Predicted concentrations from all 10 models were within a factor of 10 of the observed concentrations at most
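
    The censored-response likelihood behind Tobit regression can be sketched as follows; this is a generic maximum-likelihood Tobit fit on simulated data with an assumed detection limit, not the WARP models or their data:

```python
# Generic Tobit MLE for a response left-censored at a detection limit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
y_star = 0.5 + 1.5 * x + rng.normal(size=n)   # latent "true" values
limit = 0.0                                   # detection threshold
censored = y_star < limit
y = np.where(censored, limit, y_star)         # reported observations

def neg_loglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keep sigma positive
    mu = b0 + b1 * x
    # uncensored points contribute the normal density,
    # censored points the probability mass below the limit
    ll = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll += norm.logcdf((limit - mu[censored]) / sigma).sum()
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, _ = res.x
```

    Unlike OLS on the censored values, this likelihood uses the "less than" observations properly instead of treating them as exact measurements.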

  11. Estimating Engineering and Manufacturing Development Cost Risk Using Logistic and Multiple Regression

    National Research Council Canada - National Science Library

    Bielecki, John

    2003-01-01

    .... Previous research has demonstrated that using a two-step logistic and multiple regression methodology to predict cost growth produces desirable results versus traditional single-step regression...

  12. Bias in regression coefficient estimates upon different treatments of ...

    African Journals Online (AJOL)

    MS and PW consistently overestimated the population parameter. EM and RI, on the other hand, tended to consistently underestimate the population parameter under a non-monotonic pattern. Keywords: Missing data, bias, regression, percent missing, non-normality, missing pattern. East African Journal of Statistics Vol.

  13. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    Science.gov (United States)

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. 
The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
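
    The two rules quoted above (slope as the median of all pairwise slopes; intercept chosen so the line runs through the medians of the data) can be sketched in a few lines; this is a minimal single-segment illustration, not the KTRLine program itself:

```python
# Minimal Kendall-Theil (Theil-Sen) robust line, single-segment case.
import itertools
import numpy as np

def kendall_theil_line(x, y):
    # slope: median of all possible pairwise slopes between points
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = float(np.median(slopes))
    # intercept: line passes through (median of x, median of y)
    intercept = float(np.median(y) - slope * np.median(x))
    return slope, intercept

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 * x + 1.0
y[5] = 40.0                       # one gross outlier
slope, intercept = kendall_theil_line(x, y)
print(slope, intercept)
```

    Because the slope is a median, the single outlier leaves the fitted line at y = 2x + 1, illustrating the resistance to outliers noted in the abstract.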

  14. Solutions of the Wheeler-Feynman equations with discontinuous velocities.

    Science.gov (United States)

    de Souza, Daniel Câmara; De Luca, Jayme

    2015-01-01

    We generalize Wheeler-Feynman electrodynamics with a variational boundary value problem for continuous boundary segments that might include velocity discontinuity points. Critical-point orbits must satisfy the Euler-Lagrange equations of the action functional at most points, which are neutral differential delay equations (the Wheeler-Feynman equations of motion). At velocity discontinuity points, critical-point orbits must satisfy the Weierstrass-Erdmann continuity conditions for the partial momenta and the partial energies. We study a special setup having the shortest time-separation between the (infinite-dimensional) boundary segments, for which case the critical-point orbit can be found using a two-point boundary problem for an ordinary differential equation. For this simplest setup, we prove that orbits can have discontinuous velocities. We construct a numerical method to solve the Wheeler-Feynman equations together with the Weierstrass-Erdmann conditions and calculate some numerical orbits with discontinuous velocities. We also prove that the variational boundary value problem has a unique solution depending continuously on boundary data, if the continuous boundary segments have velocity discontinuities along a reduced local space.

  15. The optimal time of discontinuing methimazole before radioiodine therapy

    International Nuclear Information System (INIS)

    Moosavi, Z.; Zakavi, R.

    2001-01-01

    Hyperthyroidism is a common disease and one of the best methods for its treatment is radioiodine therapy. Treatment with antithyroid drugs brings patients to euthyroidism before radioiodine therapy. Antithyroid drugs should be discontinued before radioiodine therapy to increase thyroid uptake. The purpose of this study was to determine the optimal time of methimazole discontinuation. One hundred eighty-four patients who were referred for radioiodine therapy were classified in 3 groups according to the duration of methimazole discontinuation before thyroid uptake (RAIU) measurement. Groups 1, 2 and 3 comprised patients who discontinued methimazole 48-72 hours, 72-120 hours and more than 120 hours before RAIU measurement, respectively. Mean thyroid uptake in groups 1, 2 and 3 was (64±151.1%), (60.1±14.1%) and (59.3±12.8%), respectively. No significant difference was noted in thyroid uptake between these groups (F=1.83, P<0.16). This study shows that 48-72 hours of methimazole discontinuation before radioiodine therapy is enough and longer abstention is not associated with higher uptake.

  16. Updated Design Standards and Guidance from the What Works Clearinghouse: Regression Discontinuity Designs and Cluster Designs

    Science.gov (United States)

    Cole, Russell; Deke, John; Seftor, Neil

    2016-01-01

    The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…

  17. Soliton shock wave fronts and self-similar discontinuities in dispersion hydrodynamics

    International Nuclear Information System (INIS)

    Gurevich, A.V.; Meshcherkin, A.P.

    1987-01-01

    Nonlinear flows in nondissipative dispersion hydrodynamics are examined. It is demonstrated that in order to describe such flows it is necessary to incorporate a new concept: a special discontinuity called a ''self-similar'' discontinuity, consisting of a nondissipative shock wave and a powerful slow-wave discontinuity in regular hydrodynamics. The ''self-similar'' discontinuity expands linearly over time. It is demonstrated that this concept may be introduced in a solution to the Euler equations. The boundary conditions of the ''self-similar'' discontinuity that allow closure of the Euler equations for dispersion hydrodynamics are formulated, i.e., those that replace the shock adiabatic curve of standard dissipative hydrodynamics. The structure of the soliton front and of the trailing edge of the shock wave is investigated. A classification and complete solution are given to the problem of the decay of random initial discontinuities in the hydrodynamics of highly nonisothermic plasma. A solution is derived to the problem of the decay of initial discontinuities in the hydrodynamics of magnetized plasma. It is demonstrated that in this plasma, a feature of current density arises at the point of soliton inversion

  18. Top Incomes, Heavy Tails, and Rank-Size Regressions

    Directory of Open Access Journals (Sweden)

    Christian Schluter

    2018-03-01

    In economics, rank-size regressions provide popular estimators of tail exponents of heavy-tailed distributions. We discuss the properties of this approach when the tail of the distribution is regularly varying rather than strictly Pareto. The estimator then over-estimates the true value in the leading parametric income models (so the upper income tail is less heavy than estimated), which leads to test size distortions and undermines inference. For practical work, we propose a sensitivity analysis based on regression diagnostics in order to assess the likely impact of the distortion. The methods are illustrated using data on top incomes in the UK.
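
    A rank-size regression can be sketched as follows on a strictly Pareto sample (the benign case, where no over-estimation is expected); the rank-minus-1/2 shift is the common Gabaix-Ibragimov small-sample correction and, like the simulated sample itself, is an assumption of this sketch rather than a detail of the paper:

```python
# Rank-size regression: the slope of log(rank) on log(size) estimates -alpha.
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.0
u = rng.random(100_000)
sample = u ** (-1.0 / alpha)               # strict Pareto(alpha), x_min = 1

k = 1000                                   # use only the k largest observations
top = np.sort(sample)[-k:][::-1]           # descending: rank 1 is the largest
ranks = np.arange(1, k + 1) - 0.5          # rank minus 1/2 reduces bias
slope, _ = np.polyfit(np.log(top), np.log(ranks), 1)
alpha_hat = -slope
```

    On a regularly-varying (non-Pareto) tail, the paper's point is that this same estimator would systematically overshoot alpha.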

  19. Minimizers with discontinuous velocities for the electromagnetic variational method

    International Nuclear Information System (INIS)

    De Luca, Jayme

    2010-01-01

    The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Lienard-Wierchert formulas to define generalized electromagnetic fields almost everywhere (but on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives stressing the differences to the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.

  20. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan

    2013-05-25

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.
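
    Independently of the case-control aspect, a penalized regression spline fit itself can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients; the basis, knot grid, and smoothing parameter below are illustrative assumptions, not the authors' method:

```python
# Penalized regression spline: truncated-line basis + ridge penalty on knots.
import numpy as np

rng = np.random.default_rng(9)
n = 300
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)

knots = np.linspace(0.0, 1.0, 22)[1:-1]               # 20 interior knots
X = np.column_stack([np.ones(n), x] +
                    [np.clip(x - k, 0.0, None) for k in knots])
lam = 1.0                                             # smoothing parameter
D = np.diag([0.0, 0.0] + [1.0] * len(knots))          # penalize knot terms only
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)    # ridge-type solve
fitted = X @ beta
```

    The penalty shrinks the slope changes at the knots, trading a little bias for a much smoother, more stable fit than an unpenalized 22-column regression.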

  1. A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data

    KAUST Repository

    Gazioglu, Suzan; Wei, Jiawei; Jennings, Elizabeth M.; Carroll, Raymond J.

    2013-01-01

    Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.

  2. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution for the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties using the maximum likelihood estimator for a non-polynomial basis, such as B-splines, to replace the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimizing the generalized cross-validation function.
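
    The polynomial-basis baseline that the study modifies can be sketched as a locally weighted least-squares fit (local linear, Gaussian kernel); the function and settings below are illustrative assumptions:

```python
# Local linear kernel regression at a single evaluation point x0.
import numpy as np

def local_linear(x0, x, y, h):
    # Gaussian kernel weights centered at x0 with bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    # weighted LS of y on (1, x - x0); the intercept estimates m(x0)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, np.pi, 300))
y = np.sin(x) + rng.normal(scale=0.1, size=300)
m_hat = local_linear(np.pi / 2, x, y, h=0.2)   # true value sin(pi/2) = 1
```

    Replacing the columns of X with a B-spline basis, as the study proposes, changes only the design matrix inside the weighted least-squares step.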

  3. Discontinuation risk comparison among 'real-world' newly anticoagulated atrial fibrillation patients

    DEFF Research Database (Denmark)

    Lip, Gregory Y H; Pan, Xianying; Kamble, Shital

    2018-01-01

    Discontinuation of oral anticoagulants may expose non-valvular atrial fibrillation (NVAF) patients to an increased risk of stroke. This study describes real-world discontinuation rates and compares the risk of drug discontinuation among NVAF patients initiating apixaban, warfarin, dabigatran,...

  4. Seismic wave propagation in fractured media: A discontinuous Galerkin approach

    KAUST Repository

    De Basabe, Jonás D.

    2011-01-01

    We formulate and implement a discontinuous Galerkin method for elastic wave propagation that allows for discontinuities in the displacement field to simulate fractures or faults using the linear-slip model. We show numerical results using a 2D model with one linear-slip discontinuity and different frequencies. The results show good agreement with analytic solutions. © 2011 Society of Exploration Geophysicists.

  5. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  6. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking, MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
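
    A minimal MIDAS regression can be sketched as nonlinear least squares with a parsimonious lag-weight function; the exponential Almon weights used below are one common choice, and the data are simulated assumptions rather than anything from the paper:

```python
# MIDAS sketch: low-frequency y regressed on a parameterized distributed
# lag of a high-frequency regressor x (m observations per period).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
T, m = 200, 22
x = rng.normal(size=(T, m))               # high-frequency data, recent lag first

def almon_weights(t1, t2, m):
    j = np.arange(1, m + 1)
    w = np.exp(np.clip(t1 * j + t2 * j ** 2, -50.0, 50.0))
    return w / w.sum()                    # weights sum to one; scale lives in b1

true_w = almon_weights(0.1, -0.02, m)
y = 1.0 + 2.0 * (x @ true_w) + rng.normal(scale=0.3, size=T)

def sse(params):
    b0, b1, t1, t2 = params
    return np.sum((y - b0 - b1 * (x @ almon_weights(t1, t2, m))) ** 2)

res = minimize(sse, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10, "xatol": 1e-10})
b0_hat, b1_hat = res.x[:2]
```

    Only four parameters are estimated regardless of m, which is exactly the parsimony that distinguishes MIDAS from an unrestricted distributed lag with 22 free coefficients.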

  7. Derivative discontinuity with localized Hartree-Fock potential

    Energy Technology Data Exchange (ETDEWEB)

    Nazarov, V. U. [Research Center for Applied Sciences, Academia Sinica, Taipei 11529, Taiwan (China); Vignale, G. [Department of Physics, University of Missouri-Columbia, Columbia, Missouri 65211 (United States)

    2015-08-14

    The localized Hartree-Fock potential has proven to be a computationally efficient alternative to the optimized effective potential, preserving the numerical accuracy of the latter and respecting the exact properties of being self-interaction free and having the correct −1/r asymptotics. In this paper we extend the localized Hartree-Fock potential to fractional particle numbers and observe that it yields derivative discontinuities in the energy as required by the exact theory. The discontinuities are numerically close to those of the computationally more demanding Hartree-Fock method. Our potential enjoys a “direct-energy” property, whereby the energy of the system is given by the sum of the single-particle eigenvalues multiplied by the corresponding occupation numbers. The discontinuities c↑ and c↓ of the spin components of the potential at integer particle numbers N↑ and N↓ satisfy the condition c↑N↑ + c↓N↓ = 0. Thus, joining the family of effective potentials which support a derivative discontinuity, but being considerably easier to implement, the localized Hartree-Fock potential becomes a powerful tool in the broad area of applications in which the fundamental gap is an issue.

  8. Convergent piecewise affine systems : analysis and design Part II: discontinuous case

    NARCIS (Netherlands)

    Pavlov, A.V.; Pogromski, A.Y.; Wouw, van de N.; Nijmeijer, H.; Rooda, J.E.

    2005-01-01

    In this paper convergence properties of piecewise affine (PWA) systems with discontinuous right-hand sides are studied. It is shown that for discontinuous PWA systems existence of a common quadratic Lyapunov function is not sufficient for convergence. For discontinuous bimodal PWA systems necessary

  9. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  10. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r) (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.

  11. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.

  12. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    When data are affected by multicollinearity in the linear regression framework, concurvity will be present in fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. As collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even if the backfitting algorithm will always converge to a solution, in case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing a type I error. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.
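
    For the linear-model special case, the standard multicollinearity screen is the variance inflation factor, of which concurvity measures are the nonlinear analogue; the VIF sketch below is offered only as that baseline diagnostic (it is not one of the concurvity detection approaches compared in the paper):

```python
# Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
# from regressing column j on the remaining columns (with an intercept).
import numpy as np

def vif(X):
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(5)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)
v = vif(np.column_stack([x1, x2, x3]))      # v[0], v[1] large; v[2] near 1
```

    Concurvity diagnostics generalize this idea by asking how well each smooth term can be reproduced by the other smooth terms, rather than by linear combinations of columns.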

  13. Satellite rainfall retrieval by logistic regression

    Science.gov (United States)

    Chiu, Long S.

    1986-01-01

    The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the thresholds, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement which is deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rainfield models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration whose solutions are lognormally distributed in some asymptotic limits has also been developed.
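
    The core of such a model, a maximum-likelihood logistic fit for the probability that a pixel's rainrate exceeds a threshold given covariates like fractional rain area, can be sketched as follows (all covariates, coefficients, and the seed are simulated assumptions, not GATE data):

```python
# Logistic regression for P(rainrate > threshold | covariates), fit by MLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2000
frac_area = rng.uniform(0.0, 1.0, n)        # fractional rain area covariate
radiance = rng.normal(size=n)               # microwave-derived covariate
logit = -1.0 + 3.0 * frac_area + 0.5 * radiance
p = 1.0 / (1.0 + np.exp(-logit))
above = rng.random(n) < p                   # 1 if pixel exceeds the threshold

X = np.column_stack([np.ones(n), frac_area, radiance])

def neg_loglik(beta):
    z = X @ beta
    # Bernoulli log-likelihood with logit link, written stably:
    # log(1 + e^z) = logaddexp(0, z)
    return np.sum(np.logaddexp(0.0, z) - above * z)

beta_hat = minimize(neg_loglik, np.zeros(3), method="BFGS").x
```

    Repeating the fit over a grid of thresholds yields the exceedance probabilities from which the rainrate histogram described above is assembled.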

  14. Rock discontinuity surface roughness variation with scale

    Science.gov (United States)

    Bitenc, Maja; Kieffer, D. Scott; Khoshelham, Kourosh

    2017-04-01

    Rock discontinuity surface roughness refers to local departures of the discontinuity surface from planarity and is an important factor influencing shear resistance. In practice, the Joint Roughness Coefficient (JRC) roughness parameter is commonly relied upon and input to a shear strength criterion such as that developed by Barton and Choubey [1977]. The estimation of roughness by JRC is hindered firstly by the subjective nature of visually comparing the joint profile to the ten standard profiles. Secondly, when correlating the standard JRC values and other objective measures of roughness, the roughness idealization is limited to a 2D profile of 10 cm length. With the advance of measuring technologies that provide accurate and high-resolution 3D data of surface topography on different scales, new 3D roughness parameters have been developed. A desirable parameter is one that describes rock surface geometry as well as the direction and scale dependency of roughness. In this research a 3D roughness parameter developed by Grasselli [2001] and adapted by Tatone and Grasselli [2009] is adopted. It characterizes surface topography as the cumulative distribution of local apparent inclination of asperities with respect to the shear strength (analysis) direction. Thus, the 3D roughness parameter describes the roughness amplitude and anisotropy (direction dependency), but does not capture the scale properties. In different studies the roughness scale-dependency has been attributed to data resolution or size of the surface joint (see a summary of researches in [Tatone and Grasselli, 2012]). Clearly, lower resolution results in lower roughness. Investigations of the surface size effect, on the other hand, have produced conflicting results. While some studies have shown a decrease in roughness with increasing discontinuity size (negative scale effect), others have shown the existence of positive scale effects, or both positive and negative scale effects. We

  15. The APT model as reduced-rank regression

    NARCIS (Netherlands)

    Bekker, P.A.; Dobbelstein, P.; Wansbeek, T.J.

    Integrating the two steps of an arbitrage pricing theory (APT) model leads to a reduced-rank regression (RRR) model. So the results on RRR can be used to estimate APT models, making estimation very simple. We give a succinct derivation of estimation of RRR, derive the asymptotic variance of RRR

  16. Hierarchy of sedimentary discontinuity surfaces and condensed beds from the middle Paleozoic of eastern North America: Implications for cratonic sequence stratigraphy

    Science.gov (United States)

    McLaughlin, P.I.; Brett, Carlton E.; Wilson, M.A.

    2008-01-01

    Sedimentological analyses of middle Paleozoic epeiric sea successions in North America suggest a hierarchy of discontinuity surfaces and condensed beds of increasing complexity. Simple firmgrounds and hardgrounds, which are comparatively ephemeral features, form the base of the hierarchy. Composite hardgrounds, reworked concretions, authigenic mineral crusts and monomictic intraformational conglomerates indicate more complex histories. Polymictic intraformational conglomerates, ironstones and phosphorites form the most complex discontinuity surfaces and condensed beds. Complexity of discontinuities is closely linked to depositional environments, duration of sediment starvation, and degree of reworking, which in turn show a relationship to stratigraphic cyclicity. A model of cratonic sequence stratigraphy is generated by combining data on the complexity and lateral distribution of discontinuities in the context of facies successions. Lowstand, early transgressive and late transgressive systems tracts are representative of sea-level rise. Early and late transgressive systems tracts are separated by the maximum starvation surface (typically a polymictic intraformational conglomerate or condensed phosphorite), deposited during the peak rate of sea-level rise. Conversely the maximum flooding surface, representing the highest stand of sea level, is marked by little to no break in sedimentation. The highstand and falling stage systems tracts are deposited during relative sea-level fall. They are separated by the forced-regression surface, a thin discontinuity surface or condensed bed developed during the most rapid rate of sea-level fall. The lowest stand of sea level is marked by the sequence boundary. In subaerially exposed areas it is occasionally modified as a rockground or composite hardground.

  17. Stacking and discontinuous buffers in capillary zone electrophoresis.

    Science.gov (United States)

    Shihabi, Z K

    2000-08-01

    Discontinuous buffers for capillary zone electrophoresis (CZE) can be used under less rigid conditions compared to those for isotachophoresis for stacking. They can be prepared simply by modifying the sample itself, either by addition of small inorganic ions, low-conductivity diluents, or both, and also by adjusting its pH, while injecting a large volume onto the capillary. Zwitterionic and organic-based buffers such as triethanolamine and tris(hydroxymethyl)aminomethane (Tris) are well suited for stacking due to their low conductivity, provided the buffer is discontinuous, as demonstrated here. A simple mechanism based on discontinuous buffers is described to explain many of the observed stacking types in CZE, pointing out the many similarities to transient isotachophoresis.

  18. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  19. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    Science.gov (United States)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the new proposed model outperforms ML in cases of small datasets.
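
    As a minimal illustration of the ML baseline that the fuzzy approach is compared against, the following pure-Python sketch fits a multinomial (softmax) logistic regression by gradient ascent on the log-likelihood; the data, learning rate and iteration count are hypothetical and not taken from the study.

```python
import math
import random

random.seed(0)

# Hypothetical, well-separated synthetic data: 2 features, 3 classes.
X, y = [], []
for k in range(3):
    for _ in range(30):
        X.append([random.gauss(k, 0.5), random.gauss(-k, 0.5)])
        y.append(k)

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

# One (bias, w1, w2) row per class (over-parameterised softmax form).
W = [[0.0] * 3 for _ in range(3)]

def predict_probs(x):
    return softmax([W[k][0] + W[k][1] * x[0] + W[k][2] * x[1] for k in range(3)])

# Maximum likelihood fit by plain gradient ascent on the log-likelihood.
lr = 0.5
for _ in range(500):
    grad = [[0.0] * 3 for _ in range(3)]
    for x, t in zip(X, y):
        p = predict_probs(x)
        for k in range(3):
            err = (1.0 if k == t else 0.0) - p[k]  # d(log-lik)/d(logit_k)
            grad[k][0] += err
            grad[k][1] += err * x[0]
            grad[k][2] += err * x[1]
    for k in range(3):
        for j in range(3):
            W[k][j] += lr * grad[k][j] / len(X)

accuracy = sum(
    max(range(3), key=lambda k: predict_probs(x)[k]) == t
    for x, t in zip(X, y)
) / len(X)
```

    On a small dataset like this one, the ML fit is easy; the failure modes the abstract refers to (separation, unstable estimates) appear when classes are sparse or perfectly separable.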

  20. Predictors of adalimumab drug survival in psoriasis differ by reason for discontinuation: long-term results from the Bio-CAPTURE registry.

    Science.gov (United States)

    van den Reek, J M P A; Tummers, M; Zweegers, J; Seyger, M M B; van Lümig, P P M; Driessen, R J B; van de Kerkhof, P C M; Kievit, W; de Jong, E M G J

    2015-03-01

    Drug survival is an indicator of treatment success; insight into predictors associated with drug survival is important. The aims were (I) to analyse the long-term drug survival for adalimumab in patients with psoriasis treated in daily practice and (II) to identify predictors of prolonged drug survival for adalimumab, split for different reasons of discontinuation. Data were extracted from a prospective psoriasis cohort and analysed using Kaplan-Meier survival curves split for reasons of discontinuation. Baseline predictors associated with longer drug survival were identified using multivariate Cox-regression analysis. One hundred and sixteen patients were included with a total of 208 patient-years. Overall drug survival was 76% after 1 year and 52% after 4.5 years. In patients who stopped due to ineffectiveness, longer drug survival was associated with the absence of specific comorbidities (P = 0.03). In patients who stopped due to side-effects, longer drug survival was associated with male gender (P = 0.02). Predictors of adalimumab drug survival in psoriasis differ by reason for discontinuation. Strong, specific predictors can lead to patient-tailored treatment. © 2014 European Academy of Dermatology and Venereology.

  1. Waiting-time distributions of magnetic discontinuities: Clustering or Poisson process?

    International Nuclear Information System (INIS)

    Greco, A.; Matthaeus, W. H.; Servidio, S.; Dmitruk, P.

    2009-01-01

    Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible clustering of these discontinuities is studied in detail. We perform a local Poisson analysis in order to establish whether these intermittent events are randomly distributed or not. Possible implications for the nature of solar wind discontinuities are discussed.
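
    The local Poisson test described above hinges on a simple property: for a Poisson (memoryless) process, waiting times are exponentially distributed, so their coefficient of variation (CV) is close to 1, while clustering inflates it. A sketch on assumed synthetic processes (not the ACE data):

```python
import random
import statistics

random.seed(42)

def waiting_time_cv(times):
    """CV of inter-event waiting times: ~1 for a Poisson process,
    above 1 for clustered (bursty) events."""
    waits = [b - a for a, b in zip(times, times[1:])]
    return statistics.pstdev(waits) / statistics.mean(waits)

# Poisson process: exponential waiting times with unit rate.
t, poisson_times = 0.0, []
for _ in range(5000):
    t += random.expovariate(1.0)
    poisson_times.append(t)

# Clustered process: bursts of events separated by long gaps.
t, clustered_times = 0.0, []
for _ in range(500):
    t += random.expovariate(0.1)       # long gap between bursts
    for _ in range(10):
        t += random.expovariate(50.0)  # short intra-burst waits
        clustered_times.append(t)

cv_poisson = waiting_time_cv(poisson_times)
cv_clustered = waiting_time_cv(clustered_times)
```

    The CV (or, equivalently, the shape of the waiting-time distribution) is the kind of statistic that separates random occurrence from clusterization in such analyses.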

  2. Preparation and Mechanical Properties of Aligned Discontinuous Carbon Fiber Composites

    OpenAIRE

    DENG Hua; GAO Junpeng; BAO Jianwen

    2018-01-01

    Aligned discontinuous carbon fiber composites were fabricated from aligned discontinuous carbon fiber prepreg, which was prepared from continuous carbon fiber prepreg via mechanical high-frequency cutting. The internal quality and mechanical properties were characterized and compared with continuous carbon fiber composites. The results show that the internal quality of the aligned discontinuous carbon fiber composites is fine and the mechanical properties have high retention rate after the fi...

  3. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and for modelling latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.
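
    The first of the two compared approaches, ordinary linear regression of estimated person locations on a group indicator, reduces for two groups to a difference of group means. A small sketch with hypothetical person-location estimates (logits), not data from the paper:

```python
import statistics

# Hypothetical estimated person locations (logits) for two groups.
group_a = [-0.8, -0.2, 0.1, 0.4, 0.9, 1.3]
group_b = [0.3, 0.7, 1.0, 1.4, 1.8, 2.1]

x = [0] * len(group_a) + [1] * len(group_b)  # group indicator
y = group_a + group_b

# Simple least-squares slope and intercept.
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
```

    With a 0/1 regressor the slope is exactly the between-group difference in mean location, which is what the latent-regression alternative estimates without the intermediate step of plugging in point estimates.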

  4. Regression model development and computational procedures to support estimation of real-time concentrations and loads of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-9

    Science.gov (United States)

    Lee, Michael T.; Asquith, William H.; Oden, Timothy D.

    2012-01-01

    In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged
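
    The reported diagnostics (adjusted R-squared, residual standard error, prediction intervals) can be reproduced for a single-surrogate model in a few lines; the turbidity/sediment values below are illustrative stand-ins, not the USGS data:

```python
import math
import statistics

# Hypothetical paired observations: turbidity (surrogate) vs. suspended
# sediment concentration; values are illustrative only.
turbidity = [12, 25, 40, 55, 70, 85, 100, 120]
sediment = [20, 38, 65, 80, 110, 128, 150, 185]

n = len(turbidity)
mx, my = statistics.mean(turbidity), statistics.mean(sediment)
slope = sum((x - mx) * (y - my) for x, y in zip(turbidity, sediment)) / \
        sum((x - mx) ** 2 for x in turbidity)
intercept = my - slope * mx

residuals = [y - (intercept + slope * x) for x, y in zip(turbidity, sediment)]
ss_res = sum(r * r for r in residuals)
ss_tot = sum((y - my) ** 2 for y in sediment)

r_squared = 1 - ss_res / ss_tot
p = 1  # one explanatory variable
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - p - 1)
residual_se = math.sqrt(ss_res / (n - p - 1))

# Rough 95% prediction interval at a new turbidity value
# (t-quantile ~2.45 for 6 degrees of freedom).
x_new = 60
y_hat = intercept + slope * x_new
half_width = 2.45 * residual_se * math.sqrt(
    1 + 1 / n + (x_new - mx) ** 2 / sum((x - mx) ** 2 for x in turbidity))
```

    The prediction-interval half-width is what turns such a surrogate model into a real-time estimate with quantified uncertainty, as the abstract describes.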

  5. Challenges Associated with Estimating Utility in Wet Age-Related Macular Degeneration: A Novel Regression Analysis to Capture the Bilateral Nature of the Disease.

    Science.gov (United States)

    Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth

    2017-10-01

    The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant predictors of utility. This regression analysis provides a possible source of utility values to allow future economic models to capture the quality-of-life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.

  6. An hp-local Discontinuous Galerkin Method for Parabolic Integro-Differential Equations

    KAUST Repository

    Pani, Amiya K.

    2010-06-06

    In this article, a priori error bounds are derived for an hp-local discontinuous Galerkin (LDG) approximation to a parabolic integro-differential equation. It is shown that error estimates in the L2-norm of the gradient as well as of the potential are optimal in the discretizing parameter h and suboptimal in the degree of polynomial p. Due to the presence of the integral term, an introduction of an expanded mixed type Ritz-Volterra projection helps us to achieve optimal estimates. Further, it is observed that a negative norm estimate of the gradient plays a crucial role in our convergence analysis. As in the elliptic case, similar results on order of convergence are established for the semidiscrete method after suitably modifying the numerical fluxes. The optimality of these theoretical results is tested in a series of numerical experiments on two dimensional domains. © 2010 Springer Science+Business Media, LLC.

  7. An hp-local Discontinuous Galerkin Method for Parabolic Integro-Differential Equations

    KAUST Repository

    Pani, Amiya K.; Yadav, Sangita

    2010-01-01

    In this article, a priori error bounds are derived for an hp-local discontinuous Galerkin (LDG) approximation to a parabolic integro-differential equation. It is shown that error estimates in the L2-norm of the gradient as well as of the potential are optimal in the discretizing parameter h and suboptimal in the degree of polynomial p. Due to the presence of the integral term, an introduction of an expanded mixed type Ritz-Volterra projection helps us to achieve optimal estimates. Further, it is observed that a negative norm estimate of the gradient plays a crucial role in our convergence analysis. As in the elliptic case, similar results on order of convergence are established for the semidiscrete method after suitably modifying the numerical fluxes. The optimality of these theoretical results is tested in a series of numerical experiments on two dimensional domains. © 2010 Springer Science+Business Media, LLC.

  8. A comparison of discontinuation rates of tofacitinib and biologic disease-modifying anti-rheumatic drugs in rheumatoid arthritis: a systematic review and Bayesian network meta-analysis.

    Science.gov (United States)

    Park, Sun-Kyeong; Lee, Min-Young; Jang, Eun-Jin; Kim, Hye-Lin; Ha, Dong-Mun; Lee, Eui-Kyung

    2017-01-01

    The purpose of this study was to compare the discontinuation rates of tofacitinib and biologics (tumour necrosis factor inhibitors (TNFi), abatacept, rituximab, and tocilizumab) in rheumatoid arthritis (RA) patients considering inadequate responses (IRs) to previous treatment(s). Randomised controlled trials of tofacitinib and biologics - reporting at least one total discontinuation, discontinuation due to lack of efficacy (LOE), and discontinuation due to adverse events (AEs) - were identified through systematic review. The analyses were conducted for patients with IRs to conventional synthetic disease-modifying anti-rheumatic drugs (cDMARDs) and for patients with biologics-IR, separately. Bayesian network meta-analysis was used to estimate the rate ratio (RR) of a biologic relative to tofacitinib with a 95% credible interval (CrI), and the probability of the RR being below one. Discontinuation rates were comparable between tofacitinib and biologics in the cDMARDs-IR group. In the biologics-IR group, however, TNFi (RR 0.17, 95% CrI 0.01-3.61) showed a lower total discontinuation rate than tofacitinib did. Despite the difference, discontinuation cases owing to LOE and AEs revealed that tofacitinib was comparable to the biologics. The comparability of discontinuation rate between tofacitinib and biologics differed based on previous treatments and discontinuation reasons: LOE, AEs, and total (due to other reasons). Therefore, those factors need to be considered to decide the optimal treatment strategy.

  9. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval
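
    A toy version of the idea, with constants fitted by linear regression while simulated annealing searches the expression space, can be sketched as follows. The basis set, cooling schedule and target function are hypothetical simplifications, and plain (non-Pareto) annealing is used rather than the paper's Pareto variant:

```python
import math
import random

random.seed(1)

# Target data generated from y = x^2 (hypothetical).
xs = [i / 10 for i in range(-20, 21)]
ys = [x * x for x in xs]

# Candidate basis functions; the model is y ~ a * f(x) + b, with (a, b)
# fitted by ordinary linear regression (the constant-estimation step).
basis = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "x^3": lambda x: x ** 3,
    "sin(x)": math.sin,
    "exp(x)": math.exp,
}

def fit_error(name):
    f = basis[name]
    fx = [f(x) for x in xs]
    mfx, my = sum(fx) / len(fx), sum(ys) / len(ys)
    sxx = sum((v - mfx) ** 2 for v in fx)
    a = sum((v - mfx) * (y - my) for v, y in zip(fx, ys)) / sxx if sxx else 0.0
    b = my - a * mfx
    return sum((y - (a * f(x) + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Simulated annealing over the discrete expression space.
current = random.choice(list(basis))
current_err = fit_error(current)
temperature = 1.0
for _ in range(200):
    candidate = random.choice(list(basis))
    cand_err = fit_error(candidate)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if cand_err < current_err or \
            random.random() < math.exp((current_err - cand_err) / temperature):
        current, current_err = candidate, cand_err
    temperature *= 0.97  # geometric cooling
```

    Separating structure search (annealing) from constant fitting (linear regression) is the design choice the abstract highlights; a Pareto version would additionally track an error-versus-complexity front rather than a single score.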

  10. Using Structured Additive Regression Models to Estimate Risk Factors of Malaria: Analysis of 2010 Malawi Malaria Indicator Survey Data

    Science.gov (United States)

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    Background After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. Methods We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child and household level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian resulting in an under five malaria risk map for Malawi. Results Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. Conclusions The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors, both at the individual, household and community level and that risk factors are still relevant many years after extensive implementation of RBM activities. PMID:24991915

  11. Receiver Function Imaging of Mantle Transition Zone Discontinuities Beneath Alaska

    Science.gov (United States)

    Dahm, Haider Hassan Faraj

    Subduction of tectonic plates is one of the most important tectonic processes, yet many aspects of subduction zone geodynamics remain poorly understood, such as the depth extent of the subducted slab and its geometry. The Alaska subduction zone, which is associated with the subduction of the Pacific Plate beneath the North America plate, has a complex tectonic setting, records a series of subduction episodes, and represents an excellent target for studying such plate tectonic processes. Previous seismological studies in Alaska have proposed different depth estimates and geometries for the subducted slab. The mantle transition zone (MTZ) discontinuities at 410 km and 660 km depth provide independent constraints on the depth extent of subducted slabs. We conducted a receiver function study to map the topography of the 410 km and 660 km discontinuities beneath Alaska and its adjacent areas, taking advantage of teleseismic data from the new USArray deployment in Alaska and northwestern Canada. By stacking over 75,000 high-quality radial receiver functions recorded in Alaska over a recording period of more than 40 years, the topographies of the 410 km and 660 km discontinuities are mapped. The depths of both the d410 and d660 show systematic spatial variations; the mean depths of the d410 and d660 are each within 6 km of the global average. The mean MTZ thickness of the entire study area is within 2 km of the global average of 250 km, suggesting normal MTZ conditions on average. Central and south-central Alaska are characterized by a larger than normal MTZ thickness, suggesting that the subducting Pacific slab interacts thermally with the MTZ. This study shows that lateral upper mantle velocity variations contribute the bulk of the observed apparent undulations of the MTZ discontinuities.

  12. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    Science.gov (United States)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution in order to build an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been carried out through multinomial logistic regression (MLR) with respect to well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. Firstly, a robust sequential imputation algorithm has been used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. MLR has been chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. MLR is used to predict the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies has been considered as prior knowledge, and the resulting predicted probability (posterior) has been estimated from MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap should be carried out to estimate extra-sample prediction error by randomly
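
    The bootstrap step described at the end can be sketched as follows: fit on a resample, then score on the observations left out of that resample to approximate extra-sample prediction error. The data and the simple linear model are hypothetical stand-ins for the core/log variables:

```python
import random
import statistics

random.seed(7)

# Hypothetical porosity (x) vs. log-permeability (y) pairs.
xs = [random.uniform(5, 25) for _ in range(40)]
data = [(x, 0.8 * x + random.gauss(0.0, 0.5)) for x in xs]

def fit(sample):
    """Ordinary least-squares fit of y = a + b*x."""
    sx = [p[0] for p in sample]
    sy = [p[1] for p in sample]
    mx, my = statistics.mean(sx), statistics.mean(sy)
    b = sum((x - mx) * (y - my) for x, y in sample) / \
        sum((x - mx) ** 2 for x in sx)
    return my - b * mx, b

def mse(model, sample):
    a, b = model
    return statistics.mean([(y - (a + b * x)) ** 2 for x, y in sample])

# Bootstrap estimate of extra-sample prediction error: fit on a resample,
# evaluate on the observations left out of that resample.
errors = []
for _ in range(200):
    resample = [random.choice(data) for _ in data]
    held_out = [p for p in data if p not in resample]
    if held_out:  # a resample almost never contains every observation
        errors.append(mse(fit(resample), held_out))

boot_error = statistics.mean(errors)
apparent_error = mse(fit(data), data)  # optimistic in-sample error
```

    Scoring on the left-out observations is what makes the bootstrap estimate "extra-sample": the apparent (in-sample) error systematically understates prediction error for new intervals.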

  13. SNOW DEPTH ESTIMATION USING TIME SERIES PASSIVE MICROWAVE IMAGERY VIA GENETICALLY SUPPORT VECTOR REGRESSION (CASE STUDY URMIA LAKE BASIN)

    Directory of Open Access Journals (Sweden)

    N. Zahir

    2015-12-01

    Full Text Available Lake Urmia is one of the most important ecosystems of the country and is on the verge of disappearing. Many factors contribute to this crisis; among them, precipitation plays an important role. Precipitation takes many forms, one of which is snow. The snow on Sahand Mountain is one of the main and most important sources of Lake Urmia's water. Snow depth (SD) is a vital parameter for estimating the water balance in future years. In this regard, this study focuses on the SD parameter using the Special Sensor Microwave/Imager (SSM/I) instrument on board the Defense Meteorological Satellite Program (DMSP) F16 satellite. The usual statistical methods for retrieving SD include linear and non-linear ones, which use a least-squares procedure to estimate the SD model. Recently, kernel-based methods have been widely used for modelling statistical problems; among these, support vector regression (SVR) achieves high performance. Examination of the data shows the existence of outliers; to remove them, a wavelet denoising method is applied. After removing the outliers, the optimum bands and parameters for SVR must be selected, and feature selection methods have shown a direct effect on improving regression performance. We used a genetic algorithm (GA) to select suitable features of the SSM/I bands in order to estimate the SD model. The results for the testing data in the Sahand mountain area (R²_test = 0.9049, RMSE = 6.9654) show the high performance of SVR.

  14. Regression of Cardiac Rhabdomyomas in a Neonate after Everolimus Treatment

    Directory of Open Access Journals (Sweden)

    Helen Bornaun

    2016-01-01

    Full Text Available Cardiac rhabdomyoma often shows spontaneous regression and usually requires only close follow-up. However, patients with symptomatic inoperable rhabdomyomas may be candidates for everolimus treatment. Our patient had multiple inoperable cardiac rhabdomyomas causing serious left ventricular outflow-tract obstruction that showed a dramatic reduction in size after therapy with everolimus, a mammalian target of rapamycin (mTOR) inhibitor. After discontinuation of therapy, an increase in the diameter of the masses occurred and everolimus was restarted. After 6 months of treatment, the rhabdomyomas decreased in size and therapy was stopped. In conclusion, everolimus could be a possible novel therapy for neonates with clinically significant rhabdomyomas.

  15. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  16. A non-conventional discontinuous Lagrangian for viscous flow

    Science.gov (United States)

    Marner, F.

    2017-01-01

    Drawing an analogy with quantum mechanics, a new Lagrangian is proposed for a variational formulation of the Navier–Stokes equations which to date has remained elusive. A key feature is that the resulting Lagrangian is discontinuous in nature, posing additional challenges apropos the mathematical treatment of the related variational problem, all of which are resolvable. In addition to extending Lagrange's formalism to problems involving discontinuous behaviour, it is demonstrated that the associated equations of motion can self-consistently be interpreted within the framework of thermodynamics beyond local equilibrium, with the limiting case recovering the classical Navier–Stokes equations. Perspectives for applying the new formalism to discontinuous physical phenomena such as phase and grain boundaries, shock waves and flame fronts are provided. PMID:28386415

  17. The structure of rotational discontinuities. [in solar wind

    Science.gov (United States)

    Neugebauer, M.

    1989-01-01

    This study examines the structures of a set of rotational discontinuities detected in the solar wind by the ISEE-3 spacecraft. It is found that the complexity of the structure increases as the angle theta between the propagation vector k and the magnetic field decreases. For rotational discontinuities that propagate at a large angle to the field with an ion (left-hand) sense of rotation, the magnetic hodograms tend to be flattened, in agreement with prior numerical simulations. When theta is large, angular 'overshoots' are often observed at one or both ends of the discontinuity. When the propagation is nearly parallel to the field (when theta is small), many different types of structure are seen, ranging from straight lines, to S-shaped curves, to complex, disorganized shapes.

  18. News and views in discontinuous phase transitions

    Science.gov (United States)

    Nagler, Jan

    2014-03-01

    Recent progress in the theory of discontinuous percolation allows us to better understand the sudden emergence of large-scale connectedness both in networked systems and on the lattice. We analytically study mechanisms for the amplification of critical fluctuations at the phase transition point, non-self-averaging and power-law fluctuations. A single-event analysis allows us to establish criteria for discontinuous percolation transitions, even on the high-dimensional lattice. Some applications, such as salad-bowl percolation and inverse fragmentation, are discussed.

  19. Dynamic stability and failure modes of slopes in discontinuous rock mass

    International Nuclear Information System (INIS)

    Shimizu, Yasuhiro; Aydan, O.; Ichikawa, Yasuaki; Kawamoto, Toshikazu.

    1988-01-01

    The stability of rock slopes during earthquakes is of great concern in rock engineering works such as highway, dam, and nuclear power station constructions. As rock mass in nature is usually discontinuous, the stability of rock slopes will be governed by the spatial distribution of discontinuities in relation to the geometry of the slope and their mechanical properties, rather than by the rock element. The authors have carried out model tests on discontinuous rock slopes using three different model test techniques in order to investigate the dynamic behaviour and failure modes of slopes in discontinuous rock mass. This paper describes the findings and observations made on model rock slopes with various discontinuity patterns and slope geometries. In addition, some stability criteria are developed and the calculated results are compared with those of experiments. (author)

  20. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected estimator. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides a provable prediction interval in the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze protein mass spectroscopy data.
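
    The rank-based covariance idea rests on the transnormal identity ρ = sin(π/2 · τ): Kendall's tau is invariant to monotone marginal transforms, so the latent normal correlation can be recovered without knowing those transforms. A sketch on synthetic data (sample size, seed and the chosen transform are arbitrary, not from the paper):

```python
import math
import random

random.seed(3)

def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n, concordant, discordant = len(x), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Latent bivariate normal with correlation rho; one margin is then passed
# through a monotone transform, which leaves ranks (and tau) unchanged.
rho = 0.6
x, y = [], []
for _ in range(1000):
    u = random.gauss(0.0, 1.0)
    v = rho * u + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    x.append(u)
    y.append(math.exp(v))  # monotone transform of the latent normal

tau = kendall_tau(x, y)
rho_hat = math.sin(math.pi / 2.0 * tau)  # transnormal correlation estimate
```

    Applying this entrywise yields the rank-based correlation matrix that the paper then regularizes (ℓ1 penalization or banded Cholesky) and projects to the nearest correlation matrix.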

  1. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  2. Integrating travel behavior with land use regression to estimate dynamic air pollution exposure in Hong Kong.

    Science.gov (United States)

    Tang, Robert; Tian, Linwei; Thach, Thuan-Quoc; Tsui, Tsz Him; Brauer, Michael; Lee, Martha; Allen, Ryan; Yuchi, Weiran; Lai, Poh-Chin; Wong, Paulina; Barratt, Benjamin

    2018-04-01

    Epidemiological studies typically use subjects' residential address to estimate individuals' air pollution exposure. However, in reality this exposure is rarely static, as people move from home to work/study locations and commute during the day. Integrating mobility and time-activity data may reduce errors and biases, thereby improving estimates of health risks. To incorporate land use regression with movement and building infiltration data to estimate time-weighted air pollution exposures stratified by age, sex, and employment status for population subgroups in Hong Kong. A large population-representative survey (N = 89,385) was used to characterize travel behavior and derive a time-activity pattern for each subject. Infiltration factors calculated from indoor/outdoor monitoring campaigns were used to estimate micro-environmental concentrations. We evaluated dynamic and static (residential location-only) exposures in a staged modeling approach to quantify the effects of each component. Higher levels of exposure were found for working adults and students due to increased mobility. Compared to subjects aged 65 or older, exposures to PM2.5, BC, and NO2 were 13%, 39% and 14% higher, respectively, for subjects aged below 18, and 3%, 18% and 11% higher, respectively, for working adults. Exposures of females were approximately 4% lower than those of males. Dynamic exposures were around 20% lower than ambient exposures at residential addresses. The incorporation of infiltration and mobility increased heterogeneity in population exposure and allowed identification of highly exposed groups. The use of ambient concentrations may lead to exposure misclassification, which introduces bias, resulting in lower effect estimates than 'true' exposures. Copyright © 2018 Elsevier Ltd. All rights reserved.
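
    The time-weighted (dynamic) exposure described above is a duration-weighted average of micro-environmental concentrations, with indoor concentrations scaled by infiltration factors. A minimal sketch with hypothetical values (not the Hong Kong survey's):

```python
# Hypothetical daily time-activity pattern for a working adult: hours in
# each micro-environment, the ambient concentration there (µg/m³), and the
# infiltration factor applied indoors (1.0 = fully outdoor exposure).
micro_environments = [
    ("home",    14, 35.0, 0.6),
    ("office",   8, 50.0, 0.5),
    ("commute",  2, 80.0, 1.0),
]

total_hours = sum(h for _, h, _, _ in micro_environments)
dynamic_exposure = sum(h * c * f for _, h, c, f in micro_environments) / total_hours

# Static estimate: the ambient concentration at the residence, all day.
static_exposure = 35.0
```

    In this made-up example the dynamic estimate comes out below the static residential one, illustrating how infiltration indoors can outweigh higher on-road concentrations, consistent in direction with the roughly 20% reduction reported in the abstract.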

  3. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems; Methodes de Galerkine discontinues et analyse d'erreur a posteriori pour les problemes de diffusion heterogene

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method in which the weighted averages use weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained, showing that almost all indicators are independent of the heterogeneities. The exception is the non-conforming part of the error, which is evaluated using the Oswald interpolator. The second error estimator is sharper than the first, but slightly more costly. It is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh adaptation. (author)

  4. Regression methodology in groundwater composition estimation with composition predictions for Romuvaara borehole KR10

    Energy Technology Data Exchange (ETDEWEB)

    Luukkonen, A.; Korkealaakso, J.; Pitkaenen, P. [VTT Communities and Infrastructure, Espoo (Finland)

    1997-11-01

    Teollisuuden Voima Oy selected five investigation areas for preliminary site studies (1987-1992). The more detailed site investigation project, launched at the beginning of 1993 and presently supervised by Posiva Oy, concentrates on three investigation areas. Romuvaara at Kuhmo is one of the present target areas, and the geochemical, structural and hydrological data used in this study are extracted from there. The aim of the study is to develop suitable methods for groundwater composition estimation based on a group of known hydrogeological variables. The input variables used are related to the host type of groundwater, the hydrological conditions around the host location, the mixing potentials between different types of groundwater, and the minerals equilibrated with the groundwater. The output variables are electrical conductivity, Ca, Mg, Mn, Na, K, Fe, Cl, S, HS, SO{sub 4}, alkalinity, {sup 3}H, {sup 14}C, {sup 13}C, Al, Sr, F, Br and I concentrations, and the pH of the groundwater. The methodology is to associate the known hydrogeological conditions (i.e. input variables) with the known water compositions (output variables), and to evaluate mathematical relations between these groups. Output estimates are produced with two separate procedures: partial least squares regression on the principal components of the input variables, and neural networks trained with input-output pairs. Coefficients of the linear equations and the trained networks are alternative methods for actual predictions. The quality of the output predictions is monitored with confidence limit estimates, evaluated from input variable covariances and output variances, and with charge balance calculations. Groundwater compositions in Romuvaara borehole KR10 are predicted at 10 metre intervals with both prediction methods. 46 refs.

  5. MHD intermediate shock discontinuities: Pt. 1

    International Nuclear Information System (INIS)

    Kennel, C.F.; Blandford, R.D.; Coppi, P.

    1989-01-01

    Recent numerical investigations have focused attention once more on the role of intermediate shocks in MHD. Four types of intermediate shock are identified using a graphical representation of the MHD Rankine-Hugoniot conditions. This same representation can be used to exhibit the close relationship of intermediate shocks to switch-on shocks and rotational discontinuities. The conditions under which intermediate discontinuities can be found are elucidated. The variations in velocity, pressure, entropy and magnetic-field jumps with upstream parameters in intermediate shocks are exhibited graphically. The evolutionary arguments traditionally advanced against intermediate shocks may fail because the equations of classical MHD are not strictly hyperbolic. (author)

  6. Evaluation of distant results after lamivudine discontinuation in children with chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Lech Chyczewski

    2010-08-01

    The aim of this study was to estimate the distant results after discontinuation of long-term lamivudine treatment in children with chronic hepatitis B. Furthermore, the emergence of HBV polymerase gene variants in the YMDD motif during therapy was examined. Additionally, the most commonly occurring types of mutation in the polymerase YMDD region were investigated. The study involved 27 HBeAg-positive children with chronic hepatitis B. The children included in lamivudine therapy had previously been treated, without effect, with interferon alpha.

  7. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to the theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting, which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
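As a concrete illustration of the Nadaraya-Watson estimator this record mentions, here is a minimal pure-Python sketch with a Gaussian kernel and toy data; the bandwidth is simply assumed rather than selected by any data-driven rule:

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0: a kernel-weighted average of the ys,
    with Gaussian weights K((x0 - x_i)/h). Bandwidth h is assumed, not tuned."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Toy data, roughly y = x with a little noise
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.1, 1.9, 3.2, 4.0]
fit = nadaraya_watson(2.0, xs, ys, h=0.5)  # local weighted average near x = 2
```

With a small bandwidth the estimate stays close to the nearby observations; letting h grow recovers the global mean, which is the usual bias-variance trade-off behind bandwidth selection.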

  8. Contribution of Strong Discontinuities to the Power Spectrum of the Solar Wind

    International Nuclear Information System (INIS)

    Borovsky, Joseph E.

    2010-01-01

    Eight and a half years of magnetic field measurements (2^22 samples) from the ACE spacecraft in the solar wind at 1 AU are analyzed. Strong (large-rotation-angle) discontinuities in the solar wind are collected and measured. An artificial time series is created that preserves the timing and amplitudes of the discontinuities. The power spectral density of the discontinuity series is calculated and compared with the power spectral density of the solar-wind magnetic field. The strong discontinuities produce a power-law spectrum in the "inertial subrange" with a spectral index near the Kolmogorov -5/3 index. The discontinuity spectrum contains about half of the power of the full solar-wind magnetic field over this "inertial subrange". Warnings are issued about the significant contribution of discontinuities to the spectrum of the solar wind, which complicates the interpretation of spectral power and spectral indices.

  9. Discontinuous Galerkin finite element methods for hyperbolic differential equations

    NARCIS (Netherlands)

    van der Vegt, Jacobus J.W.; van der Ven, H.; Boelens, O.J.; Boelens, O.J.; Toro, E.F.

    2002-01-01

    In this paper a survey is given of the important steps in the development of discontinuous Galerkin finite element methods for hyperbolic partial differential equations. Special attention is paid to the application of the discontinuous Galerkin method to the solution of the Euler equations of gas dynamics.

  10. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  11. Clopidogrel discontinuation and platelet reactivity following coronary stenting

    LENUS (Irish Health Repository)

    2011-01-01

    Summary. Aims: Antiplatelet therapy with aspirin and clopidogrel is recommended for 1 year after drug-eluting stent (DES) implantation or myocardial infarction. However, the discontinuation of antiplatelet therapy has become an important issue as recent studies have suggested a clustering of ischemic events within 90 days of clopidogrel withdrawal. The objective of this investigation was to explore the hypothesis that there is a transient ‘rebound’ increase in platelet reactivity within 3 months of clopidogrel discontinuation. Methods and Results: In this prospective study, platelet function was assessed in patients taking aspirin and clopidogrel for at least 1 year following DES implantation. Platelet aggregation was measured using a modification of light transmission aggregometry in response to multiple concentrations of adenosine diphosphate (ADP), epinephrine, arachidonic acid, thrombin receptor activating peptide and collagen. Clopidogrel was stopped and platelet function was reassessed 1 week, 1 month and 3 months later. Thirty-two patients on dual antiplatelet therapy were recruited. Discontinuation of clopidogrel increased platelet aggregation in response to all agonists except arachidonic acid. Platelet aggregation in response to ADP (2.5, 5, 10, and 20 μM) and epinephrine (5 and 20 μM) was significantly increased at 1 month compared with 3 months following clopidogrel withdrawal. Thus, a transient period of increased platelet reactivity to both ADP and epinephrine was observed 1 month after clopidogrel discontinuation. Conclusions: This study demonstrates a transient increase in platelet reactivity 1 month after clopidogrel withdrawal. This phenomenon may, in part, explain the known clustering of thrombotic events observed after clopidogrel discontinuation. This observation requires confirmation in larger populations.

  12. Estimation of operational parameters for a direct injection turbocharged spark ignition engine by using regression analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Tosun Erdi

    2017-01-01

    This study aimed to estimate the variation of several engine control parameters within the rotational speed-load map, using regression analysis and artificial neural network techniques. Duration of injection, specific fuel consumption, and exhaust gas at the turbine inlet and within the catalytic converter brick were chosen as the output parameters for the models, while engine speed and brake mean effective pressure were selected as independent variables for prediction. Measurements were performed on a turbocharged direct injection spark ignition engine fueled with gasoline. A three-layer feed-forward structure and a back-propagation algorithm were used for training the artificial neural network. It was concluded that this technique is capable of predicting engine parameters with better accuracy than linear and non-linear regression techniques.

  13. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact, with an illustration based on data from a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were used to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested for overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals to the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared their parameter estimates to those given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y] = 21.03) was greater than the mean (E[Y] = 5.93), and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of the estimates for two variables (using the Internet to seek health information, and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimated variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems particularly appropriate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
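The chi2/df diagnostic this record relies on can be sketched as follows; the counts and fitted means below are made up for illustration and are not the study's data:

```python
# Illustrative overdispersion check: the ratio of the Pearson chi-square
# statistic (sum of squared Pearson residuals) to the residual degrees of
# freedom should be near 1 for a well-specified Poisson model.

def pearson_dispersion(observed, fitted, n_params):
    chi2 = sum((y - mu) ** 2 / mu for y, mu in zip(observed, fitted))
    return chi2 / (len(observed) - n_params)

observed = [0, 1, 2, 2, 4, 5, 6, 8, 12, 19]   # more spread than Poisson allows
fitted = [5.93] * len(observed)               # intercept-only fit (the mean)
ratio = pearson_dispersion(observed, fitted, n_params=1)
# A ratio well above 1 signals overdispersion, analogous to the 3.26 reported
```

When the ratio is large, variance estimates from the plain Poisson fit are too small, which is exactly the motivation for the quasi-Poisson, robust, and negative binomial alternatives compared in the abstract.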

  14. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  15. Research progress on criteria for discontinuation of EGFR inhibitor therapy

    Directory of Open Access Journals (Sweden)

    Zhuang HQ

    2012-10-01

    Hong-qing Zhuang, Zhi-yong Yuan, Jun Wang, Ping Wang, Lu-jun Zhao, Bai-lin Zhang. Department of Radiotherapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin Key Laboratory of Cancer Prevention and Therapy, Tianjin Lung Cancer Center, Tianjin, People's Republic of China. Abstract: The clinical success of epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs) as therapeutic agents has prompted great interest in their further development and clinical testing for a wide variety of malignancies. However, most studies have focused on the efficacy of TKIs, and few studies have been done on the criteria for their discontinuation. The current standard for drug discontinuation is “until progression”, based on change in tumor size. However, tumor size is not related to the gene expression which ultimately determines the efficacy of TKIs, and it is also difficult to make a thorough and correct prediction based on tumor size when the TKI is discontinued. Nevertheless, clinical evaluation of the criteria for TKI discontinuation is still in its early days. Some promising findings have started to emerge. With improving knowledge of EGFR and its inhibitors, it is expected that the criteria for discontinuation of EGFR inhibitor therapy will become clearer. Keywords: epidermal growth factor receptor, drug discontinuation, acquired drug resistance

  16. Convergence Improvement of Response Matrix Method with Large Discontinuity Factors

    International Nuclear Information System (INIS)

    Yamamoto, Akio

    2003-01-01

    In the response matrix method, a numerical divergence problem has been reported when extremely small or large discontinuity factors are utilized in the calculations. In this paper, an alternative response matrix formulation to solve the divergence problem is discussed, and properties of the iteration matrices are investigated through eigenvalue analyses. In the conventional response matrix formulation, partial currents between adjacent nodes are assumed to be discontinuous, and outgoing partial currents are converted into incoming partial currents by the discontinuity factor matrix. That is, the partial currents of the homogeneous system (i.e., homogeneous partial currents) are treated in the conventional response matrix formulation. In this approach, the spectral radius of the iteration matrix for the partial currents may exceed unity when an extremely small or large discontinuity factor is used. In contrast, an alternative response matrix formulation using heterogeneous partial currents is discussed in this paper. In the latter approach, partial currents are assumed to be continuous between adjacent nodes, and discontinuity factors are directly considered in the coefficients of the response matrix. From the eigenvalue analysis of the iteration matrix for the one-group, one-dimensional problem, the spectral radius for the heterogeneous partial current formulation does not exceed unity even if an extremely small or large discontinuity factor is used in the calculation; the numerical stability of the alternative formulation is superior to that of the conventional one. The numerical stability of the heterogeneous partial current formulation is also confirmed by a two-dimensional light water reactor core analysis. Since the heterogeneous partial current formulation does not require any approximation, the converged solution exactly reproduces the reference solution when the discontinuity factors are directly derived from the reference calculation.
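Since the convergence argument turns on whether the spectral radius of the iteration matrix exceeds unity, a power-iteration estimate of that radius can be sketched as follows; the 2x2 matrix is a generic illustrative example, not one of the paper's response-matrix operators:

```python
# Power-iteration estimate of the dominant eigenvalue magnitude (spectral
# radius) of a nonnegative iteration matrix. A fixed-point iteration
# x_{k+1} = A x_k + b converges only if the spectral radius of A is below 1.

def spectral_radius(A, iters=100):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)   # growth factor per application of A
        v = [c / lam for c in w]       # renormalize the iterate
    return lam

stable = [[0.5, 0.1], [0.2, 0.4]]      # eigenvalues 0.6 and 0.3
rho = spectral_radius(stable)          # 0.6 < 1, so the iteration converges
```

This is the same criterion the paper applies analytically: if a formulation's iteration matrix keeps its spectral radius below unity for all admissible discontinuity factors, the sweep is numerically stable.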

  17. Estimation of Production KWS Maize Hybrids Using Nonlinear Regression

    Directory of Open Access Journals (Sweden)

    Florica MORAR

    2018-06-01

    This article approaches the model of non-linear regression and the method of least squares with examples, including calculations for a logarithmic model. The calculations use data obtained from a study that observed the phases of growth and development in KWS maize hybrids in order to analyze the influence of the MMB quality indicator on grain production per hectare.
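A least-squares fit of the logarithmic model y = a + b*ln(x) reduces to ordinary linear least squares after the substitution t = ln(x); a minimal sketch with synthetic data (the article's MMB/yield figures are not reproduced here):

```python
import math

# Fit y = a + b*ln(x) by least squares via the substitution t = ln(x),
# which makes the model linear in its parameters. Data below are synthetic.

def fit_log_model(xs, ys):
    ts = [math.log(x) for x in xs]
    n = len(ts)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    b = sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys)) / \
        sum((t - t_bar) ** 2 for t in ts)
    a = y_bar - b * t_bar
    return a, b

xs = [1.0, 2.0, 4.0, 8.0]
ys = [1.0 + 2.0 * math.log(x) for x in xs]  # exact model with a = 1, b = 2
a, b = fit_log_model(xs, ys)
```

On noise-free data the fit recovers the true coefficients exactly; with real observations the same formulas give the least-squares estimates of a and b.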

  18. Schroedinger propagation of initial discontinuities leads to divergence of moments

    International Nuclear Information System (INIS)

    Marchewka, A.; Schuss, Z.

    2009-01-01

    We show that the large-phase expansion of the Schroedinger propagation of an initially discontinuous wave function leads to the divergence of the average energy, momentum, and displacement, rendering such states unphysical. If initially discontinuous wave functions are considered to be approximations to continuous ones, the determining factor for the spreading rate of these averages is the maximal gradient of the initial wave function. A dilemma therefore arises between the inclusion of discontinuous wave functions in quantum mechanics and the requirement of finite moments.

  19. Schroedinger propagation of initial discontinuities leads to divergence of moments

    Energy Technology Data Exchange (ETDEWEB)

    Marchewka, A., E-mail: avi.marchewka@gmail.co [Ruppin Academic Center, Emek-Hefer 40250 (Israel); Schuss, Z., E-mail: schuss@post.tau.ac.i [Department of Mathematics, Tel-Aviv University, Ramat-Aviv, 69978 Tel-Aviv (Israel)

    2009-09-21

    We show that the large-phase expansion of the Schroedinger propagation of an initially discontinuous wave function leads to the divergence of the average energy, momentum, and displacement, rendering such states unphysical. If initially discontinuous wave functions are considered to be approximations to continuous ones, the determining factor for the spreading rate of these averages is the maximal gradient of the initial wave function. A dilemma therefore arises between the inclusion of discontinuous wave functions in quantum mechanics and the requirement of finite moments.

  20. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    Science.gov (United States)

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-09-01

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to the targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity in multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and substantially outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.

  1. Forensic Excavation of Rock Masses: A Technique to Investigate Discontinuity Persistence

    Science.gov (United States)

    Shang, J.; Hencher, S. R.; West, L. J.; Handley, K.

    2017-11-01

    True persistence of rock discontinuities (areas with insignificant tensile strength) is an important factor controlling the engineering behaviour of fractured rock masses, but is extremely difficult to quantify using current geological survey methodologies, even where there is good rock exposure. Trace length as measured in the field or using remote measurement devices is actually only broadly indicative of persistence for rock engineering practice and numerical modelling. Visible traces of discontinuities are treated as if they were open fractures within rock mass classifications, despite many such traces being non-persistent and actually retaining considerable strength. The common assumption of 100% persistence, based on trace length, is generally extremely conservative in terms of strength and stiffness, but not always so, and may lead to a wrong prediction of failure mechanism or of excavatability. Assuming full persistence would give hopelessly incorrect predictions of hydraulic conductivity. A new technique termed forensic excavation of rock masses is introduced, as a procedure for directly investigating discontinuity persistence. This technique involves non-explosive excavation of rock masses by injecting an expansive chemical splitter along incipient discontinuities. On expansion, the splitter causes the incipient traces to open as true joints. Experiments are described in which near-planar rock discontinuities, through siltstone and sandstone, were opened up by injecting the splitter into holes drilled along the lines of visible traces of the discontinuities in the laboratory and in the field. Once exposed, the surfaces were examined to investigate the pre-existing persistence characteristics of the incipient discontinuities. One conclusion from this study is that the visible trace length of a discontinuity can be a poor indicator of true persistence (defined as the area of a fracture with negligible tensile strength). An observation from this series of experiments

  2. The effect of discontinuities on the corrosion behaviour of copper canisters

    International Nuclear Information System (INIS)

    King, F.

    2004-03-01

    Discontinuities may remain in the weld region of copper canisters following the final closure welding and inspection procedures. Although the shell of the copper canister is expected to exhibit excellent corrosion properties in the repository environment, the question remains what impact these discontinuities might have on the long-term performance and service life of the canister. A review of the relevant corrosion literature has been carried out and an expert opinion on the impact of these discontinuities on the canister lifetime has been developed. The amount of oxidant in the repository is limited, so the maximum wall penetration is expected to be small. Breakdown of the Cu2O/Cu(OH)2 film at a critical electrochemical potential determines where and when pits initiate, not the presence of pit-shaped surface discontinuities. The factors controlling pit growth and death are well understood. There is evidence for a maximum pit radius for copper in chloride solutions, above which the small anodic:cathodic surface-area ratio required for the formation of deep pits cannot be sustained. This maximum pit radius is of the order of 0.1-0.5 mm. Surface discontinuities larger than this size are unlikely to propagate as pits, and pits generated from smaller discontinuities will die once they reach this maximum size. Death of propagating pits will be compounded by the decrease in oxygen flux to the canister as the repository environment becomes anoxic. Surface discontinuities could impact the SCC behaviour either through their effect on the local environment or via stress concentration or intensification. There is no evidence that surface discontinuities will affect the initiation of SCC by ennoblement of the corrosion potential or the formation of locally aggressive conditions. Stress concentration at pits could lead to crack initiation under some circumstances, but the stress intensity factor for the resultant cracks, or for pre-existing crack-like discontinuities, will be smaller than the

  3. Allelic drop-out probabilities estimated by logistic regression

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Asplund, Maria

    2012-01-01

    We discuss the model for estimating drop-out probabilities presented by Tvedebrink et al. [7] and the concerns that have been raised. The criticism of the model has demonstrated that the model is not perfect. However, the model is very useful for advanced forensic genetic work, where allelic drop-out is occurring. With this discussion, we hope to improve the drop-out model so that it can be used in practical forensic genetics, and to stimulate further discussion. We discuss how to estimate drop-out probabilities when using a varying number of PCR cycles and other experimental conditions.

  4. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientations from a 3D point cloud of a rock mass surface. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity planes. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against measurements made in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and able to meet engineering needs.
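Step (3), the RANSAC plane fit, can be sketched in pure Python on synthetic points; this is illustrative only, and the paper's full pipeline also includes the clustering, segmentation, and coordinate-transformation stages:

```python
import random

# Minimal RANSAC plane fit: repeatedly fit a plane through 3 random points
# and keep the plane supported by the most inliers. Synthetic data only.

def plane_from_points(p, q, r):
    """Plane through three points as (unit normal n, offset d) with n.x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]          # ZeroDivisionError if points collinear
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    rng = random.Random(seed)
    best, best_count = None, -1
    for _ in range(iters):
        try:
            n, d = plane_from_points(*rng.sample(points, 3))
        except ZeroDivisionError:      # degenerate (collinear) sample
            continue
        count = sum(abs(sum(n[i] * p[i] for i in range(3)) - d) < tol
                    for p in points)
        if count > best_count:
            best, best_count = (n, d), count
    return best

# A 5x5 grid of points on the plane z = 1, plus two outliers
pts = [(x * 0.1, y * 0.1, 1.0) for x in range(5) for y in range(5)]
pts += [(0.2, 0.2, 3.0), (0.4, 0.1, -2.0)]
normal, d = ransac_plane(pts)
```

The recovered unit normal is what the paper converts into a dip direction and dip angle; RANSAC's inlier voting is what makes the fit robust to the outlier points that survive segmentation.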

  5. On discontinuous Galerkin and discrete ordinates approximations for neutron transport equation and the critical eigenvalue

    International Nuclear Information System (INIS)

    Asadzadeh, M.; Thevenot, L.

    2010-01-01

    The objective of this paper is to give a mathematical framework for a fully discrete numerical approach to the study of the neutron transport equation in a cylindrical domain (container model). More specifically, we consider the discontinuous Galerkin (DG) finite element method for the spatial approximation of the mono-energetic, critical neutron transport equation in an infinite cylindrical domain in R3 with a polygonal convex cross-section. The velocity discretization relies on a special quadrature rule developed to give optimal estimates in the discrete ordinate parameters compatible with the quasi-uniform spatial mesh. We use interpolation spaces and derive optimal error estimates, up to the maximal available regularity, for the fully discrete scalar flux. Finally, we employ a duality argument and prove superconvergence estimates for the critical eigenvalue.

  6. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  7. Spatial correlation in Bayesian logistic regression with misclassification

    DEFF Research Database (Denmark)

    Bihrmann, Kristine; Toft, Nils; Nielsen, Søren Saxmose

    2014-01-01

    Standard logistic regression assumes that the outcome is measured perfectly. In practice, this is often not the case, which could lead to biased estimates if not accounted for. This study presents Bayesian logistic regression with adjustment for misclassification of the outcome applied to data...

  8. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  9. Establishment of regression dependences. Linear and nonlinear dependences

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1994-01-01

    The main problems of determining linear and 19 types of nonlinear regression dependences are discussed in full. It is taken into account that total dispersions are the sum of measurement dispersions and the dispersions of parameter variation themselves. Approaches to determining all dispersions are described. It is shown that the least-squares fit gives inconsistent estimates for industrial objects and processes. Correction methods that take into account comparable measurement errors in both variables make it possible to obtain consistent estimates of the regression equation parameters. The condition under which applying the correction technique is worthwhile is given. The technique for determining nonlinear regression dependences, taking into account the form of the dependence and comparable errors in both variables, is described. 6 refs., 1 tab
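
    One standard correction of this kind is Deming regression, which is consistent when both variables carry comparable measurement error; a minimal sketch on simulated data (this illustrates the general idea, not the paper's specific correction methods):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
true_x = rng.uniform(0, 10, n)
true_y = 1.0 + 0.8 * true_x
# Both variables are observed with comparable measurement error.
x = true_x + rng.normal(0, 1.0, n)
y = true_y + rng.normal(0, 1.0, n)

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

# Ordinary least squares: slope is attenuated (inconsistent).
b_ols = sxy / sxx

# Deming correction assuming equal error variances (delta = 1).
delta = 1.0
b_dem = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
print(b_ols, b_dem)  # OLS underestimates the true slope 0.8; Deming recovers it
```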

  10. Estimation of genotype × environment interactions, in a grass-based system, for milk yield, body condition score, and body weight using random regression models

    NARCIS (Netherlands)

    Berry, D.P.; Buckley, F.; Dillon, P.; Evans, R.D.; Rath, M.; Veerkamp, R.F.

    2003-01-01

    (Co)variance components for milk yield, body condition score (BCS), body weight (BW), BCS change and BW change over different herd-year mean milk yields (HMY) and nutritional environments (concentrate feeding level, grazing severity and silage quality) were estimated using a random regression model.

  11. A simple model of discontinuous firm’s growth

    OpenAIRE

    D'Elia, Enrico

    2011-01-01

    Typically, firms change their size through a row of discrete leaps over time. Sunk costs, regulatory, financial and organizational constraints, talent distribution and other factors may explain this fact. However, firms tend to grow or fall discontinuously even if those inertial factors were removed. For instance, a very essential model of discontinuous growth can be based on a couple of assumptions concerning only technology and entrepreneurs’ strategy, that is: (a) in the short run, the...

  12. Reciprocity principle for scattered fields from discontinuities in waveguides.

    Science.gov (United States)

    Pau, Annamaria; Capecchi, Danilo; Vestroni, Fabrizio

    2015-01-01

    This study investigates the scattering of guided waves from a discontinuity exploiting the principle of reciprocity in elastodynamics, written in a form that applies to waveguides. The coefficients of reflection and transmission for an arbitrary mode can be derived as long as the principle of reciprocity is satisfied at the discontinuity. Two elastodynamic states are related by the reciprocity. One is the response of the waveguide in the presence of the discontinuity, with the scattered fields expressed as a superposition of wave modes. The other state is the response of the waveguide in the absence of the discontinuity oscillating according to an arbitrary mode. The semi-analytical finite element method is applied to derive the needed dispersion relation and wave mode shapes. An application to a solid cylinder with a symmetric double change of cross-section is presented. This model is assumed to be representative of a damaged rod. The coefficients of reflection and transmission of longitudinal waves are investigated for selected values of notch length and varying depth. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Early Discontinuation of Montelukast Treatment; A Danish Nationwide Utilization Study

    DEFF Research Database (Denmark)

    Farah, Rahmo I; Damkier, Per; Christiansen, Anders

    2018-01-01

    Montelukast, a leukotriene receptor antagonist, was marketed in 1998 as an oral supplementary treatment for patients with mild to moderate asthma. The aim of this study was to describe the early discontinuation pattern among montelukast users in Denmark in the period of 1 March 1998 to 31 December…. Early discontinuation was defined as failing to fill a second prescription for montelukast within at least a year after the initial montelukast prescription. Among 135,271 included montelukast users, 47,480 (35%) discontinued the use of montelukast after a single redeemed prescription. The trend

  14. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with logit link function. The performance of the model will be investigated. We will discuss the inference of how to estimate the parameters and hyper-parameters and to predict as well. Furthermore, simulation studies will be explained in the last section.
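
    The GPR core (posterior mean and variance under an RBF kernel with Gaussian noise) can be sketched as follows; the binomial logit-link case described in the paper additionally requires an approximation such as Laplace's method, which is omitted here.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(2)
x_train = np.linspace(0, 5, 25)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 25)
x_test = np.array([2.5])

noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(25)
k_star = rbf(x_test, x_train)

# GP posterior mean and variance at the test location.
alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha
var = rbf(x_test, x_test) - k_star @ np.linalg.solve(K, k_star.T)
print(mean, var)
```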

  15. Estimasi Model Seemingly Unrelated Regression (SUR dengan Metode Generalized Least Square (GLS

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. Least squares estimates the parameters of one or more regression equations, but correlation among the errors of related equations is not allowed. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data and finds that SUR using GLS is better than OLS because SUR produces smaller errors.
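
    A minimal sketch of the SUR/FGLS idea on simulated data (two equations with correlated errors; all names and values are illustrative): equation-by-equation OLS supplies residuals, the residual covariance is estimated, and GLS is then run on the stacked system.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Errors are correlated across the two equations (the SUR situation).
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -1.0 + 0.5 * x2 + e[:, 1]

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# Step 1: equation-by-equation OLS to obtain residuals.
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = R.T @ R / n  # estimated cross-equation error covariance

# Step 2: feasible GLS on the stacked system, Omega = S kron I_n.
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(S), np.eye(n))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # [intercept1, slope1, intercept2, slope2]
```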

  17. Estimation of nutrients and organic matter in Korean swine slurry using multiple regression analysis of physical and chemical properties.

    Science.gov (United States)

    Suresh, Arumuganainar; Choi, Hong Lim

    2011-10-01

    Swine waste land application has increased due to organic fertilization, but excess application in an arable system can cause environmental risk. Therefore, in situ characterization of such resources is important prior to application. To explore this, 41 swine slurry samples were collected from Korea, and wide differences were observed in their physico-biochemical properties. However, significant (P < 0.05) relationships were found among the properties, and the hydrometer, EC meter, drying oven and pH meter were found useful to estimate Mn, Fe, Ca, K, Al, Na, N and 5-day biochemical oxygen demand (BOD₅) at improved R² values of 0.83, 0.82, 0.77, 0.75, 0.67, 0.47, 0.88 and 0.70, respectively. The results from this study suggest that multiple-property regressions can facilitate the prediction of micronutrients and organic matter much better than a single-property regression for livestock waste. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Estimating and mapping forest biomass using regression models and Spot-6 images (case study: Hyrcanian forests of north of Iran).

    Science.gov (United States)

    Motlagh, Mohadeseh Ghanbari; Kafaky, Sasan Babaie; Mataji, Asadollah; Akhavan, Reza

    2018-05-21

    Hyrcanian forests of North of Iran are of great importance in terms of various economic and environmental aspects. In this study, Spot-6 satellite images and regression models were applied to estimate above-ground biomass in these forests. This research was carried out in six compartments in three climatic (semi-arid to humid) types and two altitude classes. In the first step, ground sampling methods at the compartment level were used to estimate aboveground biomass (Mg/ha). Then, by reviewing the results of other studies, the most appropriate vegetation indices were selected. In this study, three indices of NDVI, RVI, and TVI were calculated. We investigated the relationship between the vegetation indices and aboveground biomass measured at sample-plot level. Based on the results, the relationship between aboveground biomass values and vegetation indices was a linear regression with the highest level of significance for NDVI in all compartments. Since at the compartment level the correlation coefficient between NDVI and aboveground biomass was the highest, NDVI was used for mapping aboveground biomass. According to the results of this study, biomass values were highly different in various climatic and altitudinal classes with the highest biomass value observed in humid climate and high-altitude class.
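
    The NDVI-based fit described above can be sketched as follows; the reflectance and biomass numbers are hypothetical, not the study's data.

```python
import numpy as np

# NDVI from red and near-infrared reflectance (hypothetical plot values).
red = np.array([0.10, 0.08, 0.12, 0.06, 0.09])
nir = np.array([0.45, 0.50, 0.40, 0.55, 0.48])
ndvi = (nir - red) / (nir + red)

# Plot-level above-ground biomass (Mg/ha), hypothetical.
agb = np.array([180.0, 210.0, 150.0, 240.0, 200.0])

# Linear fit AGB = a + b * NDVI, as in the study.
b, a = np.polyfit(ndvi, agb, 1)
pred = a + b * ndvi
r2 = 1.0 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
print(a, b, r2)
```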

  19. Continuous water-quality monitoring and regression analysis to estimate constituent concentrations and loads in the Red River of the North at Fargo and Grand Forks, North Dakota, 2003-12

    Science.gov (United States)

    Galloway, Joel M.

    2014-01-01

    The Red River of the North (hereafter referred to as “Red River”) Basin is an important hydrologic region where water is a valuable resource for the region’s economy. Continuous water-quality monitors have been operated by the U.S. Geological Survey, in cooperation with the North Dakota Department of Health, Minnesota Pollution Control Agency, City of Fargo, City of Moorhead, City of Grand Forks, and City of East Grand Forks at the Red River at Fargo, North Dakota, from 2003 through 2012 and at Grand Forks, N.Dak., from 2007 through 2012. The purpose of the monitoring was to provide a better understanding of the water-quality dynamics of the Red River and provide a way to track changes in water quality. Regression equations were developed that can be used to estimate concentrations and loads for dissolved solids, sulfate, chloride, nitrate plus nitrite, total phosphorus, and suspended sediment using explanatory variables such as streamflow, specific conductance, and turbidity. Specific conductance was determined to be a significant explanatory variable for estimating dissolved solids concentrations at the Red River at Fargo and Grand Forks. The regression equations provided good relations between dissolved solid concentrations and specific conductance for the Red River at Fargo and at Grand Forks, with adjusted coefficients of determination of 0.99 and 0.98, respectively. Specific conductance, log-transformed streamflow, and a seasonal component were statistically significant explanatory variables for estimating sulfate in the Red River at Fargo and Grand Forks. Regression equations provided good relations between sulfate concentrations and the explanatory variables, with adjusted coefficients of determination of 0.94 and 0.89, respectively. For the Red River at Fargo and Grand Forks, specific conductance, streamflow, and a seasonal component were statistically significant explanatory variables for estimating chloride. For the Red River at Grand Forks, a time
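
    A regression of the form used in the report, constituent concentration on specific conductance, log-transformed streamflow, and a seasonal component, can be sketched on simulated data; the coefficients below are illustrative, not the published equations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 365
t = np.arange(n)                        # day of year
q = np.exp(rng.normal(3.0, 0.8, n))     # streamflow (illustrative units)
sc = rng.normal(800, 120, n)            # specific conductance (uS/cm)
season_s = np.sin(2 * np.pi * t / 365)
season_c = np.cos(2 * np.pi * t / 365)

# Simulated "sulfate" driven by conductance, log-flow and season.
conc = 0.05 * sc + 8.0 * np.log(q) + 10.0 * season_s + rng.normal(0, 2, n)

# Explanatory variables: conductance, log streamflow, seasonal terms.
X = np.column_stack([np.ones(n), sc, np.log(q), season_s, season_c])
beta, *_ = np.linalg.lstsq(X, conc, rcond=None)
fit = X @ beta
r2 = 1.0 - np.sum((conc - fit) ** 2) / np.sum((conc - conc.mean()) ** 2)
print(beta, r2)
```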

  20. Discontinuous precipitation and ordering in Ni2V-Cu alloys

    International Nuclear Information System (INIS)

    Sukhanov, V.D.; Boyarshinova, T.S.; Shashkov, O.D.

    1986-01-01

    Ni-V-Cu system alloys were used to investigate the effect of ordering on the decomposition of a supersaturated solid solution. It was found that ordering during grain-boundary migration (discontinuous ordering) stimulated a change from a continuous to a discontinuous precipitation mechanism

  1. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    Science.gov (United States)

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R[superscript 2]). For this purpose,…

  2. Factors Influencing Contraceptive Choice and Discontinuation ...

    African Journals Online (AJOL)

    Erah

    2010-03-30

    women indicated that their HIV status dictated contraceptive decisions, particularly with ... Women reported method discontinuation because of side effects, having met desired parity, ...... Washington, D.C., 2009. ... Accessed March 30, 2010.

  3. 41 CFR 101-39.105 - Discontinuance or curtailment of service.

    Science.gov (United States)

    2010-07-01

    ..., AND MOTOR VEHICLES 39-INTERAGENCY FLEET MANAGEMENT SYSTEMS 39.1-Establishment, Modification, and Discontinuance of Interagency Fleet Management Systems § 101-39.105 Discontinuance or curtailment of service. (a... efficiencies are realized from the operation of any fleet management system, the Administrator, GSA, will...

  4. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
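
    A sketch of the idea on simulated data: the hyperbola 0.5 * ((x - c) + sqrt((x - c)^2 + gamma^2)) is a smoothed version of the hinge max(0, x - c), so adding it as a covariate joins two linear segments smoothly. The parameterization here is an illustrative choice, not necessarily the authors' exact form.

```python
import numpy as np
from scipy.optimize import curve_fit

def hinge_smooth(x, c, gamma):
    """Hyperbolic branch: smooth version of max(0, x - c);
    gamma controls the curvature near the join point c."""
    return 0.5 * ((x - c) + np.sqrt((x - c) ** 2 + gamma ** 2))

def model(x, b0, b1, b2, c, gamma):
    # Two linear phases joined smoothly at c.
    return b0 + b1 * x + b2 * hinge_smooth(x, c, gamma)

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 200)
y = model(x, 1.0, 0.5, 2.0, 6.0, 0.8) + rng.normal(0, 0.1, 200)

# Nonlinear least squares recovers the slopes, join point and curvature.
p0 = [0.0, 1.0, 1.0, 5.0, 1.0]
popt, _ = curve_fit(model, x, y, p0=p0)
print(popt)  # estimates of b0, b1, b2, join point c, curvature gamma
```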

  5. Testing the Perturbation Sensitivity of Abortion-Crime Regressions

    Directory of Open Access Journals (Sweden)

    Michał Brzeziński

    2012-06-01

    Full Text Available The hypothesis that the legalisation of abortion contributed significantly to the reduction of crime in the United States in the 1990s is one of the most prominent ideas from the recent “economics-made-fun” movement sparked by the book Freakonomics. This paper expands on the existing literature about the computational stability of abortion-crime regressions by testing the sensitivity of coefficient estimates to small amounts of data perturbation. In contrast to previous studies, we use a new data set on crime correlates for each of the US states, the original model specification and estimation methodology, and an improved data perturbation algorithm. We find that the coefficient estimates in abortion-crime regressions are not computationally stable and, therefore, are unreliable.
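
    The perturbation-sensitivity idea can be sketched generically: refit the model on many slightly perturbed copies of the data and examine the spread of the coefficient estimates. A stable, well-conditioned regression such as the one below barely moves; the paper's finding is that the abortion-crime regressions do not behave this way.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)
X = np.column_stack([np.ones(n), x])

def fit(Xm, ym):
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

base = fit(X, y)

# Re-estimate on many slightly perturbed copies of the data
# (uniform noise at 1% of each variable's scale).
slopes = []
for _ in range(200):
    dx = rng.uniform(-0.01, 0.01, n) * x.std()
    dy = rng.uniform(-0.01, 0.01, n) * y.std()
    slopes.append(fit(np.column_stack([np.ones(n), x + dx]), y + dy)[1])
slopes = np.array(slopes)
print(base[1], slopes.std())  # a stable estimate shows a tiny spread
```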

  6. How do horizontal, frictional discontinuities affect reverse fault-propagation folding?

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-09-01

    The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, in affecting the development of reverse faults and of the associated folds using physical scaled models. We perform analog modeling introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case, and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and the propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of the tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have direct impact on future kinematic models, especially those aimed to reconstruct the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.

  7. Radiologic assessment of third molar tooth and spheno-occipital synchondrosis for age estimation: a multiple regression analysis study.

    Science.gov (United States)

    Demirturk Kocasarac, Husniye; Sinanoglu, Alper; Noujeim, Marcel; Helvacioglu Yigit, Dilek; Baydemir, Canan

    2016-05-01

    For forensic age estimation, radiographic assessment of third molar mineralization is important between 14 and 21 years of age, which coincides with the legal age threshold in most countries. The spheno-occipital synchondrosis (SOS) is an important growth site during development, and its use for age estimation is beneficial when combined with other markers. In this study, we aimed to develop a regression model to estimate and narrow the age range based on radiologic assessment of the third molars and the SOS in a Turkish subpopulation. Panoramic radiographs and cone beam CT scans of 349 subjects (182 males, 167 females) aged between 8 and 25 were evaluated. A four-stage system was used to evaluate the fusion degree of the SOS, and Demirjian's eight stages of development were used for third molar calcification. The Pearson correlation indicated a strong positive relationship between age and third molar calcification for both sexes (r = 0.850 for females, r = 0.839 for males; P < 0.001) and also between age and SOS fusion for females (r = 0.814), whereas a moderate relationship was found for males (r = 0.599; P < 0.001). Based on the results obtained, an age determination formula using these scores was established.

  8. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows to construct a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis to compare the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were firstly trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
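
    A minimal MCP sketch using scikit-learn's SVR on simulated wind data (sample sizes, parameters and the density model are all illustrative): train on a concurrent period, predict target-site speeds for the long-term period, then convert to wind power density, WPD = 0.5 * rho * v^3.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 1000
# Reference-station wind speed; target-site speed correlated with it.
v_ref = rng.weibull(2.0, n) * 8.0
v_tgt = 0.9 * v_ref + rng.normal(0, 0.8, n)
rho = 1.18 + rng.normal(0, 0.02, n)  # air density (kg/m3)

# Measure-correlate-predict: train SVR on the concurrent period...
svr = SVR(C=10.0, epsilon=0.1).fit(v_ref[:800, None], v_tgt[:800])
# ...then predict target-site speeds for the held-out period.
v_hat = svr.predict(v_ref[800:, None])

# Wind power density: WPD = 0.5 * rho * v^3 (W/m2).
wpd_true = 0.5 * rho[800:] * v_tgt[800:] ** 3
wpd_hat = 0.5 * rho[800:] * v_hat ** 3
print(np.mean(wpd_true), np.mean(wpd_hat))
```

Note the paper's own caveat: because WPD depends on v cubed, a model trained to minimise wind-speed error does not automatically minimise WPD error.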

  9. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the
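
    The Cox model quantifies such group differences by maximizing a partial likelihood over risk sets; a minimal sketch on toy data (one binary covariate, no tied event times) using scipy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data: follow-up time, event indicator (1 = event), binary covariate.
time  = np.array([5.0, 8.0, 12.0, 14.0, 20.0, 21.0, 25.0, 30.0])
event = np.array([1,   1,   1,    0,    1,    1,    0,    1])
x     = np.array([1.0, 1.0, 0.0,  1.0,  0.0,  1.0,  0.0,  0.0])

def neg_log_partial_lik(beta):
    """Negative Cox log partial likelihood (no tied event times)."""
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk = time >= time[i]  # risk set at the i-th event time
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[risk])))
    return -ll

res = minimize_scalar(neg_log_partial_lik)
hr = np.exp(res.x)  # hazard ratio for x = 1 vs x = 0
print(res.x, hr)
```

Since the x = 1 subjects tend to fail earlier here, the estimated hazard ratio exceeds 1; in practice a library such as lifelines or R's survival package would also handle ties, confounders and standard errors.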

  10. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    Science.gov (United States)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero-hurdle model, models whether the dependent variable is zero; the second part, a truncated negative binomial model, models the non-zero (positive integer) values. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Maximum Likelihood Estimation (MLE) is used for parameter estimation. Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data that contain zero values in some observations and various other values. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of infant health care coverage and neonatal visits.

  11. Estimation of Genetic Parameters for First Lactation Monthly Test-day Milk Yields using Random Regression Test Day Model in Karan Fries Cattle

    Directory of Open Access Journals (Sweden)

    Ajay Singh

    2016-06-01

    Full Text Available A single-trait linear mixed random regression test-day model was applied for the first time to analyze first-lactation monthly test-day milk yield records in Karan Fries cattle. The test-day milk yield data were modeled using a random regression model (RRM) considering different orders of Legendre polynomials for the additive genetic effect (4th order) and the permanent environmental effect (5th order). Data pertaining to 1,583 lactation records spread over a period of 30 years were recorded and analyzed in the study. The variance components, heritability and genetic correlations among test-day milk yields were estimated using the RRM. RRM heritability estimates of test-day milk yield varied from 0.11 to 0.22 across test-day records. The estimates of genetic correlations between different test-day milk yields ranged from 0.01 (between test-day 1 [TD-1] and TD-11) to 0.99 (between TD-4 and TD-5). The magnitude of the genetic correlations between test-day milk yields decreased as the interval between test-days increased, and adjacent test-days had higher correlations. Additive genetic and permanent environment variances were higher for test-day milk yields at both ends of lactation. The residual variance was observed to be lower than the permanent environment variance for all test-day milk yields.
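
    The Legendre-polynomial machinery behind such a random regression model can be sketched with numpy; the orders match the abstract, but the coefficients below are illustrative, not the study's estimates.

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk standardized to [-1, 1], the domain of Legendre polynomials.
dim = np.linspace(5, 305, 10)  # 10 monthly test days
t = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1

def legendre_basis(t, order):
    """Columns P_0(t) .. P_order(t), via numpy's Legendre evaluation."""
    return np.column_stack(
        [legendre.legval(t, np.eye(order + 1)[k]) for k in range(order + 1)])

Z = legendre_basis(t, 4)  # 4th-order basis, as used for the additive genetic effect
print(Z.shape)            # (10, 5): 10 test days, 5 basis functions

# A lactation curve as a linear combination of the basis functions
# (illustrative coefficients, not estimated values).
coef = np.array([20.0, -3.0, -1.5, 0.5, 0.2])
curve = Z @ coef
print(curve)
```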

  12. Father's Labour Migration and Children's School Discontinuation in Rural Mozambique.

    Science.gov (United States)

    Yabiku, Scott T; Agadjanian, Victor

    2017-08-01

    We examine how the discontinuation of schooling among left-behind children is related to multiple dimensions of male labor migration: the accumulation of migration experience, the timing of these migration experiences in the child's life course, and the economic success of the migration. Our setting is rural southern Mozambique, an impoverished area with massive male labor out-migration. Results show that fathers' economically successful labor migration is more beneficial for children's schooling than unsuccessful migration or non-migration. There are large differences, however, by gender: compared to sons of non-migrants, sons of migrant fathers (regardless of migration success) have lower rates of school discontinuation, while daughters of migrant fathers have rates of school discontinuation no different than daughters of non-migrants. Furthermore, accumulated labor migration across the child's life course is beneficial for boys' schooling, but not girls'. Remittances sent in the past year reduce the rate of discontinuation for sons, but not daughters.

  13. Discontinued Information and Communication Technology Usage among Older Adults in Continuing Care Retirement Communities in the United States.

    Science.gov (United States)

    Rikard, R V; Berkowsky, Ronald W; Cotten, Shelia R

    2018-01-01

    Older adults are increasingly using information and communication technologies (ICTs). Recent studies show beneficial effects of using ICTs for older adults, particularly in terms of reducing loneliness and depression. However, little is known about the factors that may prevent discontinued ICT use in populations that may be at greater risk, such as those in continuing care retirement communities (CCRCs). The purpose of this study is to examine a range of factors that may influence discontinued (1) ICT use, (2) searching for health information, and (3) searching for general information over time among CCRC residents. We use longitudinal data from a randomized controlled trial conducted with residents of 19 CCRCs. We use flexible parametric models to estimate the hazard ratio or hazard rate over 5 waves of data to determine what factors significantly predict discontinued (1) ICT use, (2) health information searching, and (3) general information searching. The analysis reveals that independent living residents who took part in an 8-week ICT training intervention were less likely to stop using ICTs. Age and the number of instrumental activities of daily living (IADL) impairments significantly predicted an increased likelihood of stopping ICT use. When examining specific ICT-related activities, the analysis reveals that independent living residents who took part in the ICT training intervention were less likely to stop searching for health information and general information online. In addition, age and the number of IADL impairments were associated with increased likelihood of discontinued health information searches and discontinued general information searches. ICT training interventions may motivate residents of CCRCs to stay connected by increasing the ICT skill level and promoting confidence, thus decreasing the probability that they will discontinue using ICTs and searching for general information. 
However, the effects of ICT training on motivating continued ICT

  14. A Model for Shovel Capital Cost Estimation, Using a Hybrid Model of Multivariate Regression and Neural Networks

    Directory of Open Access Journals (Sweden)

    Abdolreza Yazdani-Chamzini

    2017-12-01

    Full Text Available Cost estimation is an essential issue in feasibility studies in civil engineering. Many different methods can be applied to modelling costs. These methods can be divided into several main groups: (1) artificial intelligence, (2) statistical methods, and (3) analytical methods. In this paper, the multivariate regression (MVR) method, which is one of the most popular linear models, and the artificial neural network (ANN) method, which is widely applied to solving different prediction problems with a high degree of accuracy, have been combined to provide a cost estimate model for a shovel machine. This hybrid methodology is proposed to take advantage of the MVR and ANN models in linear and nonlinear modelling, respectively. In the proposed model, the unique advantages of the MVR model in linear modelling are used first to recognize the existing linear structure in the data, and then the ANN is applied to determine the nonlinear patterns in the preprocessed data. The results, with three indices, indicate that the proposed model is efficient and capable of increasing the prediction accuracy.
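
    A sketch of the two-stage hybrid on simulated data (the paper's actual predictors, network architecture and cost data are not reproduced): a linear MVR stage first, then an ANN fitted to its residuals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
n = 600
X = rng.uniform(-2, 2, (n, 2))
# A "cost" with a linear structure plus a nonlinear pattern.
y = (3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1]
     + np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, n))

# Stage 1: multivariate regression captures the linear structure.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
resid = y - A @ beta

# Stage 2: an ANN models the nonlinear pattern left in the residuals.
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X, resid)
y_hat = A @ beta + ann.predict(X)

rmse_lin = np.sqrt(np.mean(resid ** 2))
rmse_hyb = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse_lin, rmse_hyb)  # the hybrid cuts the in-sample error
```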

  15. General properties of solutions to inhomogeneous Black-Scholes equations with discontinuous maturity payoffs

    Science.gov (United States)

    O, Hyong-Chol; Jo, Jong-Jun; Kim, Ji-Sok

    2016-02-01

    We provide representations of solutions to terminal value problems of inhomogeneous Black-Scholes equations and study such general properties as min-max estimates, gradient estimates, monotonicity and convexity of the solutions with respect to the stock price variable, which are important for financial security pricing. In particular, we focus on finding representation of the gradient (with respect to the stock price variable) of solutions to the terminal value problems with discontinuous terminal payoffs or inhomogeneous terms. Such terminal value problems are often encountered in pricing problems of compound-like options such as Bermudan options or defaultable bonds with discrete default barrier, default intensity and endogenous default recovery. Our results can be used in pricing real defaultable bonds under consideration of existence of discrete coupons or taxes on coupons.
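
    A concrete instance of a discontinuous maturity payoff is the cash-or-nothing call, which pays 1 if S_T > K; under the standard (homogeneous) Black-Scholes model its price has a closed form that a Monte Carlo run reproduces. The parameters below are illustrative.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

# Cash-or-nothing call: pays 1 if S_T > K -- a discontinuous payoff at maturity.
S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
d2 = (log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
price_cf = exp(-r * T) * NormalDist().cdf(d2)  # closed form: e^{-rT} N(d2)

# Monte Carlo under the risk-neutral measure.
rng = np.random.default_rng(9)
z = rng.standard_normal(400_000)
s_t = S * np.exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
price_mc = exp(-r * T) * np.mean(s_t > K)
print(price_cf, price_mc)
```

The payoff discontinuity at S_T = K is exactly what makes the gradient (delta) of such prices delicate near maturity, which is the regime the paper's gradient representations address.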

  16. Discontinuance of ADHD Treatment in Adolescents

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2009-04-01

    Full Text Available Prevalence of ADHD drug discontinuance in adolescents and young adults was studied in the UK by using the General Practice Database for patients aged 15-21 years from 1999 to 2006.

  17. Factors associated with β-blocker initiation and discontinuation in a population-based cohort of seniors newly diagnosed with heart failure

    Directory of Open Access Journals (Sweden)

    Girouard C

    2016-09-01

    Full Text Available Catherine Girouard,1–3 Jean-Pierre Grégoire,1–3 Paul Poirier,2,4 Jocelyne Moisan1–3 1Chair on Adherence to Treatments, Université Laval, 2Faculty of Pharmacy, Université Laval, 3Population Health and Optimal Health Practices Research Unit, CHU de Québec Research Center, 4Quebec Heart and Lung Institute-Université Laval, Quebec City, QC, Canada Purpose: β-Blockers (bisoprolol, carvedilol, and metoprolol) are the cornerstone of heart failure (HF) management. The incidence rates of β-blocker initiation and discontinuation, and their associated factors, among seniors with a first HF diagnosis were assessed. Methods: A population-based inception cohort study that included all individuals aged ≥65 years with a first HF diagnosis in Quebec was conducted. β-Blocker initiation among 91,131 patients who were not using β-blockers at the time of HF diagnosis, and discontinuation among those who initiated a β-blocker after HF diagnosis, were assessed. Stepwise Cox regression analyses were used to calculate hazard ratios (HR) and to identify factors associated with β-blocker initiation and discontinuation. Results: After HF diagnosis, 32,989 (36.2%) individuals initiated a β-blocker. Of these, 15,408 (46.7%) discontinued their β-blocker during the follow-up. Individuals more likely to initiate a β-blocker were those diagnosed in a recent calendar year (2009: HR, 2.11; 95% confidence interval [CI], 2.00–2.23) and those diagnosed by a cardiologist (HR, 1.38; 95% CI, 1.34–1.42). Individuals less likely to initiate were those aged ≥90 years (HR, 0.65; 95% CI, 0.61–0.68) and those with chronic obstructive pulmonary disease (HR, 0.66; 95% CI, 0.64–0.68). Individuals more likely to discontinue were those with more than nine medical consultations (HR, 1.14; 95% CI, 1.10–1.18) and those with dementia (HR, 1.13; 95% CI, 1.01–1.27). Individuals less likely to discontinue were those diagnosed in a recent calendar year (2009: HR, 0.74; 95% CI, 0.65–0.82) and …
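
    The hazard ratios quoted above arise as HR = exp(β), where β maximizes the Cox partial likelihood. The following is a minimal one-covariate sketch on synthetic data (no censoring, no ties), not the study's stepwise multi-covariate analysis; the binary covariate and data-generating hazard are illustrative assumptions.

    ```python
    """Hazard ratio from a one-covariate Cox model, fit by Newton's method
    on the partial log-likelihood."""
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    x = rng.integers(0, 2, n).astype(float)   # e.g., exposure yes/no
    # Event times with hazard lambda0 * exp(beta_true * x), beta_true = ln 2,
    # i.e., a true hazard ratio of 2.
    t = rng.exponential(1.0 / np.exp(np.log(2.0) * x))

    def cox_beta(t, x, iters=25):
        """Newton iterations; assumes all subjects have events, no ties."""
        beta = 0.0
        for _ in range(iters):
            grad, hess = 0.0, 0.0
            for i in range(len(t)):
                at_risk = t >= t[i]            # risk set at event time t[i]
                w = np.exp(beta * x[at_risk])
                xbar = np.sum(w * x[at_risk]) / np.sum(w)
                x2bar = np.sum(w * x[at_risk] ** 2) / np.sum(w)
                grad += x[i] - xbar            # score contribution
                hess -= x2bar - xbar ** 2      # (negative) information
            beta -= grad / hess
        return beta

    hr = np.exp(cox_beta(t, x))
    print(hr)   # estimate of the true hazard ratio of 2
    ```

    With more covariates the score and information become vectors and matrices, but the estimate-then-exponentiate step that produces each reported HR is the same.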

  18. Asymptotic theory for regressions with smoothly changing parameters

    DEFF Research Database (Denmark)

    Hillebrand, Eric; Medeiros, Marcelo; Xu, Junyue

    2013-01-01

    We derive asymptotic properties of the quasi maximum likelihood estimator of smooth transition regressions when time is the transition variable. The consistency of the estimator and its asymptotic distribution are examined. It is shown that the estimator converges at the usual √T-rate and has an asymptotically normal distribution. Finite sample properties of the estimator are explored in simulations. We illustrate with an application to US inflation and output data.
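
    A minimal sketch of such a model: y_t = β₀ + β₁·G(t/T; γ, c) + ε_t with a logistic transition function G in scaled time. Under Gaussian errors the quasi maximum likelihood estimator coincides with nonlinear least squares, which is what is used below; all parameter values are illustrative.

    ```python
    """Smooth transition regression with time as the transition variable,
    estimated by nonlinear least squares via scipy's curve_fit."""
    import numpy as np
    from scipy.optimize import curve_fit

    T = 400
    s = np.arange(1, T + 1) / T                 # scaled time t/T

    def str_mean(s, b0, b1, gamma, c):
        """Mean function with a logistic transition centered at c."""
        G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
        return b0 + b1 * G

    rng = np.random.default_rng(2)
    y = str_mean(s, 1.0, 2.0, 20.0, 0.5) + rng.normal(0, 0.1, T)

    p0 = [0.0, 1.0, 10.0, 0.4]                  # starting values
    params, _ = curve_fit(str_mean, s, y, p0=p0, maxfev=10000)
    b0, b1, gamma, c = params
    print(b1, c)   # slope shift and transition midpoint (true: 2.0 and 0.5)
    ```

    The estimated midpoint c locates the regime change in scaled time; the √T convergence rate stated in the abstract refers to estimators of exactly such parameter vectors.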

  19. Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors

    Directory of Open Access Journals (Sweden)

    Xibin Zhang

    2016-04-01

    Full Text Available This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for the bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of the Australian All Ordinaries returns and to kernel density estimation of gross domestic product (GDP) growth rates among the Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.
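
    The mixed-regressor Nadaraya-Watson estimator itself can be sketched with a Gaussian kernel for the continuous variable and an Aitchison-Aitken kernel for the discrete one. The bandwidths h and lam are fixed constants here purely for illustration; the paper's contribution is precisely to estimate them by sampling from an approximate posterior.

    ```python
    """Nadaraya-Watson regression with one continuous and one binary
    regressor, using a product of Gaussian and Aitchison-Aitken kernels."""
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    xc = rng.uniform(0, 1, n)                   # continuous regressor
    xd = rng.integers(0, 2, n)                  # binary regressor
    y = xc + 2.0 * xd + rng.normal(0, 0.1, n)   # true mean: xc + 2*xd

    def nw_mixed(x0c, x0d, xc, xd, y, h=0.1, lam=0.05, ncat=2):
        """Product-kernel Nadaraya-Watson estimate at the point (x0c, x0d)."""
        kc = np.exp(-0.5 * ((xc - x0c) / h) ** 2)   # Gaussian kernel
        # Aitchison-Aitken kernel: 1-lam on a category match, else lam/(c-1).
        kd = np.where(xd == x0d, 1.0 - lam, lam / (ncat - 1))
        w = kc * kd
        return np.sum(w * y) / np.sum(w)

    m_hat = nw_mixed(0.5, 1, xc, xd, y)
    print(m_hat)   # true regression value at (0.5, 1) is 2.5
    ```

    As lam → 0 the discrete kernel reduces to an exact-match indicator, and as lam → (ncat−1)/ncat the discrete regressor is smoothed away entirely, which is why data-driven choice of lam matters.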

  20. 21 CFR 1301.52 - Termination of registration; transfer of registration; distribution upon discontinuance of business.

    Science.gov (United States)

    2010-04-01

    ... discontinues business or professional practice. Any registrant who ceases legal existence or discontinues... registration; distribution upon discontinuance of business. 1301.52 Section 1301.52 Food and Drugs DRUG... of registration; transfer of registration; distribution upon discontinuance of business. (a) Except...