WorldWideScience

Sample records for regression coefficient beta

  1. THE DETERMINATION OF BETA COEFFICIENTS OF PUBLICLY-HELD COMPANIES BY A REGRESSION MODEL AND AN APPLICATION ON PRIVATE FIRMS

    Directory of Open Access Journals (Sweden)

    METİN KAMİL ERCAN

    2013-06-01

    Full Text Available It is possible to determine the value of private companies by means of suggestions and assumptions derived from their financial statements. However, a serious problem arises in determining the equity costs of these private companies using the Capital Asset Pricing Model (CAPM), as their beta coefficients are unknown or unavailable. In this study, firstly, a regression model that represents the relationship between the beta coefficients and the financial statement variables of publicly-held companies will be developed. Then, this model will be tested and applied to private companies.
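
    For reference, the market-model regression that underlies a beta estimate can be sketched in a few lines. The sketch below regresses simulated stock returns on simulated market returns by ordinary least squares and reads beta off the slope; the return series and the "true" beta of 1.3 are placeholders, not data from the study.

    ```python
    import numpy as np

    # Minimal market-model sketch: beta is the OLS slope of stock returns on
    # market returns.  The return series below are simulated placeholders.
    rng = np.random.default_rng(0)
    market = rng.normal(0.005, 0.04, 120)                      # 120 months of market returns
    stock = 0.002 + 1.3 * market + rng.normal(0, 0.02, 120)    # assumed "true" beta = 1.3

    X = np.column_stack([np.ones_like(market), market])
    alpha_hat, beta_hat = np.linalg.lstsq(X, stock, rcond=None)[0]
    print(f"estimated alpha = {alpha_hat:.4f}, estimated beta = {beta_hat:.3f}")
    ```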

  2. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
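
    As context for the abstract above, a minimal (non-boosted) beta regression can be fitted by maximum likelihood with a logit link for the mean and a constant precision. The sketch below does exactly that on simulated data; the boosting and variable-selection machinery described in the paper is not reproduced, and all parameter values are assumptions.

    ```python
    import numpy as np
    from scipy import optimize, stats

    # Simulated (0,1) outcome: the mean follows a logit-linear model in one covariate.
    rng = np.random.default_rng(1)
    n = 300
    x = rng.normal(size=n)
    mu_true = 1 / (1 + np.exp(-(-0.5 + 1.0 * x)))
    phi_true = 20.0
    y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

    X = np.column_stack([np.ones(n), x])

    def negloglik(params):
        b = params[:2]                      # mean-model coefficients (logit link)
        phi = np.exp(params[2])             # precision, kept positive via log scale
        mu = 1 / (1 + np.exp(-X @ b))
        return -np.sum(stats.beta.logpdf(y, mu * phi, (1 - mu) * phi))

    fit = optimize.minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
    b0, b1 = fit.x[:2]
    print(f"intercept = {b0:.2f}, slope = {b1:.2f}, phi = {np.exp(fit.x[2]):.1f}")
    ```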

  3. Systematic Risk on Istanbul Stock Exchange: Traditional Beta Coefficient Versus Downside Beta Coefficient

    Directory of Open Access Journals (Sweden)

    Gülfen TUNA

    2013-03-01

    Full Text Available The aim of this study is to test the validity of the Downside Capital Asset Pricing Model (D-CAPM) on the ISE. At the same time, the explanatory power of CAPM's traditional beta and D-CAPM's downside beta for changes in average return values is examined comparatively. In this context, monthly data for seventy-three stocks that were continuously traded on the ISE over the period 1991-2009 are used. Regression analysis is applied in this study. The research results show that D-CAPM is valid on the ISE. In addition, the downside beta coefficient has greater power than the traditional beta coefficient in explaining return changes. Therefore, it can be said that the downside beta is superior to the traditional beta on the ISE for the chosen period.
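
    One common downside-beta estimator (in the Estrada style, built from below-mean deviations only) can be computed as sketched below on simulated returns; the exact estimator and data used in the study may differ.

    ```python
    import numpy as np

    # Downside beta from below-mean deviations, compared with the traditional beta.
    # Simulated returns; the study's estimator and sample may differ.
    rng = np.random.default_rng(2)
    r_m = rng.normal(0.01, 0.06, 240)                    # market returns
    r_i = 0.002 + 1.1 * r_m + rng.normal(0, 0.03, 240)   # stock returns

    dm = np.minimum(r_m - r_m.mean(), 0.0)               # below-mean market deviations
    di = np.minimum(r_i - r_i.mean(), 0.0)               # below-mean stock deviations

    cov = np.cov(r_i, r_m)
    beta_trad = cov[0, 1] / cov[1, 1]
    beta_down = np.mean(di * dm) / np.mean(dm ** 2)
    print(f"traditional beta = {beta_trad:.3f}, downside beta = {beta_down:.3f}")
    ```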

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE).
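
    The regression correlation coefficient discussed above is, in its traditional form, the Pearson correlation between the observed outcome and the fitted mean E(Y|X). A minimal sketch for a Poisson regression, using simulated counts and statsmodels' GLM, is shown below; the authors' proposed modification is not reproduced.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Regression correlation coefficient: Pearson correlation between the observed
    # count Y and the fitted mean E(Y|X) from a Poisson GLM.  Simulated data.
    rng = np.random.default_rng(3)
    n = 500
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    mu = np.exp(0.3 + 0.5 * x1 - 0.4 * x2)
    y = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    r_reg = np.corrcoef(y, fit.fittedvalues)[0, 1]
    print(f"regression correlation coefficient = {r_reg:.3f}")
    ```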

  5. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
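
    Several constructions of standardized logistic coefficients exist; the sketch below shows only the simple partial standardization b_std = b * sd(x) on simulated data, and should not be read as the particular approach the article recommends.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Partially standardized logistic coefficients: b_std = b * sd(x).
    # This is only one of several constructions discussed in the literature.
    rng = np.random.default_rng(4)
    n = 1000
    X = rng.normal(size=(n, 3)) * [1.0, 5.0, 0.2]          # predictors on mixed scales
    eta = -0.5 + X @ np.array([0.8, 0.1, 3.0])
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    b_raw = fit.params[1:]                                  # drop the intercept
    b_std = b_raw * X.std(axis=0, ddof=1)
    print("raw coefficients:         ", np.round(b_raw, 3))
    print("standardized coefficients:", np.round(b_std, 3))
    ```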

  6. Beta-binomial regression and bimodal utilization.

    Science.gov (United States)

    Liu, Chuan-Fen; Burgess, James F; Manning, Willard G; Maciejewski, Matthew L

    2013-10-01

    To illustrate how the analysis of bimodal U-shaped distributed utilization can be modeled with beta-binomial regression, which is rarely used in health services research. Veterans Affairs (VA) administrative data and Medicare claims in 2001-2004 for 11,123 Medicare-eligible VA primary care users in 2000. We compared means and distributions of VA reliance (the proportion of all VA/Medicare primary care visits occurring in VA) predicted from beta-binomial, binomial, and ordinary least-squares (OLS) models. Beta-binomial model fits the bimodal distribution of VA reliance better than binomial and OLS models due to the nondependence on normality and the greater flexibility in shape parameters. Increased awareness of beta-binomial regression may help analysts apply appropriate methods to outcomes with bimodal or U-shaped distributions. © Health Research and Educational Trust.
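
    A bare-bones beta-binomial regression can be fitted by maximizing the beta-binomial likelihood with a logit-linear mean, as sketched below on simulated visit counts; the variable names and parameter values are illustrative assumptions, not the study's specification.

    ```python
    import numpy as np
    from scipy import optimize, stats

    # Beta-binomial regression sketch: y "VA visits" out of m total visits, with a
    # logit-linear mean and an over-dispersion (precision) parameter phi.
    rng = np.random.default_rng(5)
    n = 400
    x = rng.normal(size=n)
    m = rng.integers(5, 30, size=n)                          # total visits per patient
    mu = 1 / (1 + np.exp(-(-0.3 + 1.2 * x)))
    phi = 8.0
    y = rng.binomial(m, rng.beta(mu * phi, (1 - mu) * phi))  # over-dispersed counts

    X = np.column_stack([np.ones(n), x])

    def negloglik(params):
        b, log_phi = params[:2], params[2]
        mu = 1 / (1 + np.exp(-X @ b))
        phi = np.exp(log_phi)
        return -np.sum(stats.betabinom.logpmf(y, m, mu * phi, (1 - mu) * phi))

    fit = optimize.minimize(negloglik, np.zeros(3), method="Nelder-Mead")
    print("coefficients:", np.round(fit.x[:2], 2), "phi:", round(np.exp(fit.x[2]), 1))
    ```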

  7. Determination of beta attenuation coefficients by means of timing method

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

    Highlights: ► Beta attenuation coefficients of absorber materials were found in this study. ► For this process, a new method (timing method) was suggested. ► The obtained beta attenuation coefficients were compatible with the results from the traditional one. ► The timing method can be used to determine beta attenuation coefficient. - Abstract: Using a counting system with plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of timing method. To show the accuracy and reliability of the obtained results through this method, the coefficients were also found via conventional energy method. Obtained beta attenuation coefficients from both methods were compared with each other and the literature values. Beta attenuation coefficients obtained through timing method were found to be compatible with the values obtained from conventional energy method and the literature.
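
    The conventional energy (transmission) approach mentioned above fits the attenuation law I = I0·exp(-μx), so the linear attenuation coefficient is the negative slope of ln(I) against absorber thickness. A sketch with illustrative (not measured) counts, assuming an aluminium absorber for the mass coefficient:

    ```python
    import numpy as np

    # Conventional attenuation fit: I = I0 * exp(-mu * x), so ln(I) is linear in x.
    # Simulated transmission counts for increasing absorber thickness (cm).
    thickness = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
    counts = np.array([10000, 7410, 5490, 4070, 3015, 2230])   # illustrative only

    slope, intercept = np.polyfit(thickness, np.log(counts), 1)
    mu_linear = -slope                      # linear attenuation coefficient (1/cm)
    density = 2.70                          # assumed aluminium density, g/cm^3
    mu_mass = mu_linear / density           # mass attenuation coefficient (cm^2/g)
    print(f"linear mu = {mu_linear:.2f} 1/cm, mass mu = {mu_mass:.3f} cm^2/g")
    ```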

  8. On the Occurrence of Standardized Regression Coefficients Greater than One.

    Science.gov (United States)

    Deegan, John, Jr.

    1978-01-01

    It is demonstrated here that standardized regression coefficients greater than one can legitimately occur. Furthermore, the relationship between the occurrence of such coefficients and the extent of multicollinearity present among the set of predictor variables in an equation is examined. Comments on the interpretation of these coefficients are…

  9. Using the Coefficient of Determination R² to Test the Significance of Multiple Linear Regression

    Science.gov (United States)

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
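
    The connection to beta sampling can be made explicit: under the null hypothesis of no linear relationship, R² follows a Beta(k/2, (n-k-1)/2) distribution, which yields the same p-value as the usual overall F-test. A sketch on simulated data, assuming k predictors and n observations:

    ```python
    import numpy as np
    from scipy import stats

    # Overall significance of a multiple linear regression expressed through R^2:
    # F = (R^2 / k) / ((1 - R^2) / (n - k - 1)); under H0, R^2 ~ Beta(k/2, (n-k-1)/2).
    rng = np.random.default_rng(6)
    n, k = 80, 3
    X = rng.normal(size=(n, k))
    y = 0.5 * X[:, 0] + rng.normal(size=n)

    Xd = np.column_stack([np.ones(n), X])
    beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta_hat
    r2 = 1 - resid.var() / y.var()

    F = (r2 / k) / ((1 - r2) / (n - k - 1))
    p_from_F = stats.f.sf(F, k, n - k - 1)
    p_from_beta = stats.beta.sf(r2, k / 2, (n - k - 1) / 2)
    print(f"R^2 = {r2:.3f}, F p-value = {p_from_F:.4f}, Beta p-value = {p_from_beta:.4f}")
    ```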

  10. Augmented Beta rectangular regression models: A Bayesian perspective.

    Science.gov (United States)

    Wang, Jue; Luo, Sheng

    2016-01-01

    Mixed effects Beta regression models based on Beta distributions have been widely used to analyze longitudinal percentage or proportional data ranging between zero and one. However, Beta distributions are not flexible enough to accommodate extreme outliers or an excess of observations in the tail areas, and they do not account for the presence of the boundary values zero and one because these values are not in the support of the Beta distributions. To address these issues, we propose a mixed effects model using the Beta rectangular distribution and augment it with the probabilities of zero and one. We conduct extensive simulation studies to assess the performance of mixed effects models based on both the Beta and Beta rectangular distributions under various scenarios. The simulation studies suggest that the regression models based on Beta rectangular distributions improve the accuracy of parameter estimates in the presence of outliers and heavy tails. The proposed models are applied to the motivating Neuroprotection Exploratory Trials in Parkinson's Disease (PD) Long-term Study-1 (LS-1 study, n = 1741), developed by The National Institute of Neurological Disorders and Stroke Exploratory Trials in Parkinson's Disease (NINDS NET-PD) network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Regression Models for Predicting Force Coefficients of Aerofoils

    Directory of Open Access Journals (Sweden)

    Mohammed ABDUL AKBAR

    2015-09-01

    Full Text Available Renewable sources of energy are attractive and advantageous in a lot of different ways. Among the renewable energy sources, wind energy is the fastest growing type. Among wind energy converters, vertical axis wind turbines (VAWTs) have received renewed interest in the past decade due to some of the advantages they possess over their horizontal axis counterparts. VAWTs have evolved into complex 3-D shapes. A key component in predicting the output of VAWTs through analytical studies is obtaining the values of lift and drag coefficients, which are functions of the shape of the aerofoil, the angle of attack of the wind and the Reynolds number of the flow. Sandia National Laboratories have carried out extensive experiments on aerofoils for Reynolds numbers in the range of those experienced by VAWTs. The volume of experimental data thus obtained is huge. The current paper discusses three regression models developed so that lift and drag coefficients can be found using simple formulas without having to deal with the bulk of the data. Drag and lift coefficients were successfully estimated by the regression models, with R2 values as high as 0.98.

  12. Overcoming multicollinearity in multiple regression using correlation coefficient

    Science.gov (United States)

    Zainodin, H. J.; Yap, S. J.

    2013-09-01

    Multicollinearity happens when there are high correlations among independent variables. In this case, it would be difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Besides, the problem of multicollinearity also violates an assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach is introduced to overcome the multicollinearity problem and ultimately achieve a well-represented model. This approach is accomplished by removing the multicollinearity source variables on the basis of the correlation coefficient values in the full correlation matrix. Using the full correlation matrix facilitates the implementation of an Excel function to remove the multicollinearity source variables. It is found that this procedure is easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of all possible models. Hence, in this paper detailed insight into the procedure is shown, compared and implemented.
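
    A minimal sketch of the idea, dropping one variable from each highly correlated pair on the basis of the full correlation matrix, is shown below; the 0.9 threshold and the greedy drop-the-later-variable rule are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def drop_collinear(X, names, threshold=0.9):
        """Greedily drop one variable from each pair whose absolute pairwise
        correlation exceeds the threshold (the 0.9 value is an assumption)."""
        corr = np.abs(np.corrcoef(X, rowvar=False))
        keep = list(range(X.shape[1]))
        dropped = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if i in keep and j in keep and corr[i, j] > threshold:
                    keep.remove(j)              # drop the later variable of the pair
                    dropped.append(names[j])
        return X[:, keep], [names[i] for i in keep], dropped

    rng = np.random.default_rng(7)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=0.05, size=200)   # nearly collinear with x1
    x3 = rng.normal(size=200)
    X = np.column_stack([x1, x2, x3])
    _, kept, dropped = drop_collinear(X, ["x1", "x2", "x3"])
    print("kept:", kept, "dropped:", dropped)
    ```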

  13. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  14. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
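
    A compressed version of the two-sample Monte Carlo idea is sketched below: beta-distributed outcomes are simulated in two groups and the mean difference is estimated by (i) the difference in sample means (the OLS estimate in this design) and (ii) per-group beta maximum likelihood. The parameter values and the reduced number of replications are assumptions for illustration, not the paper's full design.

    ```python
    import numpy as np
    from scipy import optimize, stats
    from scipy.special import expit

    # Two-sample Monte Carlo sketch: compare the difference-in-means estimator
    # (equivalent to OLS with a group dummy) to a per-group beta MLE of the mean.
    rng = np.random.default_rng(8)
    mu0, mu1, phi, n, n_rep = 0.40, 0.50, 5.0, 25, 200   # n_rep kept small for speed

    def beta_mle_mean(y):
        """MLE of the mean of a beta sample, logit/log parameterisation."""
        def nll(p):
            mu, prec = expit(p[0]), np.exp(p[1])
            return -np.sum(stats.beta.logpdf(y, mu * prec, (1 - mu) * prec))
        return expit(optimize.minimize(nll, [0.0, 1.0], method="Nelder-Mead").x[0])

    ols_diffs, beta_diffs = [], []
    for _ in range(n_rep):
        y0 = rng.beta(mu0 * phi, (1 - mu0) * phi, n)
        y1 = rng.beta(mu1 * phi, (1 - mu1) * phi, n)
        ols_diffs.append(y1.mean() - y0.mean())
        beta_diffs.append(beta_mle_mean(y1) - beta_mle_mean(y0))

    true_diff = mu1 - mu0
    print(f"OLS  bias = {np.mean(ols_diffs) - true_diff:+.4f}")
    print(f"Beta bias = {np.mean(beta_diffs) - true_diff:+.4f}")
    ```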

  15. Bias in regression coefficient estimates upon different treatments of ...

    African Journals Online (AJOL)

    MS and PW consistently overestimated the population parameter. EM and RI, on the other hand, tended to consistently underestimate the population parameter under non-monotonic pattern. Keywords: Missing data, bias, regression, percent missing, non-normality, missing pattern > East African Journal of Statistics Vol.

  16. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  17. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution.

    Science.gov (United States)

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip

    2018-06-01

    Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Regulation of the friction coefficient of articular cartilage by TGF-beta1 and IL-1beta.

    Science.gov (United States)

    DuRaine, Grayson; Neu, Corey P; Chan, Stephanie M T; Komvopoulos, Kyriakos; June, Ronald K; Reddi, A Hari

    2009-02-01

    Articular cartilage functions to provide a low-friction surface for joint movement for many decades of life. Superficial zone protein (SZP) is a glycoprotein secreted by chondrocytes in the superficial layer of articular cartilage that contributes to effective boundary lubrication. In both cell and explant cultures, TGF-beta1 and IL-1beta have been demonstrated to, respectively, upregulate and downregulate SZP protein levels. It was hypothesized that the friction coefficient of articular cartilage could also be modulated by these cytokines through SZP regulation. The friction coefficient between cartilage explants (both untreated and treated with TGF-beta1 or IL-1beta) and a smooth glass surface due to sliding in the boundary lubrication regime was measured with a pin-on-disk tribometer. SZP was quantified using an enzyme-linked immunosorbant assay and localized by immunohistochemistry. Both TGF-beta1 and IL-1beta treatments resulted in the decrease of the friction coefficient of articular cartilage in a location- and time-dependent manner. Changes in the friction coefficient due to the TGF-beta1 treatment corresponded to increased depth of SZP staining within the superficial zone, while friction coefficient changes due to the IL-1beta treatment were independent of SZP depth of staining. However, the changes induced by the IL-1beta treatment corresponded to changes in surface roughness, determined from the analysis of surface images obtained with an atomic force microscope. These findings demonstrate that the low friction of articular cartilage can be modified by TGF-beta1 and IL-1beta treatment and that the friction coefficient depends on multiple factors, including SZP localization and surface roughness.

  19. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, HazimMansoor; Mahdi, FatimahAssim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near exact linear relationships among the explanatory variables are present. For this situation we employ the data obtained from the tagi gas filling company during the period (2008-2010). The main result we reached is that the method based on the condition number performs better than the other methods since it has a smaller mean square error (MSE) than the other stated methods.
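
    For reference, the ordinary ridge estimator the abstract compares is beta(k) = (X'X + kI)^(-1) X'y on standardized predictors. The sketch below evaluates it for a few ridge constants on simulated near-collinear data; the condition-number-based choice of k favoured by the article is not reproduced.

    ```python
    import numpy as np

    # Ordinary ridge estimator on standardized predictors:
    #   beta(k) = (X'X + k I)^{-1} X'y
    rng = np.random.default_rng(9)
    n = 100
    z = rng.normal(size=n)
    X = np.column_stack([z + rng.normal(scale=0.1, size=n),   # near-collinear pair
                         z + rng.normal(scale=0.1, size=n),
                         rng.normal(size=n)])
    beta_true = np.array([1.0, 1.0, -0.5])
    y = X @ beta_true + rng.normal(size=n)

    Xs = (X - X.mean(0)) / X.std(0)                 # standardize predictors
    yc = y - y.mean()
    for k in [0.0, 0.1, 1.0, 10.0]:
        b = np.linalg.solve(Xs.T @ Xs + k * np.eye(3), Xs.T @ yc)
        print(f"k = {k:5.1f}  ridge coefficients = {np.round(b, 2)}")
    ```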

  20. Estimating nonlinear selection gradients using quadratic regression coefficients: double or nothing?

    Science.gov (United States)

    Stinchcombe, John R; Agrawal, Aneil F; Hohenlohe, Paul A; Arnold, Stevan J; Blows, Mark W

    2008-09-01

    The use of regression analysis has been instrumental in allowing evolutionary biologists to estimate the strength and mode of natural selection. Although directional and correlational selection gradients are equal to their corresponding regression coefficients, quadratic regression coefficients must be doubled to estimate stabilizing/disruptive selection gradients. Based on a sample of 33 papers published in Evolution between 2002 and 2007, at least 78% of papers have not doubled quadratic regression coefficients, leading to an appreciable underestimate of the strength of stabilizing and disruptive selection. Proper treatment of quadratic regression coefficients is necessary for estimation of fitness surfaces and contour plots, canonical analysis of the gamma matrix, and modeling the evolution of populations on an adaptive landscape.
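
    The correction the paper calls for is simply gamma = 2 × (quadratic regression coefficient). A sketch on simulated trait and relative-fitness data, with an assumed Gaussian fitness function:

    ```python
    import numpy as np

    # Quadratic selection gradient: gamma = 2 * (quadratic regression coefficient).
    # Simulated stabilizing selection on a standardized trait z.
    rng = np.random.default_rng(10)
    z = rng.normal(size=500)                                   # standardized trait
    w = np.exp(-0.15 * z**2) + rng.normal(scale=0.05, size=500)  # fitness (assumed form)
    w = w / w.mean()                                           # relative fitness

    X = np.column_stack([np.ones_like(z), z, z**2])
    b = np.linalg.lstsq(X, w, rcond=None)[0]
    beta_dir = b[1]              # directional gradient = linear coefficient
    gamma = 2.0 * b[2]           # stabilizing/disruptive gradient = doubled quadratic coef.
    print(f"quadratic coefficient = {b[2]:.3f}, selection gradient gamma = {gamma:.3f}")
    ```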

  1. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  2. Refining Our Understanding of Beta through Quantile Regressions

    Directory of Open Access Journals (Sweden)

    Allen B. Atkins

    2014-05-01

    Full Text Available The Capital Asset Pricing Model (CAPM) has been a key theory in financial economics since the 1960s. One of its main contributions is to attempt to identify how the risk of a particular stock is related to the risk of the overall stock market using the risk measure Beta. If the relationship between an individual stock’s returns and the returns of the market exhibits heteroskedasticity, then the estimates of Beta for different quantiles of the relationship can be quite different. The behavioral ideas first proposed by Kahneman and Tversky (1979), which they called prospect theory, postulate that: (i) people exhibit “loss-aversion” in a gain frame; and (ii) people exhibit “risk-seeking” in a loss frame. If this is true, people could prefer lower Beta stocks after they have experienced a gain and higher Beta stocks after they have experienced a loss. Stocks that exhibit converging heteroskedasticity (22.2% of our sample) should be preferred by investors, and stocks that exhibit diverging heteroskedasticity (12.6% of our sample) should not be preferred. Investors may be able to benefit by choosing portfolios that are more closely aligned with their preferences.
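
    A minimal sketch of quantile-regression betas is given below, using statsmodels' QuantReg on simulated returns with diverging heteroskedasticity so that the slope differs across quantiles; the data-generating values are assumptions, not the study's sample.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Quantile-regression betas: slope of stock returns on market returns at
    # several quantiles.  The simulated noise spread grows with the market return
    # (diverging heteroskedasticity), so the betas differ across quantiles.
    rng = np.random.default_rng(11)
    n = 1000
    r_m = rng.normal(0.0, 0.04, n)
    noise_scale = 0.02 + 0.4 * (r_m - r_m.min())
    r_i = 1.0 * r_m + rng.normal(size=n) * noise_scale

    X = sm.add_constant(r_m)
    for q in (0.10, 0.50, 0.90):
        beta_q = sm.QuantReg(r_i, X).fit(q=q).params[1]
        print(f"beta at quantile {q:.2f}: {beta_q:.2f}")
    ```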

  3. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  4. Sintering equation: determination of its coefficients by experiments - using multiple regression

    International Nuclear Information System (INIS)

    Windelberg, D.

    1999-01-01

    Sintering is a method for volume-compression (or volume-contraction) of powdered or grained material by applying high temperature (less than the melting point of the material). Maekipirtti tried to find an equation which describes the process of sintering through its main parameters: sintering time, sintering temperature and volume contraction. Such an equation is called a sintering equation. It also contains some coefficients which characterise the behaviour of the material during the process of sintering. These coefficients have to be determined by experiments. Here we show that some linear regressions will produce wrong coefficients, but multiple regression results in a useful sintering equation. (orig.)

  5. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    Science.gov (United States)

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.

  6. Meta-analytical synthesis of regression coefficients under different categorization scheme of continuous covariates.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-11-30

    Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.

  7. The estimation of beta coefficient for shares quoted on the Belgrade Stock Exchange

    Directory of Open Access Journals (Sweden)

    Lastić Ljiljana

    2017-01-01

    Full Text Available The paper contains an estimate of the Beta coefficient for the shares listed on the Prime Listing of the Belgrade Stock Exchange. The companies chosen for analysis are "NIS j.s.c.", the "Nikola Tesla" Airport, "Energoprojekt holding" and "Sojaprotein". The main aim of this paper is to evaluate the Beta coefficients for the mentioned companies in order to make it easier for investors to make investment decisions. During the calculation of the Beta coefficients, quite similar results were obtained, which are later analyzed and presented. One of the reasons for the similar results is that the analysis was based on a single and incomplete financial market of the Republic of Serbia.

  8. Longitudinal beta regression models for analyzing health-related quality of life scores over time

    Directory of Open Access Journals (Sweden)

    Hunger Matthias

    2012-09-01

    Full Text Available Abstract Background Health-related quality of life (HRQL) has become an increasingly important outcome parameter in clinical trials and epidemiological research. HRQL scores are typically bounded at both ends of the scale and often highly skewed. Several regression techniques have been proposed to model such data in cross-sectional studies, however, methods applicable in longitudinal research are less well researched. This study examined the use of beta regression models for analyzing longitudinal HRQL data using two empirical examples with distributional features typically encountered in practice. Methods We used SF-6D utility data from a German older age cohort study and stroke-specific HRQL data from a randomized controlled trial. We described the conceptual differences between mixed and marginal beta regression models and compared both models to the commonly used linear mixed model in terms of overall fit and predictive accuracy. Results At any measurement time, the beta distribution fitted the SF-6D utility data and stroke-specific HRQL data better than the normal distribution. The mixed beta model showed better likelihood-based fit statistics than the linear mixed model and respected the boundedness of the outcome variable. However, it tended to underestimate the true mean at the upper part of the distribution. Adjusted group means from the marginal beta model and linear mixed model were nearly identical but differences could be observed with respect to standard errors. Conclusions Understanding the conceptual differences between mixed and marginal beta regression models is important for their proper use in the analysis of longitudinal HRQL data. Beta regression fits the typical distribution of HRQL data better than linear mixed models, however, if focus is on estimating group mean scores rather than making individual predictions, the two methods might not differ substantially.

  9. Perturbative two- and three-loop coefficients from large beta Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lepage, G.P.; Mackenzie, P.B.; Shakespeare, N.H.; Trottier, H.D

    2000-03-01

    Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large beta on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z_3 tunneling.

  10. SPSS and SAS programs for comparing Pearson correlations and OLS regression coefficients.

    Science.gov (United States)

    Weaver, Bruce; Wuensch, Karl L

    2013-09-01

    Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 - α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
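
    One of the most common summary-data tests in this family, the Fisher r-to-z comparison of two independent Pearson correlations, can be written directly from the correlations and sample sizes, as sketched below; it is shown as a generic example rather than a transcription of the authors' SPSS/SAS code.

    ```python
    import numpy as np
    from scipy import stats

    def compare_independent_correlations(r1, n1, r2, n2):
        """Fisher r-to-z test of H0: rho1 = rho2 for independent samples,
        using only summary data (the two correlations and sample sizes)."""
        z1, z2 = np.arctanh(r1), np.arctanh(r2)
        se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se
        p = 2 * stats.norm.sf(abs(z))
        return z, p

    # Illustrative example: r = .45 (n = 100) versus r = .25 (n = 120)
    z, p = compare_independent_correlations(0.45, 100, 0.25, 120)
    print(f"z = {z:.2f}, two-sided p = {p:.3f}")
    ```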

  11. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-06-01

    Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not been developed in a commensurate trend. One of the difficulties hindering the development is the disparity in sets of covariates among literature models. If the sets of covariates differ across models, interpretation of coefficients will differ, thereby making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often have problems because the covariance matrix of coefficients (i.e. within-study correlations) or individual patient data are not necessarily available. This study, therefore, proposes a brief explanation regarding a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. Especially, we also propose an approach to recover (at most) three correlations of covariates, which is required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Application of the Beta Coefficient in the Market of Direct residential Real Estate Investments

    Directory of Open Access Journals (Sweden)

    Wolski Rafał

    2014-07-01

    Full Text Available The beta coefficient is one of the most popular indices used in contemporary finances. Despite the fact that there are justified doubts connected with its application, it is currently difficult to imagine a situation in which the cost of capital would be calculated without the use of the CAPM model. Thus, an attempt at answering the question whether and to what degree beta may be used in the real estate market constitutes an interesting problem. This is because on the one hand, the formal structure suggests that beta should not be used for assets which are not included in the benchmark but, on the other hand, such a benchmark should, at least theoretically, contain all market assets. Therefore, a decision was made to have a closer look at this issue, with the analysis of the possibility of using the beta coefficient in the residential real estate market set as the objective. Using the database of prices in the direct real estate investment created by the NBP, a comparison was conducted with regard to features of undertaken investments on the basis of an analysis of systematic risk calculated using selected indices available on the Polish market.

  13. The performance of random coefficient regression in accounting for residual confounding.

    Science.gov (United States)

    Gustafson, Paul; Greenland, Sander

    2006-09-01

    Greenland (2000, Biometrics 56, 915-921) describes the use of random coefficient regression to adjust for residual confounding in a particular setting. We examine this setting further, giving theoretical and empirical results concerning the frequentist and Bayesian performance of random coefficient regression. Particularly, we compare estimators based on this adjustment for residual confounding to estimators based on the assumption of no residual confounding. This devolves to comparing an estimator from a nonidentified but more realistic model to an estimator from a less realistic but identified model. The approach described by Gustafson (2005, Statistical Science 20, 111-140) is used to quantify the performance of a Bayesian estimator arising from a nonidentified model. From both theoretical calculations and simulations we find support for the idea that superior performance can be obtained by replacing unrealistic identifying constraints with priors that allow modest departures from those constraints. In terms of point-estimator bias this superiority arises when the extent of residual confounding is substantial, but the advantage is much broader in terms of interval estimation. The benefit from modeling residual confounding is maintained when the prior distributions employed only roughly correspond to reality, for the standard identifying constraints are equivalent to priors that typically correspond much worse.

  14. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    Science.gov (United States)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA are extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution has been replaced with the multivariate Gaussian distribution for the vector data and residual term variables in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting the dependent variables for the covariates. When random assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, an extension has been made to the MANCOVA technique with a larger number of covariates, and homogeneity of regression coefficient vectors is also tested.

  15. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Towards molecular design using 2D-molecular contour maps obtained from PLS regression coefficients

    Science.gov (United States)

    Borges, Cleber N.; Barigye, Stephen J.; Freitas, Matheus P.

    2017-12-01

    The multivariate image analysis descriptors used in quantitative structure-activity relationships are direct representations of chemical structures as they are simply numerical decodifications of pixels forming the 2D chemical images. These MDs have found great utility in the modeling of diverse properties of organic molecules. Given the multicollinearity and high dimensionality of the data matrices generated with the MIA-QSAR approach, modeling techniques that involve the projection of the data space onto orthogonal components e.g. Partial Least Squares (PLS) have been generally used. However, the chemical interpretation of the PLS-based MIA-QSAR models, in terms of the structural moieties affecting the modeled bioactivity has not been straightforward. This work describes the 2D-contour maps based on the PLS regression coefficients, as a means of assessing the relevance of single MIA predictors to the response variable, and thus allowing for the structural, electronic and physicochemical interpretation of the MIA-QSAR models. A sample study to demonstrate the utility of the 2D-contour maps to design novel drug-like molecules is performed using a dataset of some anti-HIV-1 2-amino-6-arylsulfonylbenzonitriles and derivatives, and the inferences obtained are consistent with other reports in the literature. In addition, the different schemes for encoding atomic properties in molecules are discussed and evaluated.

  17. Varying coefficient subdistribution regression for left-truncated semi-competing risks data.

    Science.gov (United States)

    Li, Ruosha; Peng, Limin

    2014-10-01

    Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data, and moreover has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented and the resulting estimators are shown to have nice asymptotic properties. We also present inference, such as Kolmogorov-Smirnov type and Cramér-von Mises type hypothesis testing procedures, for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.

  18. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficient at each order of the series, we found that the correlation coefficient was mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonableness of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.

  19. Coefficient shifts in geographical ecology: an empirical evaluation of spatial and non-spatial regression

    DEFF Research Database (Denmark)

    Bini, L. M.; Diniz-Filho, J. A. F.; Rangel, T. F. L. V. B.

    2009-01-01

    A major focus of geographical ecology and macroecology is to understand the causes of spatially structured ecological patterns. However, achieving this understanding can be complicated when using multiple regression, because the relative importance of explanatory variables, as measured by regress...

  20. Application of inverse models and XRD analysis to the determination of Ti-17 beta-phase coefficients of thermal expansion

    Energy Technology Data Exchange (ETDEWEB)

    Freour, S. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)]. E-mail: freour@crttsn.univ-nantes.fr; Gloaguen, D. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France); Francois, M. [Laboratoire des Systemes Mecaniques et d' Ingenierie Simultanee (LASMIS FRE CNRS 2719), Universite de Technologie de Troyes, 12 Rue Marie Curie, BP 2060, 10010 Troyes (France); Guillen, R. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)

    2006-04-15

    The scope of this work is the determination of the coefficients of thermal expansion of the Ti-17 beta-phase. A rigorous inverse thermo-elastic self-consistent scale transition micro-mechanical model extended to multi-phase materials was used. The experimental data required for the application of the inverse method were obtained from both the available literature and especially dedicated X-ray diffraction lattice strain measurements performed on the studied (alpha + beta) two-phase titanium alloy.

  1. Diffusion coefficients of the ternary system (2-hydroxypropyl-beta-cyclodextrin + caffeine + water) at T = 298.15 K

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Ana C.F. [Department of Chemistry, University of Coimbra, 3004-535 Coimbra (Portugal)], E-mail: anacfrib@ci.uc.pt; Santos, Cecilia I.A.V. [Departamento de Quimica Fisica, Facultad de Farmacia, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain)], E-mail: cecilia.alves@uah.es; Lobo, Victor M.M. [Department of Chemistry, University of Coimbra, 3004-535 Coimbra (Portugal)], E-mail: vlobo@ci.uc.pt; Cabral, Ana M.T.D.P.V. [Faculty of Pharmacy, University of Coimbra, 3000-295 Coimbra (Portugal)], E-mail: acabral@ff.uc.pt; Veiga, Francisco J.B. [Faculty of Pharmacy, University of Coimbra, 3000-295 Coimbra (Portugal)], E-mail: fveiga@ci.uc.pt; Esteso, Miguel A. [Departamento de Quimica Fisica, Facultad de Farmacia, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain)], E-mail: miguel.esteso@uah.es

    2009-12-15

    Ternary mutual diffusion coefficients measured by the Taylor dispersion method (D11, D22, D12, and D21) are reported for aqueous solutions of 2-hydroxypropyl-beta-cyclodextrin (HP-beta-CD) + caffeine at T = 298.15 K at carrier concentrations from (0.000 to 0.010) mol·dm^-3 for each solute, respectively. These diffusion coefficients have been measured having in mind a better understanding of the structure of these systems and the thermodynamic behaviour of caffeine and 2-hydroxypropyl-beta-cyclodextrin in solution. For example, from these data it will be possible to estimate some parameters, such as the fractions of associated species of HP-beta-CD (X1) and caffeine (X2) in this complex, the monomer and dimer fractions, X2^M and X2^D, respectively, and the limiting diffusion coefficients of HP-beta-CD, D^0_HPBCD, of the dimeric caffeine entities, D^0_D, and of the (1:1) complexes, D^0_complex.

  2. Beta Coefficient and Market Share: Downloading and Processing Data from DIALOG to LOTUS 1-2-3.

    Science.gov (United States)

    Popovich, Charles J.

    This article briefly describes the topics "beta coefficient"--a measurement of the price volatility of a company's stock in relationship to the overall stock market--and "market share"--an average measurement for the overall stock market based on a specified group of stocks. It then selectively recommends a database (file) on…

  3. 'aspect' - a new spectrometer for the measurement of the angular correlation coefficient a in neutron beta decay

    CERN Document Server

    Zimmer, O; Grinten, M G D; Heil, W; Glück, F

    2000-01-01

    The combination of the coefficient a of the antineutrino/electron angular correlation with the beta asymmetry of the neutron provides a sensitive test for scalar and tensor contributions to the electroweak Lagrangian, as well as for right-handed currents. A method is given for measuring a with high sensitivity from the proton recoil spectrum. The method is based on a magnetic spectrometer with electrostatic retardation potentials such as used for searches of the neutrino mass in tritium beta decay. The spectrometer can also be used for similar studies using radioactive nuclei.

  4. Application of inverse models and XRD analysis to the determination of Ti-17 beta-phase Coefficients of Thermal Expansion

    OpenAIRE

    Fréour , Sylvain; Gloaguen , David; François , Marc; Guillén , Ronald

    2006-01-01

    International audience; The scope of this work is the determination of the coefficients of thermal expansion of the Ti-17 beta-phase. A rigorous inverse thermo-elastic self-consistent scale transition micro-mechanical model extended to multi-phase materials was used. The experimental data required for the application of the inverse method were obtained from both the available literature and especially dedicated X-ray diffraction lattice strain measurements performed on the studied (alpha + b...

  5. Misleading Betas: An Educational Example

    Science.gov (United States)

    Chong, James; Halcoussis, Dennis; Phillips, G. Michael

    2012-01-01

    The dual-beta model is a generalization of the CAPM model. In the dual-beta model, separate beta estimates are provided for up-market and down-market days. This paper uses the historical "Anscombe quartet" results which illustrated how very different datasets can produce the same regression coefficients to motivate a discussion of the…

  6. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks

    International Nuclear Information System (INIS)

    Elliott Campbell, J.; Moen, Jeremie C.; Ney, Richard A.; Schnoor, Jerald L.

    2008-01-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively. - Large differences in estimates of soil organic carbon stocks and annual changes in stocks for Wisconsin forestlands indicate a need for validation from forthcoming forest surveys

  7. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.

  8. Comparison of beta-binomial regression model approaches to analyze health-related quality of life data.

    Science.gov (United States)

    Najera-Zuloaga, Josu; Lee, Dae-Jin; Arostegui, Inmaculada

    2017-01-01

    Health-related quality of life has become an increasingly important indicator of health status in clinical trials and epidemiological research. Moreover, the study of the relationship of health-related quality of life with patient and disease characteristics has become one of the primary aims of many health-related quality of life studies. Health-related quality of life scores are usually assumed to be distributed as binomial random variables and are often highly skewed. The use of the beta-binomial distribution in the regression context has been proposed to model such data; however, beta-binomial regression has been performed by means of two different approaches in the literature: (i) the beta-binomial distribution with a logistic link; and (ii) hierarchical generalized linear models. None of the existing literature on the analysis of health-related quality of life survey data has compared both approaches in terms of adequacy and the interpretation of regression parameters. This paper is motivated by the analysis of a real data application of health-related quality of life outcomes in patients with Chronic Obstructive Pulmonary Disease, where the use of the two approaches yields contradictory results in terms of the significance of covariate effects and, consequently, the interpretation of the most relevant factors in health-related quality of life. We present an explanation of the results of both methodologies through a simulation study and address the need to apply the proper approach in the analysis of health-related quality of life survey data for practitioners, providing an R package.

  9. Investigation of Pear Drying Performance by Different Methods and Regression of Convective Heat Transfer Coefficient with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Mehmet Das

    2018-01-01

    Full Text Available In this study, an air heated solar collector (AHSC) dryer was designed to determine the drying characteristics of the pear. Flat pear slices of 10 mm thickness were used in the experiments. The pears were dried both in the AHSC dryer and under the sun. Panel glass temperature, panel floor temperature, panel inlet temperature, panel outlet temperature, drying cabinet inlet temperature, drying cabinet outlet temperature, drying cabinet temperature, drying cabinet moisture, solar radiation, pear internal temperature, air velocity and mass loss of pear were measured at 30 min intervals. Experiments were carried out during June 2017 in Elazig, Turkey. The experiments started at 8:00 a.m. and continued until 18:00, and were continued until the weight changes in the pear slices stopped. Wet basis moisture content (MCw), dry basis moisture content (MCd), adjustable moisture ratio (MR), drying rate (DR), and the convective heat transfer coefficient (hc) were calculated from the data of both the AHSC dryer and the open sun drying experiments. It was found that the hc values in both drying systems ranged between 12.4 and 20.8 W/m² °C. Three different kernel models were used in support vector machine (SVM) regression to construct predictive models of the calculated hc values for both systems. Mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE) and root relative absolute error (RRAE) analyses were performed to indicate the predictive models' accuracy. As a result, the drying rate of the pear was examined for both systems and it was observed that the pear dried earlier in the AHSC drying system. A predictive model was obtained using SVM regression for the calculated hc values of the pear in the AHSC drying system. The normalized polynomial kernel was determined to be the best kernel model in SVM for estimating the hc values.
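    The following sketch (not the authors' code) shows how such a kernel comparison might look with scikit-learn's SVR: three kernels are fitted to synthetic hc data and scored with MAE and RMSE. All variable names and values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform([20, 0.5], [60, 2.0], size=(40, 2))   # e.g. air temperature, air velocity
hc = 12.4 + 0.1 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.3, 40)  # synthetic hc values

for kernel in ("linear", "poly", "rbf"):              # candidate kernel models
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0))
    model.fit(X, hc)
    pred = model.predict(X)
    mae = mean_absolute_error(hc, pred)
    rmse = mean_squared_error(hc, pred) ** 0.5
    print(f"{kernel:6s}  MAE={mae:.3f}  RMSE={rmse:.3f}")
```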

  10. Zero-inflated beta regression model for leaf citrus canker incidence in orange genotypes grafted onto different rootstocks

    Directory of Open Access Journals (Sweden)

    Oilson Alberto Gonzatto Junior

    2017-06-01

    Full Text Available Data with excess zeros are frequently found in practice, and the recommended analysis is to use models that adequately account for the excess of zero observations. In this study, the Zero-Inflated Beta Regression Model (BeZI) was used on experimental data to describe the mean incidence of leaf citrus canker in orange groves under the influence of genotype and rootstock of origin. Based on the model, it was possible to quantify the odds that a zero observation for mean incidence comes from a particular plant according to genotype and rootstock, and to estimate the expected incidence for each combination. The Laranja Caipira rootstock proved to be the most resistant to leaf citrus canker, while Limão Cravo proved to be the most susceptible. The Ipiguá IAC, Arapongas, EEL and Olímpia genotypes have statistically equivalent chances.

  11. Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments

    Science.gov (United States)

    2014-01-01

    Background: Whole-genome bisulfite sequencing currently provides the highest-precision view of the epigenome, with quantitative information about populations of cells down to single nucleotide resolution. Several studies have demonstrated the value of this precision: meaningful features that correlate strongly with biological functions can be found associated with only a few CpG sites. Understanding the role of DNA methylation, and more broadly the role of DNA accessibility, requires that methylation differences between populations of cells are identified with extreme precision and in complex experimental designs. Results: In this work we investigated the use of beta-binomial regression as a general approach for modeling whole-genome bisulfite data to identify differentially methylated sites and genomic intervals. Conclusions: The regression-based analysis can handle medium- and large-scale experiments where it becomes critical to accurately model variation in methylation levels between replicates and account for influence of various experimental factors like cell types or batch effects. PMID:24962134

  12. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    Science.gov (United States)

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R²). For this purpose,…

  13. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION

    Directory of Open Access Journals (Sweden)

    Sergei Vladimirovich Varaksin

    2017-06-01

    Full Text Available Purpose. Construction of a mathematical model of the dynamics of change in childbearing in the Altai region in 2000–2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of the dynamics of childbearing using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using a built-in MATLAB algorithm. The method of fuzzy linear regression has not previously been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high effectiveness of the demographic policy of the regional and national leadership, and the applicability of the fuzzy linear regression method for sociological analysis.

  14. Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression

    International Nuclear Information System (INIS)

    Chilenski, M.A.; Greenwald, M.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R.; Marzouk, Y.

    2015-01-01

    The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts. (paper)
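    As a minimal, generic illustration of Gaussian process regression for profile fitting (not the authors' implementation), the sketch below fits a smooth curve with uncertainty to noisy synthetic "profile" data using scikit-learn and then differentiates the fitted curve numerically; all quantities are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rho = np.linspace(0.0, 1.0, 25)[:, None]                  # normalized radius (synthetic)
T = 3.0 * np.exp(-3.0 * rho.ravel()) + np.random.default_rng(1).normal(0, 0.05, 25)

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.05**2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(rho, T)

rho_fine = np.linspace(0.0, 1.0, 200)[:, None]
T_fit, T_std = gpr.predict(rho_fine, return_std=True)     # fitted profile and 1-sigma band
grad = np.gradient(T_fit, rho_fine.ravel())                # gradient of the fitted profile
```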

  15. An R package to compute commonality coefficients in the multiple regression case: an introduction to the package and a practical example.

    Science.gov (United States)

    Nimon, Kim; Lewis, Mitzi; Kane, Richard; Haynes, R Michael

    2008-05-01

    Multiple regression is a widely used technique for data analysis in social and behavioral research. The complexity of interpreting such results increases when correlated predictor variables are involved. Commonality analysis provides a method of determining the variance accounted for by respective predictor variables and is especially useful in the presence of correlated predictors. However, computing commonality coefficients is laborious. To make commonality analysis accessible to more researchers, a program was developed to automate the calculation of unique and common elements in commonality analysis, using the statistical package R. The program is described, and a heuristic example using data from the Holzinger and Swineford (1939) study, readily available in the MBESS R package, is presented.
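    The cited program is written in R; as a language-neutral illustration of the underlying arithmetic, the Python sketch below computes unique and common variance components for two correlated predictors from the R² values of the full and reduced models. Data are simulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    # R-squared of an ordinary least squares fit
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(size=200)        # correlated predictors
y = x1 + 0.5 * x2 + rng.normal(size=200)

r2_full = r2(np.column_stack([x1, x2]), y)
r2_x1 = r2(x1[:, None], y)
r2_x2 = r2(x2[:, None], y)

unique_x1 = r2_full - r2_x2                  # variance explained only by x1
unique_x2 = r2_full - r2_x1                  # variance explained only by x2
common = r2_full - unique_x1 - unique_x2     # variance shared by x1 and x2
print(unique_x1, unique_x2, common)
```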

  16. Quantitative structure-property relationship study of n-octanol-water partition coefficients of some of diverse drugs using multiple linear regression

    International Nuclear Information System (INIS)

    Ghasemi, Jahanbakhsh; Saaidpour, Saadi

    2007-01-01

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 150 organic drug compounds to their n-octanol-water partition coefficients (log P(o/w)). Molecular descriptors were derived solely from the 3D structures of the drug molecules. A genetic algorithm was also applied as a variable selection tool in the QSPR analysis. The models were constructed using 110 molecules as the training set, and predictive ability was tested using 40 compounds. Modeling of log P(o/w) of these compounds as a function of the theoretically derived descriptors was established by multiple linear regression (MLR). Four descriptors, namely molecular volume (MV, geometrical), hydrophilic-lipophilic balance (HLB, constitutional), hydrogen bond forming ability (HB, electronic) and polar surface area (PSA, electrostatic), are taken as inputs for the model. The use of descriptors calculated only from molecular structure eliminates the need for experimental determination of properties for use in the correlation and allows the estimation of log P(o/w) for molecules not yet synthesized. Application of the developed model to a testing set of 40 organic drug compounds demonstrates that the model is reliable, with good predictive accuracy and a simple formulation. The prediction results are in good agreement with the experimental values. The root mean square error of prediction (RMSEP) and squared correlation coefficient (R²) for the MLR model were 0.22 and 0.99, respectively, for the prediction-set log P(o/w) values.

  17. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

    Science.gov (United States)

    Golmohammadi, Hassan

    2009-11-30

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and an artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds. Also, the results revealed the superiority of the ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.
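    A hedged sketch of this kind of model comparison is given below: MLR, PLS and a small neural network are fitted to synthetic descriptor data with scikit-learn and compared by R² on a held-out set. The seven descriptors and their values are invented stand-ins, not the published ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(141, 7))                          # 7 descriptors (stand-ins for PPSA-3, FPSA-3, ...)
logP = X @ np.array([0.8, -0.4, 0.3, 1.1, -0.2, 0.5, 0.1]) + rng.normal(0, 0.3, 141)
X_tr, X_te, y_tr, y_te = train_test_split(X, logP, random_state=0)

models = {
    "MLR": LinearRegression(),
    "PLS": PLSRegression(n_components=4),
    "ANN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(8,),
                                                        max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))     # R^2 on the prediction set
```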

  18. A novel hybrid method of beta-turn identification in protein using binary logistic regression and neural network.

    Science.gov (United States)

    Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz

    2012-01-01

    From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was initially used, for the first time, to select significant sequence parameters for the identification of β-turns, using a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn at position i+2. These significant parameters have the highest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74%, which is comparable with the results of other β-turn prediction methods. In conclusion, this study proves that the parameter selection ability of binary logistic regression together with the prediction capability of neural networks leads to the development of more precise models for identifying β-turns in proteins.
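    A hedged two-stage sketch of the hybrid idea follows: a logistic regression first screens synthetic sequence-derived features (here simply by coefficient magnitude rather than the formal significance testing used in the study), and a small neural network is then trained on the retained features with nine-fold cross-validation. The data and dimensions are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(565, 100))                       # 100 synthetic sequence parameters
y = (X[:, 0] + 0.7 * X[:, 1] + rng.normal(0, 1, 565) > 0).astype(int)  # beta-turn / not

stage1 = LogisticRegression(max_iter=2000).fit(X, y)
keep = np.argsort(-np.abs(stage1.coef_[0]))[:3]       # most influential parameters
stage2 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
acc = cross_val_score(stage2, X[:, keep], y, cv=9).mean()   # nine-fold cross-validation
print("selected features:", keep, "Qtotal approx.", round(acc, 2))
```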

  19. Study and construction of a β-spectrometer with uniform axial magnetic field fitted with a β-γ coincidence selector. Study of the β spectra of 32P, 203Hg, 198Au. Measurement of the conversion coefficients of 203Tl and of 198Hg; Etude et realisation d'un spectrometre-β a champ magnetique axial uniforme, muni d'un selecteur de coincidence β-γ. Etude des spectres β du 32P, 203Hg, 198Au. Mesure des coefficients de conversion du 203Tl et du 198Hg

    Energy Technology Data Exchange (ETDEWEB)

    Parsignault, D [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1964-06-01

    In the first part, the principle of the beta spectrometer with uniform axial field, which makes systematic use of the idea of caustics, is given. The apparatus is described and its properties are compared with those deduced from trajectory calculations. The β-ray and γ-ray detectors and the device for selecting coincidences with a 2τ resolution of 5 nanoseconds are also presented. In the second part, the spectrometer is used to study reference elements and the most accurate results are confirmed. The β spectrum of 60Co has a statistical form to an accuracy of 1 per cent; the maximum energy E0 is 316.5 ± 1.5 keV. That of the 7/2+ → 11/2 transition of 137Cs has a unique, once-forbidden form, with E0 = 522 ± 3 keV and conversion coefficients αK = (96 ± 1) × 10⁻³ and α(L+M+N) = (20.9 ± 0.5) × 10⁻³. The two β spectra of 59Fe, separated by coincidence with the gamma rays, have the statistical form, with E0 = 462 ± 2 keV (55.1 ± 0.3 per cent) and E1 = 275 ± 4 keV (44.9 ± 0.3 per cent). It is then verified whether the l selection rule is apparent in the shape of the 32P beta spectrum; it is found that this spectrum is not of statistical shape, and its shape coefficient is determined. A theoretical interpretation requires better approximations than those generally used, and the interpretation will not be unique. This work has also shown that the source contains a small proportion of 33P. The study of the 203Hg β spectrum followed by the 279 keV gamma ray is designed to determine the conversion coefficients. The interior spectrum of 198Au is not of statistical shape either; its shape coefficient is determined together with the conversion coefficients, which are in slight disagreement with those calculated by Rose or Sliv. An interpretation of the spectrum is put forward which proposes an imperfect compensation for the

  1. The mathematical modeling of the experiment on the determination of correlation coefficients in neutron beta-decay

    Science.gov (United States)

    Serebrov, A. P.; Zherebtsov, O. M.; Klyushnikov, G. N.

    2018-05-01

    An experiment for measuring the ratio of the axial coupling constant to the vector one is under development. The main idea of the experiment is to measure the values of A and B in the same setup, so that an additional measurement of the polarization is not necessary. The accuracy achieved to date in measuring λ is 2 × 10⁻³; in this experiment an accuracy of the order of 10⁻⁴ is expected. Some particular problems of the mathematical modeling of this experiment on the measurement of the ratio of the axial coupling constant to the vector one are considered. The field lines for the given tabulated field of the magnetic trap are studied, and the dependences of the longitudinal and transverse field non-uniformity coefficients on the coordinates are examined. A special computational algorithm, based on the motion of a charged particle along a local magnetic field line, is used to calculate the flight times of the electrons and protons and to evaluate the total number of electrons striking the detector surface. The average values of the cosines of the angles associated with the coefficients a, A and B have been estimated.

  2. A non-linear beta-binomial regression model for mapping EORTC QLQ- C30 to the EQ-5D-3L in lung cancer patients: a comparison with existing approaches.

    Science.gov (United States)

    Khan, Iftekhar; Morris, Stephen

    2014-11-12

    The performance of the Beta-Binomial (BB) model is compared with several existing models for mapping the EORTC QLQ-C30 (QLQ-C30) onto the EQ-5D-3L using data from lung cancer trials. Data from two separate non-small-cell lung cancer clinical trials (TOPICAL and SOCCAR) are used to develop and validate the BB model. Comparisons with Linear, Tobit, Quantile, Quadratic and CLAD models are carried out. The mean prediction error, R², proportion predicted outside the valid range, clinical interpretation of coefficients, model fit and estimation of quality-adjusted life years (QALYs) are reported and compared. Monte Carlo simulation is also used. The Beta-Binomial regression model performed 'best' among all models. For the TOPICAL and SOCCAR trials, respectively, the residual mean square error (RMSE) was 0.09 and 0.11; R² was 0.75 and 0.71; and observed vs. predicted means were 0.612 vs. 0.608 and 0.750 vs. 0.749. Mean differences in QALYs (observed vs. predicted) were 0.051 vs. 0.053 and 0.164 vs. 0.162 for TOPICAL and SOCCAR, respectively. When the models are tested on independent data, the simulated 95% confidence intervals from the BB model contain the observed mean more often (77% and 59% for TOPICAL and SOCCAR, respectively) than those of the other models. All algorithms over-predict at poorer health states, but the BB model was relatively better, particularly for the SOCCAR data. The BB model may offer superior predictive properties among the mapping algorithms considered and may be more useful when predicting EQ-5D-3L at poorer health states. We recommend the algorithm derived from the TOPICAL data due to its better predictive properties and lower uncertainty.

  3. Regression models to predict the behavior of the coefficient of friction of AISI 316L on UHMWPE under ISO 14243-3 conditions.

    Science.gov (United States)

    Garcia-Garcia, A L; Alvarez-Vera, M; Montoya-Santiyanes, L A; Dominguez-Lopez, I; Montes-Seguedo, J L; Sosa-Savedra, J C; Barceinas-Sanchez, J D O

    2018-06-01

    Friction is the natural response of all tribosystems. In a total knee replacement (TKR) prosthetic device, its measurement is hindered by the complex geometry of the component parts and of the testing simulation rig operating under the ISO 14243-3:2014 standard. To develop prediction models of the coefficient of friction (COF) between AISI 316L steel and ultra-high molecular weight polyethylene (UHMWPE) lubricated with fetal bovine serum dilutions, the arthrokinematics and loading conditions prescribed by the ISO 14243-3:2014 standard were translated to a simpler geometrical setup via Hertz contact theory. Tribological testing proceeded by loading a stainless steel AISI 316L ball against the surface of a UHMWPE disk, with the test fluid at 37 °C. The method has been applied to study the behavior of the COF during a whole walking cycle. On the other hand, the role of protein aggregation phenomena as a lubrication mechanism has been extensively studied in hip joint replacements but little explored for the operating conditions of a TKR. Lubricant test fluids were prepared with fetal bovine serum (FBS) dilutions having protein mass concentrations of 5, 10, 20 and 36 g/L. The results were contrasted against deionized, sterilized water. The results indicate that even at protein concentrations as low as 5 g/L, protein aggregation phenomena play an important role in the lubrication of the metal-on-polymer tribopair. The regression models of the COF developed herein are available for numerical simulations of the tribological behavior of the aforementioned tribosystem; in this case, surface stress rather than film thickness should be considered. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Development of a predictive model for distribution coefficient (Kd) of 137Cs and 60Co in marine sediments using multiple linear regression analysis

    International Nuclear Information System (INIS)

    Kumar, Ajay; Ravi, P.M.; Guneshwar, S.L.; Rout, Sabyasachi; Mishra, Manish K.; Pulhani, Vandana; Tripathi, R.M.

    2018-01-01

    Numerous common methods (the batch laboratory method, the column laboratory method, the field-batch method, field modeling and the Koc method) are used frequently for the determination of Kd values. Recently, multiple regression models have come to be considered the best new estimators for predicting the Kd of radionuclides in the environment. It is also a well-known fact that the Kd value is highly influenced by the physico-chemical properties of the sediment. Due to the significant variability in the influencing parameters, measured Kd values can range over several orders of magnitude under different environmental conditions. The aim of this study is to develop a predictive model for the Kd values of 137Cs and 60Co based on sediment properties using multiple linear regression analysis

  5. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  6. Prediction of the thermal expansion coefficients of bio diesels from several sources through the application of linear regression; Predicao dos coeficientes de expansao termica de biodieseis de diversas origens atraves da aplicacao da regressa linear

    Energy Technology Data Exchange (ETDEWEB)

    Canciam, Cesar Augusto [Universidade Tecnologica Federal do Parana (UTFPR), Campus Ponta Grossa, PR (Brazil)], e-mail: canciam@utfpr.edu.br

    2012-07-01

    When evaluating the consumption of biofuels, knowledge of the density is of great importance for correcting the effect of temperature. The thermal expansion coefficient is a thermodynamic property that provides a measure of the density variation in response to a temperature variation at constant pressure. This study aimed to predict the thermal expansion coefficients of ethyl biodiesels from castor bean, soybean, sunflower seed and Mabea fistulifera Mart. oils, and of methyl biodiesels from soybean, sunflower seed, souari nut, cotton, coconut, castor bean and palm oils, from beef tallow, chicken fat and residual hydrogenated vegetable fat. For this purpose, a linear regression analysis of the density of each biodiesel as a function of temperature was performed, with the data obtained from other works. The thermal expansion coefficients for the biodiesels lie between 6.3729×10⁻⁴ and 1.0410×10⁻³ °C⁻¹. In all cases, the correlation coefficients were above 0.99. (author)
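    The calculation described amounts to regressing density on temperature and converting the slope into a volumetric expansion coefficient, beta = -(1/rho)(drho/dT). The sketch below illustrates this with invented density data; it is not taken from the cited work.

```python
import numpy as np

T = np.array([20.0, 30.0, 40.0, 50.0, 60.0])            # temperature, °C
rho = np.array([880.0, 873.5, 867.1, 860.6, 854.2])     # density, kg/m3 (synthetic values)

slope, intercept = np.polyfit(T, rho, 1)                 # linear regression rho = a + b*T
beta = -slope / rho.mean()                                # expansion coefficient, 1/°C
print(f"drho/dT = {slope:.3f} kg/m3 per °C, beta = {beta:.2e} 1/°C")
```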

  7. Development of a predictive model for lead, cadmium and fluorine soil-water partition coefficients using sparse multiple linear regression analysis.

    Science.gov (United States)

    Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi

    2017-11-01

    In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt and clay). The K and n values of cadmium, lead and fluorine for 17 soil samples were used to verify the SMLR models by means of the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and loss on ignition (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33% and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium and fluorine to soil can be predicted from soil physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.
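    Sparse multiple linear regression is commonly implemented with an L1 penalty; the hedged scikit-learn sketch below selects a subset of nine soil parameters for predicting a log-transformed K value from simulated data. The parameter names and the generating model are assumptions made only for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
params = ["pH", "CEC", "SSA", "TC", "SOM", "WHC", "sand", "silt", "clay"]
X = rng.normal(size=(60, 9))
logK = 0.9 * X[:, 0] + 0.5 * X[:, 4] + rng.normal(0, 0.2, 60)   # depends on pH and SOM only

Xs = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(Xs, logK)                  # L1 penalty zeroes out irrelevant terms
selected = [p for p, c in zip(params, model.coef_) if abs(c) > 1e-6]
print("retained parameters:", selected)
```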

  8. T cell receptor (TCR-transgenic CD8 lymphocytes rendered insensitive to transforming growth factor beta (TGFβ signaling mediate superior tumor regression in an animal model of adoptive cell therapy

    Directory of Open Access Journals (Sweden)

    Quatromoni Jon G

    2012-06-01

    Full Text Available Abstract Tumor antigen-reactive T cells must enter an immunosuppressive tumor microenvironment, continue to produce cytokines and deliver apoptotic death signals in order to effect tumor regression. Many tumors produce transforming growth factor beta (TGFβ), which inhibits T cell activation, proliferation and cytotoxicity. In a murine model of adoptive cell therapy, we demonstrate that transgenic Pmel-1 CD8 T cells, rendered insensitive to TGFβ by transduction with a TGFβ dominant negative receptor II (DN), were more effective in mediating regression of established B16 melanoma. Smaller numbers of DN Pmel-1 T cells effectively mediated tumor regression and retained the ability to produce interferon-γ in the tumor microenvironment. These results support efforts to incorporate this DN receptor in clinical trials of adoptive cell therapy for cancer.

  9. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    Science.gov (United States)

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
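    A minimal Python counterpart of the Kendall-Theil (Theil-Sen) line described above is sketched below: the slope is the median of all pairwise slopes and the intercept makes the line pass through the medians of the data, which is what scipy's theilslopes implements. The data are simulated, including a few outliers, and this is not the KTRLine program itself.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 50)
y[::10] += 15                                    # a few outliers barely move the fit

slope, intercept, lo_slope, hi_slope = theilslopes(y, x)   # robust slope + confidence bounds
print(f"slope={slope:.2f} [{lo_slope:.2f}, {hi_slope:.2f}], intercept={intercept:.2f}")
```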

  10. Determination of the osmotic second virial coefficient and the dimerization of beta-lactoglobulin in aqueous solutions with added salt at the isoelectric point

    NARCIS (Netherlands)

    Schaink, H.M.; Smit, J.A.M.

    2000-01-01

    Aqueous solutions of β-lactoglobulin (at the isoelectric point pH=5.18) have been studied by membrane osmometry. The osmotic second virial coefficient as well as the monomer–dimer equilibrium of β-lactoglobulin have been found to depend significantly on the salt concentration. At low salt

  11. Drug treatment rates with beta-blockers and ACE-inhibitors/angiotensin receptor blockers and recurrences in takotsubo cardiomyopathy: A meta-regression analysis.

    Science.gov (United States)

    Brunetti, Natale Daniele; Santoro, Francesco; De Gennaro, Luisa; Correale, Michele; Gaglione, Antonio; Di Biase, Matteo

    2016-07-01

    In a recent paper, Singh et al. analyzed the effect of drug treatment on recurrence of takotsubo cardiomyopathy (TTC) in a comprehensive meta-analysis. The study found that recurrence rates were independent of the clinical rate of BB prescription, but inversely correlated with ACEi/ARB prescription; the authors therefore concluded that ACEi/ARB, rather than BB, may reduce the risk of recurrence. We aimed to re-analyze the data reported in the study, now weighted for population size, in a meta-regression analysis. After multiple meta-regression analysis, we found a significant regression of TTC recurrence rates on rates of ACEi prescription; the regression was not statistically significant for BBs. On the basis of our re-analysis, we confirm that rates of recurrence of TTC are lower in populations of patients with higher rates of treatment with ACEi/ARB. This does not necessarily imply that ACEi prevent recurrence of TTC, but merely that, for example, recurrence rates are lower in cohorts that are more compliant with therapy, or that are prescribed ACEi more often because they are followed more carefully. Randomized prospective studies are surely warranted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
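    A population-size-weighted meta-regression of this kind can be sketched as a weighted least squares fit; the cohort-level numbers below are invented purely to show the mechanics, not taken from the cited study.

```python
import numpy as np
import statsmodels.api as sm

acei_rate = np.array([0.30, 0.45, 0.55, 0.60, 0.75, 0.80])    # proportion prescribed ACEi/ARB
recurrence = np.array([0.060, 0.050, 0.045, 0.040, 0.030, 0.025])  # recurrence rate per cohort
n_patients = np.array([40, 120, 85, 200, 150, 60])             # cohort sizes used as weights

X = sm.add_constant(acei_rate)
fit = sm.WLS(recurrence, X, weights=n_patients).fit()
print(fit.params, fit.pvalues)    # a negative slope means lower recurrence with more ACEi/ARB
```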

  12. Earning on Response Coefficient in Automobile and Go Public Companies

    Directory of Open Access Journals (Sweden)

    Lisdawati Arifin

    2017-09-01

    Full Text Available This study aims to analyze the factors that influence earnings response coefficients (ERC), simultaneously and partially: leverage, systematic risk (beta), growth opportunities (market-to-book value ratio), and firm size. For the sample, the author selected 12 automotive and component companies that met the data-completeness criteria for the years 2008 to 2012, entirely on the basis of the following criteria: (1) the automotive and components company is listed on the stock exchange; (2) it has financial statements for the years 2008-2012; and (3) return data (the closing price on the first day after the date of issuance of the financial statements) are available. This study uses secondary data and applies multiple linear regression models to analyze and test the effect of the independent variables on the dependent variable partially (t-test) and simultaneously (F-test), and to assess the goodness of fit (R-square) of the research model. The results show that leverage, beta, growth opportunities (market-to-book value ratio) and firm size simultaneously affect the dependent variable, the earnings response coefficient. Partially, leverage negatively affects the earnings response coefficient; beta is negatively related to the earnings response coefficient; growth opportunities (market-to-book value ratio) have a significant effect on the earnings response coefficient; and firm size has a significant effect on the earnings response coefficient.

  13. The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science

    Science.gov (United States)

    Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.

    2017-01-01

    A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…

  14. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  15. Search for a nonzero triple-correlation coefficient and new experimental limit on T invariance in polarized-neutron beta decay

    International Nuclear Information System (INIS)

    Steinberg, R.I.; Liaud, P.; Vignon, B.; Hughes, V.W.

    1976-01-01

    A detailed description of an experimental test of time-reversal invariance in the β decay of the polarized free neutron is presented. The experiment consists of a measurement of the triple-correlation coefficient D between the neutron polarization vector and the electron and antineutrino momentum vectors. A nonzero value for this coefficient would imply T violation, since final-state interactions and other corrections may be neglected at the present level of precision. The experiment was performed using a cold-neutron beam at the High Flux Reactor of the Institut Laue-Langevin, Grenoble. A polarizing neutron guide tube yielded a beam intensity of 10⁹ neutrons/sec with a polarization of 70%. Our result, based upon observation of approximately 6 × 10⁶ decays, is D = (-1.1 ± 1.7) × 10⁻³, consistent with time-reversal invariance in the ΔS = 0 weak interaction. In terms of the relative phase angle between axial-vector and vector coupling constants, the result may be expressed as φ = (180.14 ± 0.22)°

  16. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  17. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. Algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits a set of data satisfactorily.

  18. Avaliação do coeficiente de atrito de braquetes metálicos e estéticos com fios de aço inoxidável e beta-titânio Evaluation of the friction coefficient of metal and esthetic brackets with stainless steel and beta-titanium wires

    Directory of Open Access Journals (Sweden)

    Cristine Pritsch Braga

    2004-12-01

    Full Text Available An important factor that defines the effectiveness of fixed orthodontic appliances is the friction between the surfaces of wires and brackets. Thus, this study was developed in order to investigate the static friction coefficient between stainless steel and beta-titanium wires (TP Orthodontics) and stainless steel brackets (Dynalock® - Unitek), esthetic brackets with a stainless steel slot (Clarity® - Unitek) and conventional esthetic brackets (Allure® - GAC). For this purpose, a testing device was built at the Department of Mechanical and Mechatronics Engineering of PUCRS. Before the tests began, the method error was quantified, and it was found that the operator factor had no significant influence (p>0.05) on the measurements. The friction coefficient was then calculated by dividing the friction force by the normal load. The statistical methods used in this study were analysis of variance (ANOVA) and Tukey's multiple comparison test. It was found that: 1) the combination with the lowest friction coefficient was the stainless steel wire with the Dynalock® bracket, and the one with the highest coefficient was the Allure® bracket with the beta-titanium wire; 2) the beta-titanium wire showed a significantly higher friction coefficient than the stainless steel wire; 3) the Dynalock® bracket did not differ significantly from the Clarity® bracket in friction coefficient when the beta-titanium wire was used, but showed a significantly lower friction coefficient when the stainless steel wire was tested. The Clarity® bracket showed a significantly lower friction coefficient than the Allure® bracket.

  19. Tools to support interpreting multiple regression in the face of multicollinearity.

    Science.gov (United States)

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
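    As a small illustration of two of the indices listed above, the sketch below computes standardized beta weights and structure coefficients (the correlation of each predictor with the predicted scores) for two deliberately collinear simulated predictors; the data are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import zscore

rng = np.random.default_rng(8)
x1 = rng.normal(size=300)
x2 = 0.8 * x1 + rng.normal(0, 0.6, 300)         # deliberately collinear with x1
y = x1 + x2 + rng.normal(0, 1, 300)

Z = np.column_stack([zscore(x1), zscore(x2)])
beta_weights = LinearRegression().fit(Z, zscore(y)).coef_    # standardized beta weights

yhat = LinearRegression().fit(Z, y).predict(Z)
structure = [np.corrcoef(x, yhat)[0, 1] for x in (x1, x2)]   # r(predictor, y-hat)
print("beta weights:", beta_weights, "structure coefficients:", structure)
```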

  20. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  1. Measurement of the β-ν angular correlation parameter in the decay of 6He using a Paul trap; Mesure du coefficient de correlation angulaire β-ν dans la decroissance de 6He a l'aide d'un piege de Paul

    Energy Technology Data Exchange (ETDEWEB)

    Mery, A

    2007-07-15

    The central topic of this work is the study of the properties and the implementation of a Paul trap used for the measurement of the beta-neutrino angular correlation parameter in the decay of 6He. This coefficient provides a signature of the nature of the interactions involved in the weak interaction, and its value can be deduced from the kinematic distribution of the decay events. An electromagnetic trap is used to confine 6He+ ions in a small volume. This trap has an open geometry that enables the detection in coincidence of the electron and the recoil ion emitted in the beta decay. A dedicated detection setup is used to measure, for each event, the electron energy, the ion time of flight and the positions of the two particles. A general description of the LPCTrap facility and of its performance is presented, showing that this setup is able to carry out the proposed measurement. In particular, a comparison is made between the characteristics of the ion cloud obtained from Monte Carlo simulations and the experimental measurements, with good agreement. More than 100 000 coincident events were recorded during the first experiment. A preliminary analysis of these results is shown, including a description of the different observables as well as a comparison between the experimental time-of-flight spectrum and the simulated spectrum. These data will allow a measurement of the angular correlation parameter with a statistical error smaller than 2%. (author)

  2. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  3. BETA digital beta radiometer

    International Nuclear Information System (INIS)

    Borovikov, N.V.; Kosinov, G.A.; Fedorov, Yu.N.

    1989-01-01

    A portable digital beta radiometer for measuring the specific activity of beta-decaying radionuclides in the range from 5×10⁻⁹ up to 10⁻⁶ Ci/kg (Ci/l) with an error of ±25% has been designed and introduced into commercial production for the determination of the volume and specific radioactivity of water and food. The device specifications are given. Experience with the BETA radiometer under the conditions of the Chernobyl' NPP 30-km zone has shown that it is convenient for measuring specific activities of the order of 10⁻⁸ Ci/kg, and the use of a set of different beta detectors makes it possible to measure surface contamination over a wide range of values

  4. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that enables statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformation, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving for the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
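    A minimal sketch of the complex-number encoding (assuming 2-D vectors and synthetic data, not the paper's worked example) is given below: each vector is stored as a complex number and the vector coefficient is recovered by complex least squares.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100
u = rng.normal(size=n) + 1j * rng.normal(size=n)         # independent vector variable
s = rng.normal(size=n)                                    # a scalar covariate
b_true = 0.8 - 0.5j                                       # true (vector) coefficient
v = b_true * u + 0.3 * s + (rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n))

X = np.column_stack([u, s.astype(complex)])
coef, *_ = np.linalg.lstsq(X, v, rcond=None)              # complex least squares
print(coef)                                                # first entry close to 0.8-0.5j
```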

  5. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and hence a smaller sample size is needed. This article discusses this reduction in sample size when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  6. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis it is expected that there is correlation between the response and the predictor(s), but correlation among the predictors is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data, experience, etc. In the end, the selection of the most important predictors is left to the researcher's judgment. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes and its consequences for the reliability of the regression model.
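    A quick way to see this inflation is to compute variance inflation factors alongside the coefficient standard errors; the hedged sketch below does so with statsmodels on simulated, nearly collinear predictors.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(10)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(0, 0.1, 200)          # nearly collinear with x1
y = x1 + x2 + rng.normal(size=200)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
print(fit.bse[1:], vifs)                    # large standard errors go with large VIFs
```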

  7. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  8. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  9. Meta-regression analyses, meta-analyses, and trial sequential analyses of the effects of supplementation with beta-carotene, vitamin A, and vitamin E singly or in different combinations on all-cause mortality

    DEFF Research Database (Denmark)

    Bjelakovic, Goran; Nikolova, Dimitrinka; Gluud, Christian

    2013-01-01

    Evidence shows that antioxidant supplements may increase mortality. Our aims were to assess whether different doses of beta-carotene, vitamin A, and vitamin E affect mortality in primary and secondary prevention randomized clinical trials with low risk of bias.

  10. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  11. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  12. Speculative Betas

    OpenAIRE

    Harrison Hong; David Sraer

    2012-01-01

    We provide a model for why high beta assets are more prone to speculative overpricing than low beta ones. When investors disagree about the common factor of cash-flows, high beta assets are more sensitive to this macro-disagreement and experience a greater divergence-of-opinion about their payoffs. Short-sales constraints for some investors such as retail mutual funds result in high beta assets being over-priced. When aggregate disagreement is low, expected return increases with beta due to r...

  13. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
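    A worked numerical illustration (not the authors' code) of how the symmetric slope estimators relate to the two OLS slopes is sketched below for simulated bivariate data; it computes OLS(Y|X), OLS(X|Y) expressed as dY/dX, their bisector, and the reduced major-axis slope.

```python
import numpy as np

rng = np.random.default_rng(11)
x = rng.normal(size=500)
y = 1.5 * x + rng.normal(0, 1.0, 500)

r = np.corrcoef(x, y)[0, 1]
b_yx = r * y.std() / x.std()                       # OLS of Y on X
b_xy = 1.0 / (r * x.std() / y.std())               # OLS of X on Y, expressed as dY/dX
bisector = (b_yx * b_xy - 1 + np.sqrt((1 + b_yx**2) * (1 + b_xy**2))) / (b_yx + b_xy)
rma = np.sign(r) * y.std() / x.std()               # reduced major-axis slope
print(b_yx, b_xy, bisector, rma)
```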

  14. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.

  15. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  16. Interpreting Multiple Logistic Regression Coefficients in Prospective Observational Studies

    Science.gov (United States)

    1982-11-01

    prompted close examination of the issue at a workshop on hypertriglyceridemia where some of the cautions and perspectives given in this paper were...characteristics. If this is not the interest, then to isolate and understand the effect of a characteristic on CHD when it could be one of several interacting...also easily extended to the case when several independent variables are modeled in a multiple logistic equation. In this instance, if x1, x2, ..., x are

  17. Beta spectrometry

    International Nuclear Information System (INIS)

    Dryak, P.; Zderadicka, J.; Plch, J.; Kokta, L.; Novotna, P.

    1977-01-01

    For the purpose of beta spectrometry, a semiconductor spectrometer with one Si(Li) detector cooled with liquid nitrogen was designed. The geometrical detection efficiency is about 10% of 4π sr. The achieved resolution for 624 keV conversion electrons of 137mBa is 2.6 keV (FWHM). A program was written in the FORTRAN language for the correction of the deformation of the measured spectra by backscattering in the analysis of continuous beta spectra. The method permits the determination of the maximum energy of the beta spectrum with an accuracy of ±5 keV. (author)

  18. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
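
    For illustration, a minimal numpy sketch of a multiple linear regression with an interaction term, followed by variance inflation factors as a simple multicollinearity check; the data and variable names are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 300
      x1 = rng.normal(size=n)
      x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)  # deliberately correlated with x1
      y = 2.0 + 1.0 * x1 - 0.5 * x2 + 0.7 * x1 * x2 + rng.normal(size=n)

      # Design matrix with intercept and an interaction term
      X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("coefficients (intercept, x1, x2, x1*x2):", np.round(beta, 3))

      def vif(X, j):
          """Variance inflation factor of column j, regressed on the other columns."""
          others = np.delete(X, j, axis=1)
          coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
          resid = X[:, j] - others @ coef
          r2 = 1.0 - resid.var() / X[:, j].var()
          return 1.0 / (1.0 - r2)

      for j, name in [(1, "x1"), (2, "x2"), (3, "x1*x2")]:
          print(f"VIF({name}) = {vif(X, j):.2f}")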

  19. Beta Blockers

    Science.gov (United States)

    ... may not work as effectively for people of African heritage and older people, especially when taken without ... conditions/high-blood-pressure/in-depth/beta-blockers/ART-20044522 ...

  20. Realized Beta GARCH

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Voev, Valeri Radkov

    2014-01-01

    is particularly useful for modeling financial returns during periods of rapid changes in the underlying covariance structure. When applied to market returns in conjunction with returns on an individual asset, the model yields a dynamic model specification of the conditional regression coefficient that is known...

  1. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  3. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J.; Zhao, Hongyu

    2015-01-01

    predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths

  4. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    OpenAIRE

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scale response functional multivariate regression model is considered. By using the basis functions representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification for multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...

  5. Back to the Future Betas: Empirical Asset Pricing of US and Southeast Asian Markets

    Directory of Open Access Journals (Sweden)

    Jordan French

    2016-07-01

    Full Text Available The study adds an empirical outlook on the predictive power of using data from the future to predict future returns. The crux of the traditional Capital Asset Pricing Model (CAPM) methodology is the use of historical data in the calculation of the beta coefficient. This study instead uses a battery of Generalized Auto Regressive Conditional Heteroskedasticity (GARCH) models, of differing lag and parameter terms, to forecast the variance of the market used in the denominator of the beta formula. The covariance of the portfolio and market returns is assumed to remain constant in the time-varying beta calculations. The data span from 3 January 2005 to 29 December 2014. One ten-year, two five-year, and three three-year sample periods were used, for robustness, with ten different portfolios. For out-of-sample forecasts, mean absolute error (MAE) and mean squared forecast error (MSE) were used to compare the forecasting ability of the ex-ante GARCH models, an Artificial Neural Network, and the standard market ex-post model. We find that the time-varying MGARCH and SGARCH betas performed better in out-of-sample testing than the other ex-ante models, although the simplest approach, the constant ex-post beta, performed as well or better within this empirical study.
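
    For illustration, a minimal numpy sketch of the beta construction described above: the portfolio-market covariance is held constant while the market variance in the denominator is replaced by a one-step GARCH(1,1) forecast. The return series and GARCH parameters below are assumed values, not estimates from the study.

      import numpy as np

      rng = np.random.default_rng(2)
      r_m = rng.normal(0.0, 0.01, 500)               # hypothetical market returns
      r_p = 0.9 * r_m + rng.normal(0.0, 0.005, 500)  # hypothetical portfolio returns

      cov_pm = np.cov(r_p, r_m)[0, 1]     # held constant, as in the study design
      beta_expost = cov_pm / np.var(r_m)  # standard constant (ex-post) beta

      # One-step-ahead GARCH(1,1) variance recursion with illustrative parameters
      omega, alpha, gamma = 1e-6, 0.05, 0.90  # assumed, not fitted
      sigma2 = np.empty(len(r_m))
      sigma2[0] = np.var(r_m)
      for t in range(1, len(r_m)):
          sigma2[t] = omega + alpha * r_m[t - 1] ** 2 + gamma * sigma2[t - 1]

      beta_t = cov_pm / sigma2  # time-varying (ex-ante) beta series
      print(f"constant beta {beta_expost:.3f}, time-varying beta range "
            f"[{beta_t.min():.3f}, {beta_t.max():.3f}]")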

  6. Beta-hemolytic Streptococcal Bacteremia

    DEFF Research Database (Denmark)

    Nielsen, Hans Ulrik; Kolmos, Hans Jørn; Frimodt-Møller, Niels

    2002-01-01

    Bacteremia with beta-hemolytic Streptococci groups A, B, C and G has a mortality rate of approximately 20%. In this study we analyzed the association of various patient risk factors with mortality. Records from 241 patients with beta-hemolytic streptococcal bacteremia were reviewed with particular...... attention to which predisposing factors were predictors of death. A logistic regression model found age, burns, immunosuppressive treatment and iatrogenic procedures prior to the infection to be significant predictors of death, with odds ratios of 1.7 (per decade), 19.7, 3.6 and 6.8, respectively...

  7. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
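
    For illustration, a minimal numpy sketch that computes the slope b, the intercept a, and the coefficient of determination R² of a simple linear regression; the data are hypothetical.

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # hypothetical predictor
      y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])  # hypothetical outcome

      b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
      a = y.mean() - b * x.mean()  # so that Y = a + b*x passes through the means

      y_hat = a + b * x
      r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"intercept a = {a:.3f}, slope b = {b:.3f}, R^2 = {r2:.3f}")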

  8. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  9. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based......Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights...... by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even...

  10. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...

  11. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  12. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models... highly recommend[ed]... for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics. This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  13. Beta Emission and Bremsstrahlung

    Energy Technology Data Exchange (ETDEWEB)

    Karpius, Peter Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-13

    Bremsstrahlung is continuous radiation produced by beta particles decelerating in matter; different beta emitters have different endpoint energies; high-energy betas interacting with high-Z materials will more likely produce bremsstrahlung; depending on the data, sometimes all you can say is that a beta emitter is present.

  14. Transport Coefficients of Fluids

    CERN Document Server

    Eu, Byung Chan

    2006-01-01

    Until recently the formal statistical mechanical approach offered no practicable method for computing the transport coefficients of liquids, and so most practitioners had to resort to empirical fitting formulas. This has now changed, as demonstrated in this innovative monograph. The author presents and applies new methods based on statistical mechanics for calculating the transport coefficients of simple and complex liquids over wide ranges of density and temperature. These molecular theories enable the transport coefficients to be calculated in terms of equilibrium thermodynamic properties, and the results are shown to account satisfactorily for experimental observations, including even the non-Newtonian behavior of fluids far from equilibrium.

  15. Tracking time-varying coefficient-functions

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Joensen, Alfred K.

    2000-01-01

    is a combination of recursive least squares with exponential forgetting and local polynomial regression. It is argued, that it is appropriate to let the forgetting factor vary with the value of the external signal which is the argument of the coefficient functions. Some of the key properties of the modified method...... are studied by simulation...

  16. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
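
    For illustration, a minimal sketch of a logistic regression whose exponentiated coefficients are read as adjusted odds ratios, here fitted with the statsmodels package on simulated data; the variables (age, smoker) are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 500
      age = rng.normal(50, 10, n)     # hypothetical explanatory variables
      smoker = rng.integers(0, 2, n)
      logit_p = -6.0 + 0.08 * age + 0.9 * smoker
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

      X = sm.add_constant(np.column_stack([age, smoker]))
      fit = sm.Logit(y, X).fit(disp=0)

      # Exponentiated coefficients are the adjusted odds ratios
      odds_ratios = np.exp(fit.params)
      print("odds ratios (const, age, smoker):", np.round(odds_ratios, 3))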

  17. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  18. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  19. The photodiodes response in beta dosimetry

    International Nuclear Information System (INIS)

    Khoury, Helen; Amaral, Ademir; Hazin, Clovis; Melo, Francisco

    1996-01-01

    The response of the photodiodes BPY-12, BPW-34 and SFH-206 is tested for use as beta dosimeters. The results obtained show a dose-response relationship as well as a coefficient of variation of less than 1% for the reproducibility of their responses. The photodiode BPY-12 presented a better response in comparison with the others

  20. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
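
    For illustration, a rough numpy sketch of the idea behind a p-value trace: closed-form ridge coefficients with Wald-type p-values evaluated over a grid of shrinkage parameters. This is a generic approximation on simulated data, not the specific significance test developed in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      n, p = 200, 10
      X = rng.normal(size=(n, p))
      X[:, 1] = X[:, 0] + rng.normal(scale=0.1, size=n)  # two highly correlated predictors
      y = 1.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n)

      X = X - X.mean(axis=0)  # centre so no intercept is needed
      y = y - y.mean()

      for k in (0.1, 1.0, 10.0):  # shrinkage (ridge) parameters
          W = np.linalg.inv(X.T @ X + k * np.eye(p))
          beta = W @ X.T @ y                # ridge coefficients
          resid = y - X @ beta
          sigma2 = resid @ resid / (n - p)  # crude residual variance
          cov = sigma2 * W @ X.T @ X @ W    # approximate covariance of the estimates
          z = beta / np.sqrt(np.diag(cov))
          pvals = 2 * stats.norm.sf(np.abs(z))  # Wald-type p-values
          print(f"k={k:5.1f}  p-values:", np.round(pvals, 3))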

  1. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF‐weights) and call the combination of weights the doubly weighted SVR. We illustrate...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...

  2. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
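
    For illustration, a minimal statsmodels sketch that fits a Poisson regression to simulated overdispersed counts, checks the Pearson dispersion statistic, and refits with a negative binomial family; the data and coefficient values are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 400
      x = rng.normal(size=n)                      # hypothetical covariate
      mu = np.exp(0.5 + 0.8 * x)
      y = rng.negative_binomial(5, 5 / (5 + mu))  # counts with extra-Poisson variation

      X = sm.add_constant(x)
      pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      print("Poisson coefficients:", np.round(pois.params, 3))

      # Rough overdispersion check: Pearson chi-square per residual degree of freedom
      print("dispersion:", round(pois.pearson_chi2 / pois.df_resid, 2))

      # Negative binomial alternative when the dispersion is well above 1
      nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
      print("negative binomial coefficients:", np.round(nb.params, 3))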

  3. Standardization of low energy beta and beta-gamma complex emitters by the tracer and the efficiency extrapolation methods

    International Nuclear Information System (INIS)

    Sahagia, M.

    1978-01-01

    The absolute standardization of radioactive solutions of low energy beta emitters and beta-gamma emitters with a high probability of disintegration to the ground state is described; the tracer and the efficiency extrapolation methods were used. Both types of radionuclides were mathematically and physically treated in a unified manner. The theoretical relations between different beta spectra were calculated according to Williams' model and experimentally verified for: 35 S + 60 Co, 35 S + 95 Nb, 147 Pm + 60 Co, 14 C + 95 Nb and two beta branches of 99 Mo. The optimum range of beta efficiency variation was indicated. The basic supposition that all beta efficiencies tend to unity at the same time was experimentally verified, using two 192 Ir beta branches. Four computer programs, written in the FORTRAN IV language, were developed for the processing of the experimental data. Good precision coefficients according to international standards were obtained in the absolute standardization of 35 S, 147 Pm, 99 Mo solutions. (author)

  4. Fracture risk in perimenopausal women treated with beta-blockers

    DEFF Research Database (Denmark)

    Rejnmark, Lars; Vestergaard, Peter; Kassem, M.

    2004-01-01

    beta2-Adrenergic receptors have been identified on human osteoblastic and osteoclastic cells, raising the question of a sympathetic regulation of bone metabolism. We investigated effects of treatment with beta-adrenergic receptor antagonists (beta-blockers) on bone turnover, bone mineral density...... (BMD), and fracture risk. Within the Danish Osteoporosis Prevention Study (DOPS), a population-based, comprehensive cohort study of 2016 perimenopausal women, associations between treatment with beta-blockers and bone turnover and BMD were assessed in a cross-sectional design at the start of the study.... Moreover, in a nested case-control design, fracture risk during the subsequent 5 years was assessed in relation to treatment with beta-blockers at baseline. Multiple regression and logistic regression analyses were performed. Treatment with beta-blockers was associated with a threefold increased fracture...

  5. Discharge Coefficient of Rectangular Short-Crested Weir with Varying Slope Coefficients

    Directory of Open Access Journals (Sweden)

    Yuejun Chen

    2018-02-01

    Full Text Available Rectangular short-crested weirs are widely used for their simple structure and high discharge capacity. As one of the most important factors influencing discharge capacity, the side slope can improve the hydraulic characteristics of weirs under special conditions. In order to systematically study the effects of the upstream and downstream slope coefficients S1 and S2 on the overflow discharge coefficient of a rectangular short-crested weir, the Volume of Fluid (VOF) method and the Renormalization Group (RNG) κ-ε turbulence model are used. In this study, the slope coefficient ranges from vertical to 3H:1V and each model corresponds to five total energy heads H0 ranging from 8.0 to 24.0 cm. Comparisons of discharge coefficients and free surface profiles between simulated and laboratory results show good agreement. The simulated results show that the difference in discharge coefficients decreases with upstream slope and increases with downstream slope as H0 increases. For a given H0, the discharge coefficient has a convex parabolic relation with S1 and a piecewise linear relation with S2. The maximum discharge coefficient is always obtained at S2 = 0.8. Upstream and downstream slope coefficients differ in the range over which they influence the free surface curvature. Furthermore, a discharge coefficient equation obtained by nonlinear regression is proposed as a function of the upstream and downstream slope coefficients.

  6. Neutrosophic Correlation and Simple Linear Regression

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2014-09-01

    Full Text Available Since the world is full of indeterminacy, the neutrosophics have found their place in contemporary research. The fundamental concepts of the neutrosophic set were introduced by Smarandache. Recently, Salama et al. introduced the concept of the correlation coefficient of neutrosophic data. In this paper, we introduce and study the concepts of correlation and the correlation coefficient of neutrosophic data in probability spaces and study some of their properties. We also introduce and study the neutrosophic simple linear regression model. Possible applications to data processing are touched upon.

  7. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is......, however, a different and therefore new estimator. It allows for both linear- and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question....

  8. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface...... for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by...... functionals. The software presented here is implemented in the riskRegression package....

  9. Levered and unlevered Beta

    OpenAIRE

    Fernandez, Pablo

    2003-01-01

    We prove that in a world without leverage cost the relationship between the levered beta (βL) and the unlevered beta (βu) is the No-costs-of-leverage formula: βL = βu + (βu - βd) D (1 - T) / E. We also analyze 6 alternative valuation theories proposed in the literature to estimate the relationship between the levered beta and the unlevered beta (Harris and Pringle (1985), Modigliani and Miller (1963), Damodaran (1994), Myers (1974), Miles and Ezzell (1980), and practitioners) and prove that all ...
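
    For illustration, a small function implementing the no-costs-of-leverage relation quoted above; the numbers are hypothetical, and setting the debt beta to zero recovers the familiar Damodaran-style simplification.

      def levered_beta(beta_u, beta_d, debt, equity, tax_rate):
          """No-costs-of-leverage relation: bL = bu + (bu - bd) * D * (1 - T) / E."""
          return beta_u + (beta_u - beta_d) * debt * (1.0 - tax_rate) / equity

      # Illustrative inputs only
      bu, bd, D, E, T = 0.9, 0.2, 40.0, 60.0, 0.30
      print(levered_beta(bu, bd, D, E, T))   # full formula with a non-zero debt beta
      print(levered_beta(bu, 0.0, D, E, T))  # debt beta set to zero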

  10. Sigma beta decay

    International Nuclear Information System (INIS)

    Newman, D.E.

    1975-01-01

    Describes an experiment to measure beta decays of the sigma particle. Sigmas produced by stopping a K⁻ beam in a liquid hydrogen target decayed in the following reactions: Kp → Σπ; Σ → Neν. The electron and pion were detected by wire spark chambers in a magnetic spectrometer and by plastic scintillators, and were differentiated by a threshold gas Cherenkov counter. The neutron was detected by liquid scintillation counters. The data ...

  11. Beta Thalassemia (For Parents)

    Science.gov (United States)

    Thalassemias are a group of blood disorders that ...

  12. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  13. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  14. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  15. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single-epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...... with the proposed explicit noise-model extension....

  17. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  18. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  19. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Dyar, M.D., E-mail: mdyar@mtholyoke.edu [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Carmosino, M.L.; Breves, E.A.; Ozanne, M.V. [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Clegg, S.M.; Wiens, R.C. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    response variables as possible while avoiding multicollinearity between principal components. When the selected number of principal components is projected back into the original feature space of the spectra, 6144 correlation coefficients are generated, a small fraction of which are mathematically significant to the regression. In contrast, the lasso models require only a small number (< 24) of non-zero correlation coefficients (β values) to determine the concentration of each of the ten major elements. Causality between the positively-correlated emission lines chosen by the lasso and the elemental concentration was examined. In general, the higher the lasso coefficient (β), the greater the likelihood that the selected line results from an emission of that element. Emission lines with negative β values should arise from elements that are anti-correlated with the element being predicted. For elements except Fe, Al, Ti, and P, the lasso-selected wavelength with the highest β value corresponds to the element being predicted, e.g. 559.8 nm for neutral Ca. However, the specific lines chosen by the lasso with positive β values are not always those from the element being predicted. Other wavelengths and the elements that most strongly correlate with them to predict concentration are obviously related to known geochemical correlations or close overlap of emission lines, while others must result from matrix effects. Use of the lasso technique thus directly informs our understanding of the underlying physical processes that give rise to LIBS emissions by determining which lines can best represent concentration, and which lines from other elements are causing matrix effects. - Highlights: ► Compositions of 100 rocks are predicted from LIBS with PLS-1, PLS-2, and the lasso. ► All yield comparable results in terms of accuracy, but not interpretability. ► Lasso chooses channels from

  20. Forward-Looking Betas

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Vainberg, Gregory

    Few issues are more important for finance practice than the computation of market betas. Existing approaches compute market betas using historical data. While these approaches differ in terms of statistical sophistication and the modeling of the time-variation in the betas, they are all backward......-looking. This paper introduces a radically different approach to estimating market betas. Using the tools in Bakshi and Madan (2000) and Bakshi, Kapadia and Madan (2003) we employ the information embedded in the prices of individual stock options and index options to compute our forward-looking market beta...

  1. Forward-Looking Beta Estimates: Evidence from an Emerging Market

    OpenAIRE

    Onour, Ibrahim

    2008-01-01

    Results in this paper support evidence of time-varying beta coefficients for five sectors in Kuwait Stock Market. The paper indicates banks, food, and service sectors exhibit relatively wider range of variation compared to industry and real estate sectors. Results of time-varying betas invalidate the standard application of Capital Asset Pricing model that assumes constant beta. In terms of risk exposure, banks and industrial sectors reflect higher risk as their average betas exceed the mark...

  2. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.

  3. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  4. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.

  5. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  6. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  7. Attenuation coefficients of soils

    International Nuclear Information System (INIS)

    Martini, E.; Naziry, M.J.

    1989-01-01

    As a prerequisite to the interpretation of gamma-spectrometric in situ measurements of activity concentrations of soil radionuclides, the attenuation of 60 to 1332 keV gamma radiation by soil samples varying in water content and density has been investigated. A useful empirical equation could be set up to describe the dependence of the mass attenuation coefficient upon photon energy for soil with a mean water content of 10%, with the results comparing well with data in the literature. The mean density of soil in the GDR was estimated at 1.6 g/cm³. This value was used to derive the linear attenuation coefficients, their range of variation being 10%. 7 figs., 5 tabs. (author)

  8. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  9. Predicting Bond Betas using Macro-Finance Variables

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Christiansen, Charlotte; Cipollini, Andrea

    We conduct in-sample and out-of-sample forecasting using the new approach of combining explanatory variables through complete subset regressions (CSR). We predict bond CAPM betas and bond returns conditioning on various macro-finance variables. We explore differences across long-term government ... bonds, investment grade corporate bonds, and high-yield corporate bonds. The CSR method performs well in predicting bond betas, especially in-sample, and mainly high-yield bond betas when the focus is out-of-sample. Bond returns are less predictable than bond betas....
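
    For illustration, a minimal numpy sketch of the complete subset regression idea: fit every k-predictor subset by OLS and average the resulting forecasts. The predictors and the subset size k are hypothetical, not the macro-finance variables used in the paper.

      import numpy as np
      from itertools import combinations

      def csr_forecast(X_train, y_train, x_new, k):
          """Average the OLS forecasts from all k-predictor subset regressions."""
          n_pred = X_train.shape[1]
          forecasts = []
          for subset in combinations(range(n_pred), k):
              cols = list(subset)
              Xs = np.column_stack([np.ones(len(X_train)), X_train[:, cols]])
              coef, *_ = np.linalg.lstsq(Xs, y_train, rcond=None)
              xs_new = np.concatenate([[1.0], x_new[cols]])
              forecasts.append(xs_new @ coef)
          return np.mean(forecasts)

      rng = np.random.default_rng(6)
      X = rng.normal(size=(120, 6))  # hypothetical predictor panel
      beta_series = 0.4 * X[:, 0] - 0.2 * X[:, 4] + rng.normal(scale=0.1, size=120)
      print(csr_forecast(X[:-1], beta_series[:-1], X[-1], k=2))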

  10. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Science.gov (United States)

    Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

    2012-01-01

    Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

  11. Testing the equality of nonparametric regression curves based on ...

    African Journals Online (AJOL)

    Abstract. In this work we propose a new methodology for the comparison of two regression functions f1 and f2 in the case of homoscedastic error structure and a fixed design. Our approach is based on the empirical Fourier coefficients of the regression functions f1 and f2 respectively. As our main results we obtain the ...

  12. Implicit collinearity effect in linear regression: Application to basal ...

    African Journals Online (AJOL)

    Collinearity of predictor variables is a severe problem in the least square regression analysis. It contributes to the instability of regression coefficients and leads to a wrong prediction accuracy. Despite these problems, studies are conducted with a large number of observed and derived variables linked with a response ...

  13. Changes in persistence, spurious regressions and the Fisher hypothesis

    DEFF Research Database (Denmark)

    Kruse, Robinson; Ventosa-Santaulària, Daniel; Noriega, Antonio E.

    Declining inflation persistence has been documented in numerous studies. When such series are analyzed in a regression framework in conjunction with other persistent time series, spurious regressions are likely to occur. We propose to use the coefficient of determination R2 as a test statistic to...

  14. Steganalysis using logistic regression

    Science.gov (United States)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-art 686-dimensional SPAM feature set, in three image sets.

  15. SEPARATION PHENOMENA LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

    Full Text Available This paper proposes an application of concepts about maximum likelihood estimation of the binomial logistic regression model to separation phenomena. Separation generates bias in the estimation, leads to different interpretations of the estimates under the different statistical tests (Wald, Likelihood Ratio and Score), and yields different estimates under the different iterative methods (Newton-Raphson and Fisher Score). The paper also presents an example that demonstrates the direct implications for the validation of the model and of its variables, and for the estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we briefly present the Firth correction for circumventing the separation phenomenon.

  16. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface......-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical...

  17. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  18. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  19. The Truth About Ballistic Coefficients

    OpenAIRE

    Courtney, Michael; Courtney, Amy

    2007-01-01

    The ballistic coefficient of a bullet describes how it slows in flight due to air resistance. This article presents experimental determinations of ballistic coefficients showing that the majority of bullets tested have their previously published ballistic coefficients exaggerated by 5-25% by the bullet manufacturers. These exaggerated ballistic coefficients lead to inaccurate predictions of long range bullet drop, retained energy and wind drift.

  20. Beta decay to the second 2+ excited state of 122Te

    International Nuclear Information System (INIS)

    Hayashi, Takeo; Yamada, Shigeru

    1976-01-01

    The first-forbidden beta transition in Sb-122 was studied by angular correlation experiments and beta spectra. Special precautions were taken when counting beta particles with energies lower than 750 keV in the beta-gamma angular correlation measurement. The sources of Sb-122 were obtained by irradiating enriched Sb-121 in the Kyoto University reactor. The reduced beta coefficient R(E) was obtained from the angular correlation function. The beta spectrum measurement was performed with a sector-type double-focusing beta-ray spectrometer. The R(E) values for the beta transitions were analyzed using the simplex method, as used by Manthuruthil and Poirier, to compare the angular correlation data with the exact formula given by Morita and Morita. The sets of nuclear matrix parameters thus obtained show that the condition for the cancellation effect is satisfied in the beta transition. (Kato, T.)

  1. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
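
    As a brief sketch of one member of the Lq family discussed above, the L1 (LASSO) case shrinks most coefficients exactly to zero; this uses scikit-learn on synthetic data and is only an illustration, not the authors' space-transformation algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
true_beta = np.zeros(20)
true_beta[:3] = [2.0, -1.5, 1.0]              # only three predictors actually matter
y = X @ true_beta + rng.normal(scale=0.5, size=200)

fit = Lasso(alpha=0.1).fit(X, y)              # L1 penalty shrinks most coefficients to zero
print("indices of non-zero coefficients:", np.flatnonzero(fit.coef_))
print("leading estimates:", np.round(fit.coef_[:5], 3))
```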

  2. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights perturbation scheme, the response perturbation scheme, the covariate perturbation scheme, and the within-variance perturbation scheme are explored. We introduce a method that simultaneously perturbs responses, covariates, and within-variances to obtain the local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Betting Against Beta

    DEFF Research Database (Denmark)

    Frazzini, Andrea; Heje Pedersen, Lasse

    We present a model with leverage and margin constraints that vary across investors and time. We find evidence consistent with each of the model’s five central predictions: (1) Since constrained investors bid up high-beta assets, high beta is associated with low alpha, as we find empirically for U.S. equities, 20 international equity markets, Treasury bonds, corporate bonds, and futures; (2) A betting-against-beta (BAB) factor, which is long leveraged low beta assets and short high-beta assets, produces significant positive risk-adjusted returns; (3) When funding constraints tighten, the return of the BAB factor is low; (4) Increased funding liquidity risk compresses betas toward one; (5) More constrained investors hold riskier assets.

  4. Roughing up Beta

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Li, Sophia Zhengzi; Todorov, Viktor

    Motivated by the implications from a stylized equilibrium pricing framework, we investigate empirically how individual equity prices respond to continuous, or "smooth," and jumpy, or "rough," market price moves, and how these different market price risks, or betas, are priced in the cross-section of expected returns. Based on a novel high-frequency dataset of almost one-thousand individual stocks over two decades, we find that the two rough betas associated with intraday discontinuous and overnight returns entail significant risk premiums, while the intraday continuous beta is not priced in the cross-section. An investment strategy that goes long stocks with high jump betas and short stocks with low jump betas produces significant average excess returns. These higher risk premiums for the discontinuous and overnight market betas remain significant after controlling for a long list of other firm characteristics...

  5. Beta adrenergic receptors in human cavernous tissue

    Energy Technology Data Exchange (ETDEWEB)

    Dhabuwala, C.B.; Ramakrishna, C.V.; Anderson, G.F.

    1985-04-01

    Beta adrenergic receptor binding was performed with /sup 125/I iodocyanopindolol on human cavernous tissue membrane fractions from normal tissue and transsexual procedures obtained postoperatively, as well as from postmortem sources. Isotherm binding studies on normal fresh tissues indicated that the receptor density was 9.1 fmol/mg with a KD of 23 pM. Tissue stored at room temperature for 4 to 6 hours, then at 4°C in saline solution for 19 to 20 hours before freezing, showed no significant changes in receptor density or affinity, and provided evidence for the stability of postmortem tissue obtained within the same time period. Beta receptor density of 2 cavernous preparations from transsexual procedures was not significantly different from normal control tissues, and showed that the high concentrations of estrogen received by these patients had no effect on beta adrenergic receptor density. Displacement of /sup 125/I iodocyanopindolol by 5 beta adrenergic agents demonstrated that 1-propranolol had the greatest affinity, followed by ICI 118,551, zinterol, metoprolol and practolol. When the results of these displacement studies were subjected to Scatfit nonlinear regression line analysis, a single binding site was described. Based on the relative potency of the selective beta adrenergic agents, it appears that these receptors were of the beta 2 subtype.

  6. Beta limits for ETF

    International Nuclear Information System (INIS)

    Helton, F.J.; Miller, R.L.

    1982-01-01

    ETF (Engineering Test Facility) one-dimensional transport simulations indicate that a volume-average beta of 4% is required for ignition. It is therefore important that theoretical beta limits, determined by requiring equilibria to be stable to all ideal modes, exceed 4%. This paper documents an ideal MHD analysis wherein it is shown that, with appropriate plasma cross-sectional shape and current profile optimization, operation near 5% is possible. The critical beta value, however, depends on the functional form used for ff', which suggests that higher critical betas could be achieved by directly optimizing the safety factor profile. (author)

  7. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality

  8. Investment Volatility: A Critique of Standard Beta Estimation and a Simple Way Forward

    OpenAIRE

    Chris Tofallis

    2011-01-01

    Beta is a widely used quantity in investment analysis. We review the common interpretations that are applied to beta in finance and show that the standard method of estimation - least squares regression - is inconsistent with these interpretations. We present the case for an alternative beta estimator which is more appropriate, as well as being easier to understand and to calculate. Unlike regression, the line fit we propose treats both variables in the same way. Remarkably, it provides a slo...

  9. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy. ... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes.

  10. Inbreeding coefficients and degree of consanguineous marriages in Spain: a review.

    Science.gov (United States)

    Fuster, Vicente; Colantonio, Sonia Edith

    2003-01-01

    The contribution of consanguineous marriages corresponding to uncle-niece or aunt-nephew (C12), first cousin (C22), first cousin once removed (C23), and second cousin (C33) to the inbreeding coefficient (alpha) was analyzed from a sample of Spanish areas and periods. Multiple regressions were performed taking as independent variables the different degrees of consanguinity previously selected (C12, C22, C23, and C33) and as dependent variable the inbreeding coefficient (alpha). According to the results obtained for any degree and period, rural frequencies always surpass urban. However, the pattern is similar in both areas. In the period when consanguinity was more elevated (1890-1929) the C22/C33 ratio increased. Its variation is not due to C22 and C33 changing in the same way. In rural areas, this ratio surpasses the expected value by a factor of 2-3, but in urban areas it was 7-10 times larger, in some cases due to migration. While in rural Spain the C33 frequency was approximately 1.5 times C22, in cities C22 was 1.5 times C33. The best fit among the various types of consanguineous matings and alpha involves a linear relationship. Regardless of the number of variables contributing significantly to alpha, C22 matings are always present. Moreover, their standardized (beta) coefficients are the highest. The above indicates that this consanguineous relationship conditions the inbreeding coefficient the most. In the period of greater consanguinity, close relationships, uncle-niece (C12) and first cousin once removed (C23), make a significant contribution to alpha. In rural Spain second cousins (C33) always significantly determined alpha; however, in cities the inbreeding variation was mainly due to C12 and C23. Copyright 2003 Wiley-Liss, Inc.
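
    A hedged numpy sketch of how standardized (beta) coefficients like those reported above are obtained: z-score the outcome and every predictor, then read the OLS slopes. The variable names follow the abstract, but the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
C12, C22, C23, C33 = rng.gamma(2.0, 1.0, size=(4, n))     # synthetic mating frequencies
alpha = 0.002 * C22 + 0.001 * C33 + 0.0005 * C12 + rng.normal(0, 0.0005, n)

def standardized_betas(y, *xs):
    """OLS slopes after z-scoring the outcome and every predictor."""
    Z = np.column_stack([(x - x.mean()) / x.std() for x in xs])
    yz = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Z, yz, rcond=None)
    return coef

for name, b in zip(["C12", "C22", "C23", "C33"],
                   standardized_betas(alpha, C12, C22, C23, C33)):
    print(f"standardized beta for {name}: {b:+.3f}")       # C22 dominates by construction
```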

  11. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  12. Quantum Non-Markovian Langevin Equations and Transport Coefficients

    International Nuclear Information System (INIS)

    Sargsyan, V.V.; Antonenko, N.V.; Kanokov, Z.; Adamian, G.G.

    2005-01-01

    Quantum diffusion equations featuring explicitly time-dependent transport coefficients are derived from generalized non-Markovian Langevin equations. Generalized fluctuation-dissipation relations and analytic expressions for calculating the friction and diffusion coefficients in nuclear processes are obtained. The asymptotic behavior of the transport coefficients and correlation functions for a damped harmonic oscillator that is linearly coupled in momentum to a heat bath is studied. The coupling to a heat bath in momentum is responsible for the appearance of the diffusion coefficient in coordinate. The problem of regression of correlations in quantum dissipative systems is analyzed

  13. Use of beta-blockers and risk of serious upper gastrointestinal bleeding

    DEFF Research Database (Denmark)

    Reilev, Mette; Damkier, Per; Rasmussen, Lotte

    2017-01-01

    Background: Some studies indicate a reduced risk of serious upper gastrointestinal bleeding (UGIB) for users of beta-blockers, but the association remains to be confirmed in larger studies and characterized with respect to differences among beta-blockers. We aimed to assess whether beta-blocker use...... and adjusted odds ratios (ORs) of the association between current beta-blocker use and the risk of UGIB by using conditional logistic regression and further stratified by selective and non-selective beta-blockers, respectively. Results: We identified 3571 UGIB cases and 35,582 controls. Use of beta-blockers...... was not found to be associated with a decreased risk of UGIB (adjusted OR 1.10; 95% CI: 1.00-1.21). The association remained neutral after stratification by selective and non-selective beta-blockers, and by single beta-blocker substances. Similarly, we found no association between current beta-blocker use...

  14. High beta tokamaks

    International Nuclear Information System (INIS)

    Dory, R.A.; Berger, D.P.; Charlton, L.A.; Hogan, J.T.; Munro, J.K.; Nelson, D.B.; Peng, Y.K.M.; Sigmar, D.J.; Strickler, D.J.

    1978-01-01

    MHD equilibrium, stability, and transport calculations are made to study the accessibility and behavior of ''high beta'' tokamak plasmas in the range β approximately 5 to 15 percent. For next generation devices, beta values of at least 8 percent appear to be accessible and stable if there is a conducting surface nearby

  15. Sorting out Downside Beta

    NARCIS (Netherlands)

    G.T. Post (Thierry); P. van Vliet (Pim); S.D. Lansdorp (Simon)

    2009-01-01

    textabstractDownside risk, when properly defined and estimated, helps to explain the cross-section of US stock returns. Sorting stocks by a proper estimate of downside market beta leads to a substantially larger cross-sectional spread in average returns than sorting on regular market beta. This

  16. Betting against Beta

    DEFF Research Database (Denmark)

    Frazzini, Andrea; Heje Pedersen, Lasse

    2014-01-01

    We present a model with leverage and margin constraints that vary across investors and time. We find evidence consistent with each of the model's five central predictions: (1) Because constrained investors bid up high-beta assets, high beta is associated with low alpha, as we find empirically...

  17. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Power output and efficiency of beta-emitting microspheres

    International Nuclear Information System (INIS)

    Cheneler, David; Ward, Michael

    2015-01-01

    Current standard methods to calculate the dose of radiation emitted during medical applications by beta-minus emitting microspheres rely on an over-simplistic formalism. This formalism is a function of the average activity of the radioisotope used and the physiological dimensions of the patient only. It neglects the variation in energy of the emitted beta particle due to self-attenuation, or self-absorption, effects related to the finite size of the sphere. Here it is assumed the sphere is comprised of a pure radioisotope with beta particles being emitted isotropically throughout the material. The full initial possible kinetic energy distribution of a beta particle is taken into account as well as the energy losses due to scattering by other atoms in the microsphere and bremsstrahlung radiation. By combining Longmire’s theory of the mean forward range of charged particles and the Rayleigh distribution to take into account the statistical nature of scattering and energy straggling, the linear attenuation, or self-absorption, coefficient for beta-emitting radioisotopes has been deduced. By analogy with gamma radiation transport in spheres, this result was used to calculate the rate of energy emitted by a beta-emitting microsphere and its efficiency. Comparisons to standard point dose kernel formulations generated using Monte Carlo data show the efficacy of the proposed method. Yttrium-90 is used as a specific example throughout, as a medically significant radioisotope, frequently used in radiation therapy for treating cancer. - Highlights: • Range-energy relationship for the beta particles in yttrium-90 is calculated. • Formalism for the semi-analytical calculation of self-absorption coefficients. • Energy-dependent self-absorption coefficient calculated for yttrium-90. • Flux rate of beta particles from a self-attenuating radioactive sphere is shown. • The efficiency of beta particle emitting radioactive microspheres is calculated

  19. Pancreatic enzyme replacement therapy in cystic fibrosis: dose, variability and coefficient of fat absorption.

    Science.gov (United States)

    Calvo-Lerma, Joaquim; Martínez-Barona, Sandra; Masip, Etna; Fornés, Victoria; Ribes-Koninckx, Carmen

    2017-10-01

    Pancreatic enzyme replacement therapy (PERT) remains a backbone in the nutritional treatment of cystic fibrosis. Currently, there is a lack of an evidence-based tool that allows dose adjustment. To date, no studies have found an association between PERT dose and fat absorption. Therefore, the aim of the study was to assess the influence of both the PERT dose and the variability in this dose on the coefficient of fat absorption (CFA). This is a retrospective longitudinal study of 16 pediatric patients (192 food records) with three consecutive visits to the hospital over a twelve-month period. Dietary fat intake and PERT were assessed via a four-day food record and fat content in stools was determined by means of a three-day stool sample collection. A beta regression model was built to explain the association between the CFA and the interaction between the PERT dose (lipase units [LU]/g dietary fat) and the variability in the PERT dose (standard deviation [SD]). The coefficient of fat absorption increased with the PERT dose when the variability in the dose was low. In contrast, even at the highest PERT dose values, the CFA decreased when the variability was high. The confidence interval suggested an association, although the analysis was not statistically significant. The variability in the PERT dose adjustment should be taken into consideration when performing studies on PERT efficiency. A clinical goal should be the maintenance of a constant PERT dose rather than trying to obtain an optimal value.

  20. Genetics Home Reference: beta thalassemia

    Science.gov (United States)

    Description: Beta thalassemia is a blood disorder that reduces the production ...

  1. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    Science.gov (United States)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.

  2. On the Kendall Correlation Coefficient

    OpenAIRE

    Stepanov, Alexei

    2015-01-01

    In the present paper, we first discuss the Kendall rank correlation coefficient. In continuous case, we define the Kendall rank correlation coefficient in terms of the concomitants of order statistics, find the expected value of the Kendall rank correlation coefficient and show that the later is free of n. We also prove that in continuous case the Kendall correlation coefficient converges in probability to its expected value. We then propose to consider the expected value of the Kendall rank ...

  3. Rapid synthesis of beta zeolites

    Science.gov (United States)

    Fan, Wei; Chang, Chun -Chih; Dornath, Paul; Wang, Zhuopeng

    2015-08-18

    The invention provides methods for rapidly synthesizing heteroatom containing zeolites including Sn-Beta, Si-Beta, Ti-Beta, Zr-Beta and Fe-Beta. The methods for synthesizing heteroatom zeolites include using well-crystalline zeolite crystals as seeds and using a fluoride-free, caustic medium in a seeded dry-gel conversion method. The Beta zeolite catalysts made by the methods of the invention catalyze both isomerization and dehydration reactions.

  4. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five-year data (1998-2002) on average humidity, rainfall, and maximum and minimum temperatures. Regression analysis time series (RATS) relationships were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) correlations were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year rainfall and humidity data, which showed that the variances in the rainfall data were not homogeneous, while those in the humidity data were. Our results on regression and regression analysis time series show the best fit for prediction modelling of the climatic data of Quetta, Pakistan. (author)
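
    A minimal sketch of the core step described above, polynomial regression on a monthly climate series with the coefficient of determination as the goodness-of-fit measure (synthetic data, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(60)                                          # 60 months of observations
humidity = 40 + 1.5 * t - 0.02 * t**2 + rng.normal(0, 2, t.size)   # synthetic series

coeffs = np.polyfit(t, humidity, deg=2)                    # quadratic trend by least squares
fitted = np.polyval(coeffs, t)

ss_res = np.sum((humidity - fitted) ** 2)
ss_tot = np.sum((humidity - humidity.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                   # coefficient of determination
print(f"R^2 of the quadratic trend: {r2:.3f}")
```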

  5. Beta Neutrino Correlation Measurement with Trapped Radioactive Ions

    International Nuclear Information System (INIS)

    Velten, Ph.; Ban, G.; Durand, D.; Flechard, X.; Lienard, E.; Mauger, F.; Naviliat-Cuncic, O.; Mery, A.; Rodriguez, D.; Thomas, J.-C.

    2010-01-01

    The beta-neutrino angular correlation coefficient provides a sensitive observable to search for physics beyond the standard electroweak model in nuclear beta decay. We address here the measurement of this parameter in the pure Gamow-Teller transition of 6 He. A deviation from the standard model prediction would indicate the existence of tensor like couplings, possibly mediated by new bosons like leptoquarks. The aim of the LPCTrap experiment is to measure this coefficient with a statistical uncertainty of 0.5% using a novel transparent Paul trap. The status of the experiment is briefly presented along with the work in progress.

  6. The Kerr nonlinearity of the beta-barium borate crystal

    DEFF Research Database (Denmark)

    Bache, Morten; Guo, Hairun; Zhou, Binbin

    2013-01-01

    A popular crystal for ultrafast cascading experiments is beta-barium-borate (β-BaB2O4, BBO). It has a decent quadratic nonlinear coefficient, and because the crystal is anisotropic it can be birefringence phase-matched for type I (oo → e) second-harmonic generation (SHG). For femtosecond experime...

  7. SDE based regression for random PDEs

    KAUST Repository

    Bayer, Christian

    2016-01-01

    A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.
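
    A toy numpy sketch of the idea, not the paper's algorithm: by the Feynman-Kac formula, u(x) = E[g(X_T)] for dX = sigma dW started at x; simulating one path per sample point and regressing the terminal values on a polynomial basis in x recovers an approximation of the global solution (for g(x) = x^2 the exact answer is x^2 + sigma^2 T, so the fit can be checked):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, T, n_steps, n_samples = 0.5, 1.0, 50, 5000
dt = T / n_steps
g = lambda x: x ** 2                                   # terminal condition of the toy PDE

x0 = rng.uniform(-1.0, 1.0, n_samples)                 # sample points in the domain
X = x0.copy()
for _ in range(n_steps):                               # Euler-Maruyama paths of dX = sigma dW
    X += sigma * np.sqrt(dt) * rng.normal(size=n_samples)
Y = g(X)                                               # noisy pointwise evaluations of u(x0)

coef = np.polyfit(x0, Y, deg=2)                        # the Monte Carlo regression step
print("fitted u(x) coefficients:", np.round(coef, 3))  # exact answer: x^2 + sigma^2 * T = x^2 + 0.25
```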

  8. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
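
    A hedged illustration of the general idea (not the authors' exact formulation): regressing a long signal onto a small set of fixed Gaussian kernels reduces it to a few coefficients that can serve as features; the voltammogram below is simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
v = np.linspace(-1, 1, 500)                                   # applied potential sweep
current = np.exp(-((v - 0.2) ** 2) / 0.01) + 0.05 * rng.normal(size=v.size)  # toy voltammogram

centers = np.linspace(-1, 1, 15)                              # fixed kernel centres
h = 0.15                                                      # bandwidth
Phi = np.exp(-((v[:, None] - centers[None, :]) ** 2) / (2 * h ** 2))

coef, *_ = np.linalg.lstsq(Phi, current, rcond=None)          # 15 coefficients summarise 500 samples
print("feature vector:", np.round(coef, 2))
```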

  9. SDE based regression for random PDEs

    KAUST Repository

    Bayer, Christian

    2016-01-06

    A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.

  10. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and deduces its parameters with the ordinary least squares estimate. A significance test method for the polynomial regression function is then derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accordance with the authors' real work. (authors)
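
    A minimal sketch of the significance test described, assuming numpy and scipy and using simulated decay-heat data: fit a polynomial by ordinary least squares and test the overall regression with the usual F statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 40)                                  # time, synthetic
power = 5.0 * np.exp(-0.2 * t) + rng.normal(0, 0.05, t.size)   # decay heating power per kilogram

deg = 2
coeffs = np.polyfit(t, power, deg)                          # polynomial fit by least squares
fitted = np.polyval(coeffs, t)

n, p = t.size, deg + 1
ss_reg = np.sum((fitted - power.mean()) ** 2)
ss_res = np.sum((power - fitted) ** 2)
F = (ss_reg / deg) / (ss_res / (n - p))                     # overall F test of the regression
p_value = stats.f.sf(F, deg, n - p)
print(f"F = {F:.1f}, p = {p_value:.2e}")
```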

  11. Beta particle measurement fundamentals

    International Nuclear Information System (INIS)

    Alvarez, J.L.

    1986-01-01

    The necessary concepts for understanding beta particle behavior are stopping power, range, and scattering. Dose as a consequence of beta particle interaction with tissue can be derived and explained by these concepts. Any calculations of dose, however, assume or require detailed knowledge of the beta spectrum at the tissue depth of calculation. A rudimentary knowledge of the incident spectrum can be of use in estimating dose, interpreting dose-measuring devices and designing protection. The stopping power and range based on the CSDA (continuous slowing down approximation) will give a conservative estimate in cases of protection design, as scattering will reduce the range. Estimates of dose may be low because scattering effects were neglected.

  12. Estimation of added-mass and damping coefficients of a tethered spherical float using potential flow theory

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Chandramohan, P.; Sastry, J.S.; Narasimhan, S.

    Added-mass (alpha) and damping coefficients (beta) of a tethered spherical float, undergoing oscillatory motion in sinusoidal waves, have been derived from the motion generated velocity potential for one degree-of-freedom (surge) using potential...

  13. Assessing risk factors for periodontitis using regression

    Science.gov (United States)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we can assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. The multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL). The regression coefficients are thus obtained together with the respective p-values from the significance tests. The classification of a case (individual) adopted in the logistic model was the extent of the destruction of periodontal tissues, defined by an Attachment Loss greater than or equal to 4 mm in 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.

  14. Neutrinoless double beta decay

    Indian Academy of Sciences (India)

    2012-10-06

    Oct 6, 2012 ... Anyhow, the 'multi-isotope' ansatz is needed to compensate for matrix element ... The necessary half-life requirement to touch this ... site energy depositions (like double beta decay) and multiple site interactions (most of.

  15. Beta-Carotene

    Science.gov (United States)

    ... disease (COPD). It is also used to improve memory and muscle strength. Some people use beta-carotene ... to reduce the chance of death and night blindness during pregnancy, as well as diarrhea and fever ...

  16. Double beta decay: experiments

    International Nuclear Information System (INIS)

    Fiorini, Ettore

    2006-01-01

    The results obtained so far and those of the running experiments on neutrinoless double beta decay are reviewed. The plans for second generation experiments, the techniques to be adopted and the expected sensitivities are compared and discussed

  17. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
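
    A hedged sketch of the bounded-regression idea (not the authors' source code): ordinary least squares for the alpha weights, with box constraints imposed inside the solver; the data and the bound values below are illustrative, and scipy is assumed:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(7)
n_obs, n_alphas = 250, 10
A = rng.normal(size=(n_obs, n_alphas))                      # historical alpha-stream returns
b = A @ rng.uniform(0.0, 0.2, n_alphas) + rng.normal(0, 0.5, n_obs)   # combined target

unconstrained = np.linalg.lstsq(A, b, rcond=None)[0]
bounded = lsq_linear(A, b, bounds=(0.0, 0.15))              # weights forced into [0, 0.15]

print("unconstrained weights:", np.round(unconstrained, 3))
print("bounded weights:      ", np.round(bounded.x, 3))
```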

  18. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  19. {beta} - amyloid imaging probes

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae Min [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2007-04-15

    Imaging the distribution of {beta}-amyloid plaques in Alzheimer's disease is very important for early and accurate diagnosis. Early trials for imaging {beta}-amyloid plaques used radiolabeled peptides, which can only be applied to peripheral {beta}-amyloid plaques due to limited penetration through the blood brain barrier (BBB). Congo red or Chrysamine G derivatives were labeled with Tc-99m for imaging {beta}-amyloid plaques in the brains of Alzheimer patients, without success due to problems with BBB penetration. Thioflavin T derivatives provided a breakthrough for {beta}-amyloid imaging in vivo, and the benzothiazole derivative [C-11]6-OH-BTA-1 brought great success. Many other benzothiazole, benzoxazole, benzofuran, imidazopyridine, and styrylbenzene derivatives have been labeled with F-18 and I-123 to improve the imaging quality. However, [C-11]6-OH-BTA-1 still remains the best, and the short half-life of C-11 limits the wide distribution of this agent. It is therefore still necessary to develop a Tc-99m, F-18 or I-123 labeled agent for {beta}-amyloid imaging.

  20. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
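
    A minimal sketch of the method of least squares for simple linear regression, where the slope and intercept have closed-form solutions; the data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 30)                     # single predictor
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, 30)     # single outcome with noise

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(f"fitted line: y = {intercept:.2f} + {slope:.2f} x")
```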

  1. Analysis of quantile regression as alternative to ordinary least squares

    OpenAIRE

    Ibrahim Abdullahi; Abubakar Yahaya

    2015-01-01

    In this article, an alternative to ordinary least squares (OLS) regression, based on the analytical solution in the Statgraphics software, is considered; this alternative is none other than the quantile regression (QR) model. We also present a goodness-of-fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness-of-fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is well ...
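
    A small sketch of the contrast drawn above, comparing OLS with median (LAD) regression on simulated data with a few gross outliers; statsmodels is assumed, and this is not the Statgraphics procedure described in the article:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)
y[:5] += 40                                    # a few gross outliers

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
lad_fit = sm.QuantReg(y, X).fit(q=0.5)         # median regression = least absolute deviations

print("OLS slope:", round(float(ols_fit.params[1]), 2))   # pulled toward the outliers
print("LAD slope:", round(float(lad_fit.params[1]), 2))   # close to the true value 2.0
```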

  2. Quadrature formulas for Fourier coefficients

    KAUST Repository

    Bojanov, Borislav

    2009-09-01

    We consider quadrature formulas of high degree of precision for the computation of the Fourier coefficients in expansions of functions with respect to a system of orthogonal polynomials. In particular, we show the uniqueness of a multiple node formula for the Fourier-Tchebycheff coefficients given by Micchelli and Sharma and construct new Gaussian formulas for the Fourier coefficients of a function, based on the values of the function and its derivatives. © 2009 Elsevier B.V. All rights reserved.

  3. Coefficient Alpha: A Reliability Coefficient for the 21st Century?

    Science.gov (United States)

    Yang, Yanyun; Green, Samuel B.

    2011-01-01

    Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…

  4. Coefficient estimates of negative powers and inverse coefficients for ...

    Indian Academy of Sciences (India)

    and the inequality is sharp for the inverse of the Koebe function k(z) = z/(1 − z)^2. An alternative approach to the inverse coefficient problem for functions in the class S has been investigated by Schaeffer and Spencer [27] and FitzGerald [6]. Although the inverse coefficient problem for the class S has been completely solved ...

  5. Measuring of heat transfer coefficient

    DEFF Research Database (Denmark)

    Henningsen, Poul; Lindegren, Maria

    Subtask 3.4 Measuring of heat transfer coefficient Subtask 3.4.1 Design and setting up of tests to measure heat transfer coefficient Objective: Complementary testing methods together with the relevant experimental equipment are to be designed by the two partners involved in order to measure...... the heat transfer coefficient for a wide range of interface conditions in hot and warm forging processes. Subtask 3.4.2 Measurement of heat transfer coefficient The objective of subtask 3.4.2 is to determine heat transfer values for different interface conditions reflecting those typically operating in hot...

  6. Estimation of octanol/water partition coefficients using LSER parameters

    Science.gov (United States)

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.

  7. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
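
    A minimal sketch of the MMR model referred to above: the moderation effect appears as the coefficient of the product term x*z in an ordinary least squares fit; data are simulated and statsmodels is assumed (this is not the two-level NML estimator the paper proposes):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 500
x = rng.normal(size=n)                         # focal predictor
z = rng.normal(size=n)                         # moderator
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()                       # least squares estimation of the MMR model
print(np.round(fit.params, 2))                 # last coefficient estimates the moderation effect 0.4
```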

  8. Biostatistics Series Module 6: Correlation and Linear Regression.

    Science.gov (United States)

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P value; a confidence interval for the correlation coefficient can also be calculated to give an idea of the correlation in the population. The value r^2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
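
    A short sketch of the quantities described above using scipy on simulated data: Pearson's r, Spearman's rho, the least-squares line, and r^2 as the coefficient of determination:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.normal(50, 10, 80)
y = 0.6 * x + rng.normal(0, 5, 80)

r, p_r = stats.pearsonr(x, y)                  # Pearson's r and its P value
rho, p_rho = stats.spearmanr(x, y)             # Spearman's rank correlation
line = stats.linregress(x, y)                  # least-squares line y = a + b x

print(f"Pearson r = {r:.2f} (p = {p_r:.1e}), Spearman rho = {rho:.2f}")
print(f"y = {line.intercept:.2f} + {line.slope:.2f} x, r^2 = {line.rvalue ** 2:.2f}")
```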

  9. Rapid and simultaneous determination of lycopene and beta-carotene contents in tomato juice by infrared spectroscopy.

    Science.gov (United States)

    De Nardo, Thais; Shiroma-Kian, Cecilia; Halim, Yuwana; Francis, David; Rodriguez-Saona, Luis E

    2009-02-25

    The rapid quantification of lycopene and beta-carotene in tomato juices by attenuated total reflectance (ATR) infrared spectroscopy combined with multivariate analysis was evaluated. Two sample preparation methods were compared: a direct measurement of the tomato paste and an extraction method using hexane to isolate carotenoids. HPLC was used as the reference method. Cross-validated (leave-one-out) partial least-squares regression (PLSR) was used to create calibration models to predict these phytonutrient concentrations in blind test samples. The infrared spectra showed unique marker bands at 957 and 968 cm(-1) for lycopene and beta-carotene, respectively. Multivariate analysis of the infrared spectral data gave correlation coefficients (r values) of >0.9 between the ATR-IR predicted and HPLC reference values, and standard errors of cross-validation (SECV) of 0.5 and 0.04 mg/100 g of juice for lycopene and beta-carotene, respectively. ATR-IR could provide the tomato industry with a simple, rapid, and high-throughput technique for the determination of tomato quality.
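
    A hedged sketch of the chemometric workflow described: partial least-squares regression maps many spectral variables onto a reference concentration. The spectra below are simulated rather than ATR-IR measurements, and scikit-learn is assumed:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(12)
n_samples, n_variables = 60, 300
lycopene = rng.uniform(3, 12, n_samples)                       # reference values, mg/100 g
band = np.exp(-np.linspace(-3, 3, n_variables) ** 2)           # one synthetic marker band
X = np.outer(lycopene, band) + rng.normal(0, 0.05, (n_samples, n_variables))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, lycopene, cv=5).ravel()       # cross-validated predictions
print("r between predicted and reference:", round(np.corrcoef(pred, lycopene)[0, 1], 3))
```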

  10. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  11. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.

  12. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and

  13. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACA

    2011-12-01

    Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking compound structure with activity. A set of the most common ordnance compounds found at naval facilities, with a limited data set with a range of toxicities on the aquatic ecosystem and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit a Beta distribution, 40% a Generalized Pareto distribution, and 12% a Pert distribution.
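
    A brief sketch of the distribution-fitting step on stand-in data (values in (0, 1) drawn from a Beta law; scipy assumed), fitting a Beta distribution to a collection of correlation coefficients and checking the fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
r_values = rng.beta(a=8, b=3, size=200)               # stand-in correlation coefficients in (0, 1)

a, b, loc, scale = stats.beta.fit(r_values, floc=0, fscale=1)   # fix the support to [0, 1]
ks = stats.kstest(r_values, "beta", args=(a, b, loc, scale))
print(f"fitted Beta(a = {a:.2f}, b = {b:.2f}), Kolmogorov-Smirnov p = {ks.pvalue:.2f}")
```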

  14. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
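
    A small simulation in the spirit of the commentary (synthetic data, statsmodels assumed): with skewed, non-normal errors and a large sample, the empirical coverage of the 95% confidence interval for the slope stays close to the nominal level:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(14)
n, n_sim, covered = 2000, 500, 0
for _ in range(n_sim):
    x = rng.uniform(0, 10, n)                      # e.g., years since diagnosis
    err = rng.exponential(1.0, n) - 1.0            # skewed, mean-zero errors
    y = 50.0 - 0.5 * x + err                       # true slope = -0.5
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    lo, hi = fit.conf_int()[1]                     # 95% confidence interval for the slope
    covered += (lo <= -0.5 <= hi)
print("empirical coverage:", covered / n_sim)      # close to 0.95 despite the non-normal errors
```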

  15. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    Full Text Available We explore Bayesian inference of a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  16. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider the Gaussian kernel as the weighting function. The GWR uses the diagonal matrix resulting from the Gaussian kernel function as a weighting function in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. In this research, the coefficient of determination R2 is 68.64%. There are two categories of districts, which are influenced by different sets of significant factors.
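
    A minimal sketch of the local weighted least-squares fit underlying GWR, with a Gaussian kernel and a fixed bandwidth; the coordinates, covariates, and bandwidth are synthetic stand-ins, not the Central Java data.

```python
import numpy as np

def gwr_fit(coords, X, y, bandwidth):
    """Fit a separate weighted least-squares regression at each location,
    with Gaussian kernel weights based on the distance to that location."""
    n = coords.shape[0]
    Xd = np.column_stack([np.ones(n), X])            # add an intercept
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas                                     # one coefficient vector per location

# toy example with hypothetical locations and covariates
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=50)
local_coefficients = gwr_fit(coords, X, y, bandwidth=2.0)
```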

  17. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g., SRC and CRC) show a great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g., the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
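
    A minimal sketch of the generalized Tikhonov regularization that GRR builds on, in its generic least-squares form; the prior information learned by K Nearest Neighbors and the iterative pixel-weight updates of the paper are not reproduced, and all data here are synthetic.

```python
import numpy as np

def generalized_tikhonov(X, y, Gamma, lam=1.0):
    """Solve min_b ||y - X b||^2 + lam * ||Gamma b||^2 (generalized Tikhonov)."""
    A = X.T @ X + lam * (Gamma.T @ Gamma)
    return np.linalg.solve(A, X.T @ y)

# ridge regression is recovered as the special case Gamma = identity
rng = np.random.default_rng(0)
X, y = rng.normal(size=(40, 10)), rng.normal(size=40)
b = generalized_tikhonov(X, y, np.eye(10), lam=0.5)
```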

  18. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  19. Prediction of aged red wine aroma properties from aroma chemical composition. Partial least squares regression models.

    Science.gov (United States)

    Aznar, Margarita; López, Ricardo; Cacho, Juan; Ferreira, Vicente

    2003-04-23

    Partial least squares regression (PLSR) models able to predict some of the wine aroma nuances from their chemical composition have been developed. The aromatic sensory characteristics of 57 Spanish aged red wines were determined by 51 experts from the wine industry. The individual descriptions given by the experts were recorded, and the frequency with which a sensory term was used to define a given wine was taken as a measurement of its intensity. The aromatic chemical composition of the wines was determined by already published gas chromatography (GC)-flame ionization detector and GC-mass spectrometry methods. In all, 69 odorants were analyzed. Both matrices, the sensory and chemical data, were simplified by grouping and rearranging correlated sensory terms or chemical compounds and by the exclusion of secondary aroma terms or of weak aroma chemicals. Finally, models were developed for 18 sensory terms and 27 chemicals or groups of chemicals. Satisfactory models, explaining more than 45% of the original variance, could be found for nine of the most important sensory terms (wood-vanillin-cinnamon, animal-leather-phenolic, toasted-coffee, old wood-reduction, vegetal-pepper, raisin-flowery, sweet-candy-cacao, fruity, and berry fruit). For this set of terms, the correlation coefficients between the measured and predicted Y (determined by cross-validation) ranged from 0.62 to 0.81. Models confirmed the existence of complex multivariate relationships between chemicals and odors. In general, pleasant descriptors were positively correlated to chemicals with pleasant aroma, such as vanillin, beta-damascenone, or (E)-beta-methyl-gamma-octalactone, and negatively correlated to compounds showing less favorable odor properties, such as 4-ethyl and vinyl phenols, 3-(methylthio)-1-propanol, or phenylacetaldehyde.
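
    A minimal sketch of a partial least squares regression of sensory scores on chemical composition, using scikit-learn with randomly generated stand-in matrices; the dimensions mirror the abstract (57 wines, 27 chemical groups, 18 sensory terms), but the two-component choice and the data are purely illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(57, 27))   # chemical predictors (wines x compound groups)
Y = rng.normal(size=(57, 18))   # sensory term frequencies

pls = PLSRegression(n_components=2)
Y_cv = cross_val_predict(pls, X, Y, cv=5)   # cross-validated predictions
# correlation between measured and predicted values for each sensory term
r = [np.corrcoef(Y[:, j], Y_cv[:, j])[0, 1] for j in range(Y.shape[1])]
```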

  20. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  1. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100].

  2. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100].

  3. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  4. Sabine absorption coefficients to random incidence absorption coefficients

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2014-01-01

    into random incidence absorption coefficients for porous absorbers are investigated. Two optimization-based conversion methods are suggested: the surface impedance estimation for locally reacting absorbers and the flow resistivity estimation for extendedly reacting absorbers. The suggested conversion methods...

  5. Labelling of beta-endorphin (beta-END) and beta-lipotropin (beta-LPH) by 125I

    Energy Technology Data Exchange (ETDEWEB)

    Deby-Dupont, G.; Joris, J.; Franchimont, P. (Universite de Liege (Belgique)); Reuter, A.M.; Vrindts-Gevaert, Y. (Institut des Radioelements, Fleurus (Belgique))

    1983-01-01

    5 µg of human beta-endorphin were labelled with 2 mCi 125I by the chloramine T technique. After two gel filtrations on Sephadex G-15 and on Sephadex G-50 in phosphate buffer with EDTA, Trasylol and mercaptoethanol, a pure tracer was obtained with a specific activity of about 150 µCi/µg. Kept at +4 °C, the tracer remained usable for 30 days without loss of immunoreactivity. Labelling with lactoperoxidase and the use of another gel filtration method (filtration on Aca 202) gave a 125I beta-END tracer with the same immunoreactivity. The binding of this tracer to the antibody of an anti-beta-END antiserum diluted at 1/8000 was 32%, with a non-specific binding of 2%. 5 µg of human beta-lipotropin were labelled with 0.5 mCi 125I by the lactoperoxidase method. After two gel filtrations on Sephadex G-25 and on Sephadex G-75 in phosphate buffer with EDTA, Trasylol and mercaptoethanol, a pure tracer with a specific activity of 140 µCi/µg was obtained. It remained usable for 30 days when kept at +4 °C. Gel filtration on Aca 202 did not give good purification, while gel filtration on Aca 54 was good but slower than on Sephadex G-75. The binding to antibody in the absence of unlabelled beta-LPH was 32% for an anti-beta-LPH antiserum diluted at 1/4000. The non-specific binding was 2.5%.

  6. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

  7. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  8. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
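
    A minimal sketch of the three slope estimators compared above for a single predictor; the orthogonal slope assumes equal error variances in both variables (the standard special case of Deming regression), and the variable names are illustrative.

```python
import numpy as np

def slopes(x, y):
    """Return the OLS, orthogonal, and geometric mean regression slopes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b_ols = sxy / sxx                                          # errors in y only
    b_orth = ((syy - sxx) + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    b_gmr = np.sign(sxy) * np.sqrt(syy / sxx)                  # geometric mean regression
    return b_ols, b_orth, b_gmr
```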

  10. Plasma beta HCG determination

    International Nuclear Information System (INIS)

    Amaral, L.B.D.; Pinto, J.C.M.; Linhares, E.; Linhares, Estevao

    1981-01-01

    There are three important indications for the early diagnosis of pregnancy through the determination of the beta sub-unit of chorionic gonadotrophin using radioimmunoassay: 1) the patient's or doctor's anxiety to discover the problem; 2) when it is necessary to employ diagnostic or treatment procedures liable to affect the ovum; and 3) in the differential diagnosis of amenorrhoea, uterine hemorrhage and abdominal tumors. Other uses are the diagnosis of missed abortion, and the diagnosis and follow-up of choriocarcinoma. The authors studied 200 determinations of plasma beta-HCG, considering the main difficulties occurring in the clinical use of this relevant laboratory tool in current Obstetrics. (author) [pt

  11. Relation between the 2νββ and 0νββ nuclear matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Petr [Kellogg Radiation Laboratory, Caltech, Pasadena, CA 91125 (United States); Simkovic, Fedor [Department of Nuclear Physics and Biophysics, Comenius University, Mlynska dolina F1, SK-84248 Bratislava (Slovakia)

    2011-12-16

    A formal relation between the GT part of the nuclear matrix elements M_GT^(0ν) of 0νββ decay and the closure matrix elements M_cl^(2ν) of 2νββ decay is established. This relation is based on the integral representation of these quantities in terms of their dependence on the distance r between the two nucleons undergoing transformation. We also discuss the difficulties in determining the correct values of the closure 2νββ decay matrix elements.

  12. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  13. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    Science.gov (United States)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580, and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with the skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.

  14. Probabilistic optimization of safety coefficients

    International Nuclear Information System (INIS)

    Marques, M.; Devictor, N.; Magistris, F. de

    1999-01-01

    This article describes a reliability-based method for the optimization of safety coefficients defined and used in design codes. The purpose of the optimization is to determine the partial safety coefficients which minimize an objective function for sets of components and loading situations covered by a design rule. This objective function is a sum of distances between the reliability of the components designed using the safety coefficients and a target reliability. The advantage of this method is shown on the examples of the reactor vessel, a vapour pipe and the safety injection circuit. (authors)

  15. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
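
    A minimal sketch of the recursive least squares update written as a Kalman filter for a constant state (the regression coefficients) observed through changing regressors, with unit observation noise and a diffuse initial covariance; the data are synthetic.

```python
import numpy as np

def rls(X, y, delta=1e3):
    """Recursive least squares: Kalman filtering of a constant state beta
    observed through y_t = x_t' beta + noise."""
    n, p = X.shape
    beta = np.zeros(p)
    P = delta * np.eye(p)                    # large initial covariance (diffuse prior)
    for t in range(n):
        x = X[t]
        k = P @ x / (1.0 + x @ P @ x)        # Kalman gain (unit observation noise)
        beta = beta + k * (y[t] - x @ beta)
        P = P - np.outer(k, x @ P)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 2.0, -3.0]) + rng.normal(size=200)
print(rls(X, y))                             # close to the batch OLS solution
```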

  16. Deriving proper uniform priors for regression coefficients, Parts I, II, and III

    NARCIS (Netherlands)

    van Erp, H.R.N.; Linger, R.O.; van Gelder, P.H.A.J.M.

    2017-01-01

    It is a relatively well-known fact that in problems of Bayesian model selection, improper priors should, in general, be avoided. In this paper we will derive and discuss a collection of four proper uniform priors which lie on an ascending scale of informativeness. It will turn out that these

  17. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...... models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  18. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g., quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a

  19. Photon mass attenuation coefficients, effective atomic numbers and ...

    Indian Academy of Sciences (India)

    of atomic number Z was performed using the logarithmic regression analysis of the data measured by the authors and reported earlier. The best-fit coefficients so obtained in the photon ..... This photon build-up is a function of thickness and atomic number of the sample and also the incident photon energy, which combine to ...

  20. On the misinterpretation of the correlation coefficient in pharmaceutical sciences

    DEFF Research Database (Denmark)

    Sonnergaard, Jørn

    2006-01-01

    The correlation coefficient is often used and more often misused as a universal parameter expressing the quality in linear regression analysis. The popularity of this dimensionless quantity is evident as it is easy to communicate and considered to be unproblematic to comprehend. However, illustra...

  1. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Quadrature formulas for Fourier coefficients

    KAUST Repository

    Bojanov, Borislav; Petrova, Guergana

    2009-01-01

    We consider quadrature formulas of high degree of precision for the computation of the Fourier coefficients in expansions of functions with respect to a system of orthogonal polynomials. In particular, we show the uniqueness of a multiple node

  3. Diffusion coefficient for anomalous transport

    International Nuclear Information System (INIS)

    1986-01-01

    A report on the progress towards the goal of estimating the diffusion coefficient for anomalous transport is given. The gyrokinetic theory is used to identify the different time and length scales inherent to the characteristics of plasmas which exhibit anomalous transport

  4. Induced nuclear beta decay

    International Nuclear Information System (INIS)

    Reiss, H.R.

    1986-01-01

    Certain nuclear beta decay transitions normally inhibited by angular momentum or parity considerations can be induced to occur by the application of an electromagnetic field. Such decays can be useful in the controlled production of power, and in fission waste disposal

  5. Trichoderma .beta.-glucosidase

    Science.gov (United States)

    Dunn-Coleman, Nigel; Goedegebuur, Frits; Ward, Michael; Yao, Jian

    2006-01-03

    The present invention provides a novel .beta.-glucosidase nucleic acid sequence, designated bgl3, and the corresponding BGL3 amino acid sequence. The invention also provides expression vectors and host cells comprising a nucleic acid sequence encoding BGL3, recombinant BGL3 proteins and methods for producing the same.

  6. Applied Beta Dosimetry

    International Nuclear Information System (INIS)

    Rich, B.L.

    1986-01-01

    Measurement of beta and/or nonpenetrating exposures is complicated, and past techniques and capabilities have resulted in significant inaccuracies in recorded results. Current developments have resulted in increased capabilities which make the results more accurate and should result in less total exposure to the work force. Continued development of work in progress should provide equivalent future improvements

  7. Beta thalassemia - a review

    Directory of Open Access Journals (Sweden)

    R Jha

    2014-09-01

    Full Text Available Thalassemia is a globin gene disorder that results in a diminished rate of synthesis of one or more of the globin chains. About 1.5% of the global population (80 to 90 million people) are carriers of beta thalassemia. More than 200 mutations are described in beta thalassemia. However, not all mutations are common in different ethnic groups. The only effective way to reduce the burden of thalassemia is to prevent the birth of homozygotes. Diagnosis of beta thalassemia can be done by fetal DNA analysis for molecular defects of beta thalassemia or by fetal blood analysis. Hematopoietic stem cell transplantation is the only available curative approach for thalassemia. Many patients with thalassemia in underdeveloped nations die in childhood or adolescence. Programs that provide acceptable care, including transfusion of safe blood and supportive therapy including chelation, must be established. DOI: http://dx.doi.org/10.3126/jpn.v4i8.11609 Journal of Pathology of Nepal; Vol.4, No. 8 (2014) 663-671

  8. Double Beta Decay Experiments

    International Nuclear Information System (INIS)

    Piepke, A.

    2005-01-01

    The experimental observation of neutrino oscillations and thus neutrino mass and mixing gives a first hint at new particle physics. The absolute values of the neutrino mass and the properties of neutrinos under CP-conjugation remain unknown. The experimental investigation of the nuclear double beta decay is one of the key techniques for solving these open problems

  9. Fuel Temperature Coefficient of Reactivity

    Energy Technology Data Exchange (ETDEWEB)

    Loewe, W.E.

    2001-07-31

    A method for measuring the fuel temperature coefficient of reactivity in a heterogeneous nuclear reactor is presented. The method, which is used during normal operation, requires that calibrated control rods be oscillated in a special way at a high reactor power level. The value of the fuel temperature coefficient of reactivity is found from the measured flux responses to these oscillations. Application of the method in a Savannah River reactor charged with natural uranium is discussed.

  10. Properties of Traffic Risk Coefficient

    Science.gov (United States)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu

    2009-10-01

    We use the model with the consideration of the traffic interruption probability (Physica A 387(2008)6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability will reduce the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.

  11. Beta cell adaptation in pregnancy

    DEFF Research Database (Denmark)

    Nielsen, Jens Høiriis

    2016-01-01

    Pregnancy is associated with a compensatory increase in beta cell mass. It is well established that somatolactogenic hormones contribute to the expansion both indirectly by their insulin antagonistic effects and directly by their mitogenic effects on the beta cells via receptors for prolactin...... and growth hormone expressed in rodent beta cells. However, the beta cell expansion in human pregnancy seems to occur by neogenesis of beta cells from putative progenitor cells rather than by proliferation of existing beta cells. Claes Hellerström has pioneered the research on beta cell growth for decades...... in the expansion of the beta cell mass in human pregnancy, and the relative roles of endocrine factors and nutrients....

  12. BANK FAILURE PREDICTION WITH LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2013-04-01

    Full Text Available In recent years, the economic and financial world has been shaken by a wave of financial crises that resulted in fairly huge bank losses. Several authors have focused on the study of these crises in order to develop an early warning model. It is along this same path that our work takes its inspiration. Indeed, we have tried to develop a predictive model of Tunisian bank failures with the contribution of the binary logistic regression method. The specificity of our prediction model is that it takes into account microeconomic indicators of bank failures. The results obtained using our provisional model show that a bank's ability to repay its debt, the coefficient of banking operations, bank profitability per employee and the leverage financial ratio have a negative impact on the probability of failure.
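
    A minimal sketch of a binary logistic regression on bank-level indicators, with simulated data and hypothetical columns standing in for the ratios named above (debt repayment capacity, banking-operations coefficient, profitability per employee, leverage); it is not the authors' Tunisian data set.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))   # hypothetical columns: repayment capacity, operations
                              # coefficient, profitability per employee, leverage ratio
logits = -0.5 - 1.0 * X[:, 0] - 0.8 * X[:, 1] - 0.6 * X[:, 2] - 0.7 * X[:, 3]
failed = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

model = sm.Logit(failed, sm.add_constant(X)).fit(disp=0)
print(model.summary())        # negative coefficients lower the estimated failure probability
```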

  13. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

  14. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

  15. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose, the use of observed raw scores will often be inadequate because these lack interval scale properties. ... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.

  16. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics , Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  17. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  18. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, single equation regression models is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the auto correlation patterns of regression disturbance. It also includes six case studies.

  19. Clustering Coefficients for Correlation Networks.

    Science.gov (United States)

    Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu

    2018-01-01

    Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly
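
    A minimal sketch of the conventional clustering coefficient that the proposed measures are compared against, computed on a thresholded and binarized correlation matrix with networkx; the partial-correlation-based coefficients introduced in the paper are not reproduced here, and the time series are synthetic.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
ts = 0.8 * latent + rng.normal(size=(200, 30))   # 200 time points, 30 regions sharing a factor
C = np.corrcoef(ts, rowvar=False)                # correlation matrix

# conventional approach: threshold, discard weak/negative edges, binarize
A = (C > 0.3).astype(int)
np.fill_diagonal(A, 0)
G = nx.from_numpy_array(A)
print(nx.average_clustering(G))                  # global clustering coefficient
print(nx.clustering(G, nodes=[0, 1, 2]))         # local clustering for a few nodes
```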

  20. Clustering Coefficients for Correlation Networks

    Directory of Open Access Journals (Sweden)

    Naoki Masuda

    2018-03-01

    Full Text Available Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients

  1. Clustering Coefficients for Correlation Networks

    Science.gov (United States)

    Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu

    2018-01-01

    Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly

  2. Converting Sabine absorption coefficients to random incidence absorption coefficients

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2013-01-01

    are suggested: An optimization method for the surface impedances for locally reacting absorbers, the flow resistivity for extendedly reacting absorbers, and the flow resistance for fabrics. With four porous type absorbers, the conversion methods are validated. For absorbers backed by a rigid wall, the surface...... coefficients to random incidence absorption coefficients are proposed. The overestimations of the Sabine absorption coefficient are investigated theoretically based on Miki's model for porous absorbers backed by a rigid wall or an air cavity, resulting in conversion factors. Additionally, three optimizations...... impedance optimization produces the best results, while the flow resistivity optimization also yields reasonable results. The flow resistivity and flow resistance optimization for extendedly reacting absorbers are also found to be successful. However, the theoretical conversion factors based on Miki's model...

  3. Determination of the Accommodation Coefficient Using Vapor/gas Bubble Dynamics in an Acoustic Field

    Science.gov (United States)

    Gumerov, Nail A.; Hsiao, Chao-Tsung; Goumilevski, Alexei G.; Allen, Jeff (Technical Monitor)

    2001-01-01

    Nonequilibrium liquid/vapor phase transformations can occur in superheated or subcooled liquids in fast processes such as in evaporation in a vacuum. The rate at which such a phase transformation occurs depends on the "condensation" or "accommodation" coefficient, Beta, which is a property of the interface. Existing measurement techniques for Beta are complex and expensive. The development of a relatively inexpensive and reliable technique for measurement of Beta for a wide range of substances and temperatures is of great practical importance. The dynamics of a bubble in an acoustic field strongly depends on the value of Beta. It is known that near the saturation temperature, small vapor bubbles grow under the action of an acoustic field due to "rectified heat transfer." This finding can be used as the basis for an effective measurement technique of Beta. We developed a theory of vapor bubble behavior in an isotropic acoustic wave and in a plane standing acoustic wave. A numerical code was developed which enables simulation of a variety of experimental situations and accurately takes into account slowly evolving temperature. A parametric study showed that the measurement of Beta can be made over a broad range of frequencies and bubble sizes. We found several interesting regimes and conditions which can be efficiently used for measurements of Beta. Measurements of Beta can be performed in both reduced and normal gravity environments.

  4. Regression analysis of sparse asynchronous longitudinal data.

    Science.gov (United States)

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P

    2015-09-01

    We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
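
    A rough sketch of a kernel-weighted estimating equation for asynchronous observations, reduced to one subject, one covariate, and a time-invariant coefficient with an identity link; the Gaussian kernel, the fixed bandwidth, and the omission of pooling over subjects are simplifications of the estimators studied in the paper.

```python
import numpy as np

def async_fit(t_y, y, t_x, x, h):
    """Kernel-weighted least squares pairing every response time with every
    covariate time: solves sum_{i,j} K_h(t_y[i] - t_x[j]) x_j (y_i - a - b x_j) = 0."""
    K = np.exp(-0.5 * ((t_y[:, None] - t_x[None, :]) / h) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x])                     # intercept + covariate
    A, b = np.zeros((2, 2)), np.zeros(2)
    for i in range(len(y)):
        w = K[i]
        A += X.T @ (X * w[:, None])
        b += X.T @ (w * y[i])
    return np.linalg.solve(A, b)    # (intercept, slope) estimates
```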

  5. Interaction with beta-arrestin determines the difference in internalization behavior between beta1- and beta2-adrenergic receptors.

    Science.gov (United States)

    Shiina, T; Kawasaki, A; Nagao, T; Kurose, H

    2000-09-15

    The beta(1)-adrenergic receptor (beta(1)AR) shows resistance to agonist-induced internalization. As beta-arrestin is important for internalization, we examine the interaction of beta-arrestin with beta(1)AR using three different methods: intracellular trafficking of beta-arrestin, binding of in vitro translated beta-arrestin to intracellular domains of beta(1)- and beta(2)ARs, and inhibition of betaAR-stimulated adenylyl cyclase activities by beta-arrestin. The green fluorescent protein-tagged beta-arrestin 2 translocates to and stays at the plasma membrane by beta(2)AR stimulation. Although green fluorescent protein-tagged beta-arrestin 2 also translocates to the plasma membrane, it returns to the cytoplasm 10-30 min after beta(1)AR stimulation. The binding of in vitro translated beta-arrestin 1 and beta-arrestin 2 to the third intracellular loop and the carboxyl tail of beta(1)AR is lower than that of beta(2)AR. The fusion protein of beta-arrestin 1 with glutathione S-transferase inhibits the beta(1)- and beta(2)AR-stimulated adenylyl cyclase activities, although inhibition of the beta(1)AR-stimulated activity requires a higher concentration of the fusion protein than that of the beta(2)AR-stimulated activity. These results suggest that weak interaction of beta(1)AR with beta-arrestins explains the resistance to agonist-induced internalization. This is further supported by the finding that beta-arrestin can induce internalization of beta(1)AR when beta-arrestin 1 does not dissociate from beta(1)AR by fusing to the carboxyl tail of beta(1)AR.

  6. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
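
    A rough sketch of generic additive-error simulation extrapolation (inflate the measurement error, refit, extrapolate back to zero error), which the paper adapts to the binomial setting; it is neither the binomial-specific SIMEX nor the beta-binomial regression calibration developed there, and all names are illustrative.

```python
import numpy as np

def simex_slope(w, y, sigma2_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """Additive-error SIMEX for the slope in y = a + b*x + e, where
    w = x + u is the error-prone predictor with known Var(u) = sigma2_u."""
    rng = np.random.default_rng(seed)
    ols_slope = lambda x, y: np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    lam = np.concatenate([[0.0], lambdas])
    slopes = [ols_slope(w, y)]
    for l in lambdas:
        sims = [ols_slope(w + rng.normal(0.0, np.sqrt(l * sigma2_u), len(w)), y)
                for _ in range(n_sim)]
        slopes.append(np.mean(sims))           # average slope at inflated error level
    coef = np.polyfit(lam, slopes, deg=2)      # quadratic extrapolant in lambda
    return np.polyval(coef, -1.0)              # extrapolate back to lambda = -1 (no error)
```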

  7. Path coefficient analysis of zinc dynamics in varying soil environment

    International Nuclear Information System (INIS)

    Rattan, R.K.; Phung, C.V.; Singhal, S.K.; Deb, D.L.; Singh, A.K.

    1994-01-01

    Influence of soil properties on labile zinc, as measured by diethylene-triamine pentaacetic acid (DTPA) and zinc-65, and on self-diffusion coefficients of zinc was assessed on 22 surface soil samples varying widely in their characteristics, following linear regression and path coefficient analysis techniques. DTPA extractable zinc could be predicted from organic carbon status and pH of the soil with a highly significant coefficient of determination (R² = 0.84**). Ninety-seven per cent of the variation in isotopically exchangeable zinc was explained by pH, clay content and cation exchange capacity (CEC) of soil. The self-diffusion coefficients (DaZn and DpZn) and buffer power of zinc exhibited an exponential relationship with soil properties, pH being the most dominant one. Soil properties like organic matter, clay content etc. exhibited indirect effects on zinc diffusion rates via pH only. (author). 13 refs., 6 tabs

  8. Power coefficient anomaly in JOYO

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H

    1980-12-15

    Operation of the JOYO experimental fast reactor with the MK-I core has been divided into two phases: (1) 50 MWt power ascension and operation; and (2) 75 MWt power ascension and operation. The 50 MWt power-up tests were conducted in August 1978. In these tests, the measured reactivity loss due to power increases from 15 MWt to 50 MWt was 0.28% ΔK/K, and agreed well with the predicted value of 0.27% ΔK/K. The 75 MWt power ascension tests were conducted in July-August 1979. In the process of the first power increase above 50 MWt to 65 MWt conducted on July 11, 1979, an anomalously large negative power coefficient was observed. The value was about twice the power coefficient values measured in the tests below 50 MW. In order to reproduce the anomaly, the reactor power was decreased and again increased up to the maximum power of 65 MWt. However, the large negative power coefficient was not observed at this time. In the succeeding power increase from 65 MWt to 75 MWt, a similar anomalous power coefficient was again observed. This anomaly disappeared in the subsequent power ascensions to 75 MWt, and the magnitude of the power coefficient gradually decreased with power cycles above the 50 MWt level.

  9. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    Full Text Available This paper discusses nonparametric kernel regression with the regressor being a d-dimensional β-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate √(n(T)h^d), where n(T) is the number of regenerations for a β-null recurrent process, and the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimate is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with 3-month and 5-year T-bill rates and discover the existence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
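
    A minimal sketch of the Nadaraya-Watson kernel estimator of the mean function, the basic building block of the nonparametric regression described above; the bandwidth and data are illustrative, and the null-recurrence asymptotics play no role in the code.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Gaussian-kernel estimate of the mean function m(x) = E[y | x]."""
    K = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + rng.normal(scale=0.3, size=400)
grid = np.linspace(-3, 3, 101)
m_hat = nadaraya_watson(grid, x, y, h=0.3)   # estimated mean function on the grid
```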

  10. Low-beta investment strategies

    OpenAIRE

    Korn, Olaf; Kuntz, Laura-Chloé

    2015-01-01

    This paper investigates investment strategies that exploit the low-beta anomaly. Although the notion of buying low-beta stocks and selling high-beta stocks is natural, a choice is necessary with respect to the relative weighting of high-beta stocks and low-beta stocks in the investment portfolio. Our empirical results for US large-cap stocks show that this choice is very important for the risk-return characteristics of the resulting portfolios and their sensitivities to common risk factors. W...

  11. An improved multiple linear regression and data analysis computer program package

    Science.gov (United States)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
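
    For readers without access to NEWRAP, the same core outputs (regression coefficients, t-statistics and their probability levels, and residuals for diagnostic plots) can be reproduced with modern tools. The sketch below uses Python's statsmodels on synthetic data and is only an illustration of the kind of analysis described, not of the NEWRAP code itself.

        # Sketch of a standard multiple linear regression analysis: coefficients,
        # t-statistics, p-values and residuals; synthetic data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        y = 2.0 + X @ np.array([1.5, -0.7, 0.0]) + rng.normal(scale=0.5, size=100)

        model = sm.OLS(y, sm.add_constant(X)).fit()
        print(model.params)        # regression coefficients
        print(model.tvalues)       # t-statistics
        print(model.pvalues)       # their probability levels
        residuals = model.resid    # for residual-vs-variable plots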

  12. Neutrophil beta-2 microglobulin: an inflammatory mediator

    DEFF Research Database (Denmark)

    Bjerrum, O W; Nissen, Mogens Holst; Borregaard, N

    1990-01-01

    Beta-2 microglobulin (beta 2m) constitutes the light invariant chain of HLA class I antigen, and is a constituent of mobilizable compartments of neutrophils. Two forms of beta 2m exist: native beta 2m and proteolytically modified beta 2m (Des-Lys58-beta 2m), which shows alpha mobility in crossed ...

  13. Pancreatic enzyme replacement therapy in cystic fibrosis: dose, variability and coefficient of fat absorption

    Directory of Open Access Journals (Sweden)

    Joaquim Calvo-Lerma

    Full Text Available Objectives: Pancreatic enzyme replacement therapy (PERT) remains a backbone in the nutritional treatment of cystic fibrosis. Currently, there is a lack of an evidence-based tool that allows dose adjustment. To date, no studies have found an association between PERT dose and fat absorption. Therefore, the aim of the study was to assess the influence of both the PERT dose and the variability in this dose on the coefficient of fat absorption (CFA). Methods: This is a retrospective longitudinal study of 16 pediatric patients (192 food records) with three consecutive visits to the hospital over a twelve-month period. Dietary fat intake and PERT were assessed via a four-day food record and fat content in stools was determined by means of a three-day stool sample collection. A beta regression model was built to explain the association between the CFA and the interaction between the PERT dose (lipase units [LU]/g dietary fat) and the variability in the PERT dose (standard deviation [SD]). Results: The coefficient of fat absorption increased with the PERT dose when the variability in the dose was low. In contrast, even at the highest PERT dose values, the CFA decreased when the variability was high. The confidence interval suggested an association, although the analysis was not statistically significant. Conclusion: The variability in the PERT dose adjustment should be taken into consideration when performing studies on PERT efficiency. A clinical goal should be the maintenance of a constant PERT dose rather than trying to obtain an optimal value.
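
    A beta regression of the kind used in the study models a response bounded in (0,1), such as the CFA, through a logit link for the mean. The sketch below fits such a model by maximum likelihood on simulated data; the dose and variability values, the interaction term, and the constant-precision assumption are illustrative stand-ins, not the authors' actual model specification.

        # Minimal beta-regression sketch: logit link for the mean, constant precision,
        # fitted by maximum likelihood on simulated stand-ins for CFA, dose and dose SD.
        import numpy as np
        from scipy import optimize, stats, special

        rng = np.random.default_rng(2)
        n = 200
        dose = rng.uniform(500, 4000, n)        # hypothetical PERT dose, LU/g fat
        dose_sd = rng.uniform(0, 1000, n)       # hypothetical within-patient dose SD
        # standardized design matrix with interaction, for a numerically stable fit
        z1 = (dose - dose.mean()) / dose.std()
        z2 = (dose_sd - dose_sd.mean()) / dose_sd.std()
        X = np.column_stack([np.ones(n), z1, z2, z1 * z2])
        mu_true = special.expit(X @ np.array([1.2, 0.4, -0.2, -0.3]))
        cfa = rng.beta(mu_true * 30.0, (1 - mu_true) * 30.0)   # simulated CFA in (0, 1)

        def negloglik(params):
            b, log_phi = params[:-1], params[-1]
            mu = np.clip(special.expit(X @ b), 1e-6, 1 - 1e-6)
            phi = np.exp(log_phi)
            return -stats.beta.logpdf(cfa, mu * phi, (1 - mu) * phi).sum()

        res = optimize.minimize(negloglik, np.zeros(X.shape[1] + 1), method="BFGS")
        print(res.x[:-1])   # coefficients on the logit-mean scale (last slot is log precision)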

  14. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  15. Beta and muon decays

    International Nuclear Information System (INIS)

    Galindo, A.; Pascual, P.

    1967-01-01

    These notes represent a series of lectures delivered by the authors at the Junta de Energia Nuclear during the Spring term of 1965. They were addressed to graduate students interested in the Theory of Elementary Particles, with special emphasis on computational problems. Chapter I is a review of basic principles (Dirac equation, transition probabilities, final state interactions) which will be needed later. In Chapter II the four-fermion point interaction is discussed. Chapter III is devoted to the study of beta decay; the main emphasis is given to the deduction of the formulae corresponding to electron-antineutrino correlation, electron energy spectrum, lifetimes, asymmetry of electrons emitted from polarized nuclei, electron and neutrino polarization, and time reversal invariance in beta decay. In Chapter IV we deal with the decay of polarized muons with radiative corrections. Chapter V is devoted to an introduction to C.V.C. theory. (Author)

  16. Beta-thalassemia

    Directory of Open Access Journals (Sweden)

    Origa Raffaella

    2010-05-01

    Full Text Available Abstract Beta-thalassemias are a group of hereditary blood disorders characterized by anomalies in the synthesis of the beta chains of hemoglobin, resulting in variable phenotypes ranging from severe anemia to clinically asymptomatic individuals. The total annual incidence of symptomatic individuals is estimated at 1 in 100,000 throughout the world and 1 in 10,000 people in the European Union. Three main forms have been described: thalassemia major, thalassemia intermedia and thalassemia minor. Individuals with thalassemia major usually present within the first two years of life with severe anemia, requiring regular red blood cell (RBC) transfusions. Findings in untreated or poorly transfused individuals with thalassemia major, as seen in some developing countries, are growth retardation, pallor, jaundice, poor musculature, hepatosplenomegaly, leg ulcers, development of masses from extramedullary hematopoiesis, and skeletal changes that result from expansion of the bone marrow. Regular transfusion therapy leads to iron overload-related complications including endocrine complications (growth retardation, failure of sexual maturation, diabetes mellitus, and insufficiency of the parathyroid, thyroid, pituitary, and less commonly, adrenal glands), dilated myocardiopathy, liver fibrosis and cirrhosis. Patients with thalassemia intermedia present later in life with moderate anemia and do not require regular transfusions. Main clinical features in these patients are hypertrophy of erythroid marrow with medullary and extramedullary hematopoiesis and its complications (osteoporosis, masses of erythropoietic tissue that primarily affect the spleen, liver, lymph nodes, chest and spine, and bone deformities and typical facial changes), gallstones, painful leg ulcers and increased predisposition to thrombosis. Thalassemia minor is clinically asymptomatic but some subjects may have moderate anemia. Beta-thalassemias are caused by point mutations or, more rarely

  17. Beta and Gamma Gradients

    DEFF Research Database (Denmark)

    Løvborg, Leif; Gaffney, C. F.; Clark, P. A.

    1985-01-01

    Experimental and/or theoretical estimates are presented concerning (i) attenuation within the sample of beta and gamma radiation from the soil, (ii) the gamma dose within the sample due to its own radioactivity, and (iii) the soil gamma dose in the proximity of boundaries between regions of differing radioactivity. It is confirmed that removal of the outer 2 mm of sample is adequate to remove influence from soil beta dose and estimates are made of the error introduced by non-removal. Other evaluations include variation of the soil gamma dose near the ground surface and it appears that the present practice of avoiding samples above a depth of 0.3 m may be over-cautious.

  18. Beta and muon decays

    Energy Technology Data Exchange (ETDEWEB)

    Galindo, A; Pascual, P

    1967-07-01

    These notes represent a series of lectures delivered by the authors at the Junta de Energia Nuclear during the Spring term of 1965. They were addressed to graduate students interested in the Theory of Elementary Particles, with special emphasis on computational problems. Chapter I is a review of basic principles (Dirac equation, transition probabilities, final state interactions) which will be needed later. In Chapter II the four-fermion point interaction is discussed. Chapter III is devoted to the study of beta decay; the main emphasis is given to the deduction of the formulae corresponding to electron-antineutrino correlation, electron energy spectrum, lifetimes, asymmetry of electrons emitted from polarized nuclei, electron and neutrino polarization, and time reversal invariance in beta decay. In Chapter IV we deal with the decay of polarized muons with radiative corrections. Chapter V is devoted to an introduction to C.V.C. theory. (Author)

  19. Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications

    Directory of Open Access Journals (Sweden)

    Guoqi Qian

    2016-01-01

    Full Text Available Regression clustering is a mixture of unsupervised and supervised statistical learning and data mining method which is found in a wide range of applications including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to the least squares and robust statistical methods. We also provide a model selection based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with analyzing a real data set on RGB cell marking in neuroscience to illustrate and interpret the method.
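
    The partition-and-regression iteration itself is easy to sketch: alternate between assigning each point to the regression hyperplane that fits it best and re-fitting each hyperplane by least squares. The toy example below does this for two lines in one dimension with the number of clusters assumed known; it ignores the model-selection and robust-estimation issues the paper addresses.

        # Toy regression-clustering loop: assign points to the best-fitting line,
        # then re-fit each line by least squares; synthetic 1-D data, K assumed known.
        import numpy as np

        rng = np.random.default_rng(3)
        n, K = 300, 2
        x = rng.uniform(-3, 3, n)
        labels_true = rng.integers(0, K, n)
        slopes, intercepts = np.array([2.0, -1.0]), np.array([0.0, 1.0])
        y = slopes[labels_true] * x + intercepts[labels_true] + 0.3 * rng.normal(size=n)

        X = np.column_stack([np.ones(n), x])
        coefs = rng.normal(size=(K, 2))                 # random initial lines
        for _ in range(50):
            resid = y[:, None] - X @ coefs.T            # residual of every point w.r.t. every line
            labels = np.abs(resid).argmin(axis=1)       # partition step
            for k in range(K):                          # regression step
                if np.any(labels == k):
                    coefs[k], *_ = np.linalg.lstsq(X[labels == k], y[labels == k], rcond=None)
        print(coefs)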

  20. Analysis of internal conversion coefficients

    International Nuclear Information System (INIS)

    Coursol, N.; Gorozhankin, V.M.; Yakushev, E.A.; Briancon, C.; Vylov, Ts.

    2000-01-01

    An extensive database has been assembled that contains the three most widely used sets of calculated internal conversion coefficients (ICC): [Hager R.S., Seltzer E.C., 1968. Internal conversion tables. K-, L-, M-shell Conversion coefficients for Z=30 to Z=103, Nucl. Data Tables A4, 1-237; Band I.M., Trzhaskovskaya M.B., 1978. Tables of gamma-ray internal conversion coefficients for the K-, L- and M-shells, 10≤Z≤104, Special Report of Leningrad Nuclear Physics Institute; Roesel F., Fries H.M., Alder K., Pauli H.C., 1978. Internal conversion coefficients for all atomic shells, At. Data Nucl. Data Tables 21, 91-289] and also includes new Dirac-Fock calculations [Band I.M. and Trzhaskovskaya M.B., 1993. Internal conversion coefficients for low-energy nuclear transitions, At. Data Nucl. Data Tables 55, 43-61]. This database is linked to a computer program to plot ICCs and their combinations (sums and ratios) as a function of Z and energy, as well as relative deviations of ICC or their combinations for any pair of tabulated data. Examples of these analyses are presented for the K-shell and total ICCs of the gamma-ray standards [Hansen H.H., 1985. Evaluation of K-shell and total internal conversion coefficients for some selected nuclear transitions, Eur. Appl. Res. Rept. Nucl. Sci. Tech. 11.6 (4) 777-816] and for the K-shell and total ICCs of high multipolarity transitions (total, K-, L-, M-shells of E3 and M3 and K-shell of M4). Experimental data sets are also compared with the theoretical values of these specific calculations

  1. Regulation of beta cell replication

    DEFF Research Database (Denmark)

    Lee, Ying C; Nielsen, Jens Høiriis

    2008-01-01

    Beta cell mass, at any given time, is governed by cell differentiation, neogenesis, increased or decreased cell size (cell hypertrophy or atrophy), cell death (apoptosis), and beta cell proliferation. Nutrients, hormones and growth factors coupled with their signalling intermediates have been suggested to play a role in beta cell mass regulation. In addition, genetic mouse model studies have indicated that cyclins and cyclin-dependent kinases that determine cell cycle progression are involved in beta cell replication, and more recently, menin in association with cyclin-dependent kinase inhibitors has been demonstrated to be important in beta cell growth. In this review, we consider and highlight some aspects of cell cycle regulation in relation to beta cell replication. The role of cell cycle regulation in beta cell replication is mostly from studies in rodent models, but whether...

  2. High beta experiments in CHS

    International Nuclear Information System (INIS)

    Okamura, S.; Matsuoka, K.; Nishimura, K.

    1994-09-01

    High beta experiments were performed in the low-aspect-ratio helical device CHS with the volume-averaged equilibrium beta up to 2.1%. These values (the highest for helical systems) were obtained for high density plasmas in a low magnetic field heated with two tangential neutral beams. Confinement improvement obtained by turning off gas puffing contributed significantly to reaching high beta. Magnetic fluctuations increased with increasing beta, but eventually stopped increasing in the beta range above 1%. The coherent modes appearing in the magnetic hill region showed a strong dependence on the beta values. Dynamic poloidal field control was applied to suppress the outward plasma movement with increasing plasma pressure. Such an operation provided fixed-boundary operation of high beta plasmas in helical systems. (author)

  3. Beta rays and neutrinos

    International Nuclear Information System (INIS)

    Adams, S.F.

    1992-01-01

    It was over 30 years between the first observation of the enigmatic process of beta decay and the first postulation of the neutrino. It took a further 26 years until the first neutrino was detected and yet another 27 until the electroweak theory was confirmed by the discovery of W and Z particles. This article traces some of the puzzles and paradoxes associated with the history of the neutrino. (author)

  4. Coroutine Sequencing in BETA

    DEFF Research Database (Denmark)

    Kristensen, Bent Bruun; Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    In object-oriented programming, a program execution is viewed as a physical model of some real or imaginary part of the world. A language supporting object-oriented programming must therefore contain comprehensive facilities for modeling phenomena and concepts from the application domain. Many applications in the real world consist of objects carrying out sequential processes. Coroutines may be used for modeling objects that alternate between a number of sequential processes. The authors describe coroutines in BETA.

  5. COM Support in BETA

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann

    1999-01-01

    Component technologies based on binary units of independent production are some of the most important contributions to software architecture and reuse during recent years. Especially the COM technologies and the CORBA standard from the Object Management Group have contributed new and interesting principles for software architecture, and proven to be useful in practice. In this paper ongoing work with component support in the BETA language is described.

  6. Algebraic polynomials with random coefficients

    Directory of Open Access Journals (Sweden)

    K. Farahmand

    2002-01-01

    Full Text Available This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form \(a_0(\omega)+a_1(\omega)\binom{n}{1}^{1/2}x+a_2(\omega)\binom{n}{2}^{1/2}x^{2}+\dots+a_n(\omega)\binom{n}{n}^{1/2}x^{n}\) when n is large. The coefficients \(\{a_j(\omega)\}_{j=0}^{n}\), \(\omega\in\Omega\), are assumed to be a sequence of independent normally distributed random variables with means zero and variance one, each defined on a fixed probability space \((A,\Omega,\Pr)\). A special case of dependent coefficients is also studied.

  7. Sectoral Differences in the Choice of the Time Horizon during Estimation of the Unconditional Stock Beta

    Directory of Open Access Journals (Sweden)

    Dimitrios Dadakas

    2016-12-01

    Full Text Available The stock beta coefficient literature extensively discusses the proper methods for the estimation of beta as well as its use in asset valuation. However, there are fewer references with respect to the appropriate time horizon that investors should utilize when evaluating the risk-return relationship of a stock. We examine the appropriate time horizon for beta estimation, differentiating our results by sector according to the Industry Classification Benchmark. We employ data from the NYSE and estimate betas over varying horizons, using from 30 to 250 trading days of data. The constructed beta series is then examined for the presence of breaks using the endogenous structural break literature. Results show evidence against the use of betas that employ more than 90 trading days of data, conditional on the sector under study.
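
    The underlying computation is an OLS regression of stock returns on market returns over the last w trading days, repeated for different w. A minimal sketch on simulated return series follows; the data, the window grid, and the covariance/variance shortcut for the OLS slope are illustrative assumptions, not the article's procedure.

        # Estimate a stock's beta from the last w trading days, for varying w.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        T = 1500
        market = pd.Series(rng.normal(0.0004, 0.01, T))
        stock = 1.2 * market + pd.Series(rng.normal(0, 0.01, T))   # true beta = 1.2

        def beta_last_window(stock_ret, market_ret, window):
            s, m = stock_ret.iloc[-window:], market_ret.iloc[-window:]
            return s.cov(m) / m.var()          # OLS slope of stock on market

        betas = {w: beta_last_window(stock, market, w) for w in range(30, 251, 10)}
        print(pd.Series(betas))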

  8. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired. What becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...

  9. LHCb: $2\\beta_s$ measurement at LHCb

    CERN Multimedia

    Conti, G

    2009-01-01

    A measurement of $2\\beta_s$, the phase of the $B_s-\\bar{B_s}$ oscillation amplitude with respect to that of the ${\\rm b} \\rightarrow {\\rm c^{+}}{\\rm W^{-}}$ tree decay amplitude, is one of the key goals of the LHCb experiment with first data. In the Standard Model (SM), $2\\beta_s$ is predicted to be $0.0360^{+0.0020}_{-0.0016} \\rm rad$. The current constraints from the Tevatron are: $2\\beta_{s}\\in[0.32 ; 2.82]$ at 68$\\%$CL from the CDF experiment and $2\\beta_{s}=0.57^{+0.24}_{-0.30}$ from the D$\\oslash$ experiment. Although the statistical uncertainties are large, these results hint at the possible contribution of New Physics in the $B_s-\\bar{B_s}$ box diagram. After one year of data taking at LHCb at an average luminosity of $\\mathcal{L}\\sim2\\cdot10^{32}\\rm cm^{-2} \\rm s^{-1}$ (integrated luminosity $\\mathcal{L}_{\\rm int}\\sim 2 \\rm fb^{-1}$), the expected statistical uncertainty on the measurement is $\\sigma(2\\beta_s)\\simeq 0.03$. This uncertainty is similar to the $2\\beta_s$ value predicted by the SM.

  10. Research and analyze of physical health using multiple regression analysis

    Directory of Open Access Journals (Sweden)

    T. S. Kyi

    2014-01-01

    Full Text Available This paper presents research aimed at creating a mathematical model of the "healthy person" using regression analysis. The factors are physical parameters of the person (such as heart rate, lung capacity, blood pressure, breath holding, weight-height coefficient, flexibility of the spine, muscles of the shoulder belt, abdominal muscles, squatting, etc.), and the response variable is an indicator of physical working capacity. Multiple regression analysis yielded useful regression models that can predict the physical performance of boys aged fourteen to seventeen years. The paper presents the development of the regression model for sixteen-year-old boys and analyzes the results.

  11. Measurement of A and B coefficients in the decay of polarized neutrons

    DEFF Research Database (Denmark)

    Christensen, Carl Jørgen; Krohn, V.E.; Ringo, G.R.

    1969-01-01

    The coefficients found were A = −0.115 ± 0.008 and B = 1.01 ± 0.04. The value of A leads to |GA/GV| = 1.26 ± 0.02 for the ratio of coupling constants in beta decay, and B is consistent with GS = GT = 0.

  12. Beta limit of crescent and bean shaped tokamaks

    International Nuclear Information System (INIS)

    Naitou, H.; Yamazaki, K.

    1988-01-01

    The maximum attainable beta values which can be expected in tokamaks with crescent (BEAN 1) and rounded (BEAN 2) bean shaped cross-sections are obtained numerically by using the linear ideal MHD stability analysis code ERATO. The current profiles are optimized with a fixed pressure profile for high values of beta, keeping Mercier, high-n ballooning and n=1 kink modes stable. The poloidal plasma cross-sections are inscribed in a rectangle with an aspect ratio of three and an ellipticity of two. A confocal wall, the distance of which from the plasma surface is equal to the horizontal minor plasma radius, is present to stabilize against the kink mode. Depending on the shape and triangularity (indentation), a beta value of 10 to 17% is obtained. It is also shown that the coefficient of the Troyon-type beta scaling increases for an indented plasma. In the case of small indentation, the BEAN 1 type tokamaks show higher beta values than the BEAN 2 type. For strong indentation, the BEAN 2 type gives the highest beta value. (author). 29 refs, 15 figs

  13. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available Stepwise linear regression is a multi-variable regression technique for identifying statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
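
    Since the record gives no listing, the sketch below is not the authors' Matlab code; it is a minimal forward stepwise selection by p-value written in Python, with the entry threshold and the simulated data chosen purely for illustration.

        # Forward stepwise selection: add the candidate variable with the smallest
        # p-value while it stays below an entry threshold; synthetic data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)

        selected, remaining, alpha_in = [], list(range(p)), 0.05
        while remaining:
            pvals = {}
            for j in remaining:
                cols = selected + [j]
                fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
                pvals[j] = fit.pvalues[-1]          # p-value of the candidate variable
            best = min(pvals, key=pvals.get)
            if pvals[best] < alpha_in:
                selected.append(best)
                remaining.remove(best)
            else:
                break
        print("selected variables:", selected)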

  14. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

    The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. gamma ray spectrum, Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by transforming the procedure into a computer programme which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
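
    The unfolding step amounts to regressing the measured spectrum on a library of component spectra, often with non-negativity imposed on the weights. A minimal sketch follows; the made-up Gaussian "component spectra" and the use of non-negative least squares in place of the stepwise procedure described above are assumptions for the example.

        # Unfold a measured spectrum as a non-negative weighted sum of library spectra.
        import numpy as np
        from scipy.optimize import nnls

        channels = np.arange(128)
        def peak(center, width):                       # toy component spectrum
            return np.exp(-0.5 * ((channels - center) / width) ** 2)

        library = np.column_stack([peak(30, 5), peak(60, 8), peak(100, 6)])
        true_w = np.array([5.0, 2.0, 0.0])
        measured = library @ true_w + 0.05 * np.random.default_rng(6).normal(size=channels.size)

        weights, residual_norm = nnls(library, measured)
        print(weights)        # estimated contribution of each constituent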

  15. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

  16. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined as the state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
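
    The notion of a mixed kernel can be illustrated with a fixed-weight combination of an RBF and a polynomial kernel passed to an off-the-shelf SVR; the sketch below does only that and does not reproduce the paper's adaptive estimation of the weights and parameters by a cubature Kalman filter. The weight, kernel parameters, and data are arbitrary choices for the example.

        # SVR with a fixed-weight mixed kernel (RBF + polynomial) given as a callable.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

        def mixed_kernel(X, Y, w=0.7, gamma=0.5, degree=2):
            return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

        rng = np.random.default_rng(7)
        X = rng.uniform(-2, 2, size=(200, 1))
        y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=200)

        model = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
        print(model.predict(np.array([[0.0], [1.0]])))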

  17. Irrational "Coefficients" in Renaissance Algebra.

    Science.gov (United States)

    Oaks, Jeffrey A

    2017-06-01

    Argument From the time of al-Khwārizmī in the ninth century to the beginning of the sixteenth century algebraists did not allow irrational numbers to serve as coefficients. To multiply by x, for instance, the result was expressed as the rhetorical equivalent of . The reason for this practice has to do with the premodern concept of a monomial. The coefficient, or "number," of a term was thought of as how many of that term are present, and not as the scalar multiple that we work with today. Then, in sixteenth-century Europe, a few algebraists began to allow for irrational coefficients in their notation. Christoff Rudolff (1525) was the first to admit them in special cases, and subsequently they appear more liberally in Cardano (1539), Scheubel (1550), Bombelli (1572), and others, though most algebraists continued to ban them. We survey this development by examining the texts that show irrational coefficients and those that argue against them. We show that the debate took place entirely in the conceptual context of premodern, "cossic" algebra, and persisted in the sixteenth century independent of the development of the new algebra of Viète, Descartes, and Fermat. This was a formal innovation violating prevailing concepts that we propose could only be introduced because of the growing autonomy of notation from rhetorical text.

  18. Integer Solutions of Binomial Coefficients

    Science.gov (United States)

    Gilbertson, Nicholas J.

    2016-01-01

    A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)^n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…

  19. Cactus: An Introduction to Regression

    Science.gov (United States)

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  20. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  1. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the

  2. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  3. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  4. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Assessment of deforestation using regression; Hodnotenie odlesnenia s vyuzitim regresie

    Energy Technology Data Exchange (ETDEWEB)

    Juristova, J. [Univerzita Komenskeho, Prirodovedecka fakulta, Katedra kartografie, geoinformatiky a DPZ, 84215 Bratislava (Slovakia)

    2013-04-16

    This work is devoted to the evaluation of deforestation using regression methods in the Idrisi Taiga software. Deforestation is evaluated by the method of logistic regression. The dependent variable has discrete values '0' and '1', indicating whether or not deforestation occurred. The independent variables have continuous values, expressing the distance from the edge of the deforested areas of forests, from urban areas, from the river and from the road network. The results were also used in predicting the probability of deforestation in subsequent periods. The result is a map showing the output probability of deforestation for the periods 1990/2000 and 2000/2006 in accordance with predetermined coefficients (values of the independent variables). (authors)

  6. Enantioselective synthesis of alpha,beta-disubstituted-beta-amino acids.

    Science.gov (United States)

    Sibi, Mukund P; Prabagaran, Narayanasamy; Ghorpade, Sandeep G; Jasperse, Craig P

    2003-10-01

    Highly diastereoselective and enantioselective addition of N-benzylhydroxylamine to imides 17 and 20-30 produces alpha,beta-trans-disubstituted N-benzylisoxazolidinones 19 and 31-41. These reactions proceed in 60-96% ee with 93-99% de's using 5 mol % of Mg(NTf2)2 and ligand 18. The product isoxazolidinones can be hydrogenolyzed directly to provide alpha,beta-disubstituted-beta-amino acids.

  7. A Predictive Logistic Regression Model of World Conflict Using Open Source Data

    Science.gov (United States)

    2015-03-26

    No correlation between the error terms and the independent variables 9. Absence of perfect multicollinearity (Menard, 2001) When assumptions are...some of the variables before initial model building. Multicollinearity , or near-linear dependence among the variables will cause problems in the...model. High multicollinearity tends to produce unreasonably high logistic regression coefficients and can result in coefficients that are not

  8. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
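
    Two of the ingredients compared above, VIF screening and principal component regression, are straightforward to reproduce on a small collinear data set. The sketch below (simulated predictors, an ad-hoc VIF helper, two retained components) is only meant to show the mechanics, not the study's Monte Carlo design.

        # Variance inflation factors for OLS screening, plus a small PCR pipeline.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(8)
        n = 40
        x1 = rng.normal(size=n)
        x2 = x1 + 0.05 * rng.normal(size=n)            # nearly collinear with x1
        x3 = rng.normal(size=n)
        X = np.column_stack([x1, x2, x3])
        y = 1.0 * x1 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

        def vif(X, j):                                  # 1 / (1 - R^2) of x_j regressed on the others
            others = np.delete(X, j, axis=1)
            r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
            return 1.0 / (1.0 - r2)

        print([round(vif(X, j), 1) for j in range(X.shape[1])])

        pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
        print(pcr.score(X, y))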

  9. Beta measurement evaluation and upgrade

    International Nuclear Information System (INIS)

    Swinth, K.L.; Rathbun, L.A.; Roberson, P.L.; Endres, G.W.R.

    1986-01-01

    This program focuses on the resolution of problems associated with the field measurement of the beta dose component at Department of Energy (DOE) facilities. The change in DOE programs, including increased efforts in improved waste management and decontamination and decommissioning (D and D) of facilities, coupled with beta measurement problems identified at Three Mile Island has increased the need to improve beta measurements. In FY 1982, work was initiated to provide a continuing effort to identify problems associated with beta dose assessment at DOE facilities. The problems identified resulted in the development of this program. The investigation includes (1) an assessment of measurement systems now in use, (2) development of improved calibration systems and procedures, (3) application of innovative beta dosimetry concepts, (4) investigation of new instruments or concepts for monitoring and spectroscopy, and (5) development of recommendations to assure an adequate beta measurement program within DOE facilities

  10. Conditional Betas and Investor Uncertainty

    OpenAIRE

    Fernando D. Chague

    2013-01-01

    We derive theoretical expressions for market betas from a rational expectation equilibrium model where the representative investor does not observe if the economy is in a recession or an expansion. Market betas in this economy are time-varying and related to investor uncertainty about the state of the economy. The dynamics of betas will also vary across assets according to the assets' cash-flow structure. In a calibration exercise, we show that value and growth firms have cash-flow structures...

  11. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
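
    For orientation, the uncorrected fits that the proposed method improves upon are ordinary linear quantile regressions on the error-prone covariate. The sketch below sets these up with statsmodels on simulated data (the error variance and quantile levels are arbitrary); the attenuation of the slopes illustrates the bias the paper addresses, while the correction itself is not implemented here.

        # Plain linear quantile regression at several quantile levels; the covariate
        # is observed with measurement error, so the fitted slopes are attenuated.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(9)
        n = 500
        x_true = rng.normal(size=n)
        y = 1.0 + 2.0 * x_true + rng.normal(size=n)
        x_obs = x_true + 0.5 * rng.normal(size=n)      # covariate measured with error

        X = sm.add_constant(x_obs)
        for tau in (0.25, 0.5, 0.75):
            fit = sm.QuantReg(y, X).fit(q=tau)
            print(tau, fit.params)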

  12. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient when it links two quantities A and B (A = k·B) that have different dimensions. The term factor should only be used for k when it links terms A and B that have the same dimensions, A = k·B. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the terms referred to have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  13. Extinction Coefficient of Gold Nanostars

    OpenAIRE

    de Puig, Helena; Tam, Justina O.; Yen, Chun-Wan; Gehrke, Lee; Hamad-Schifferli, Kimberly

    2015-01-01

    Gold nanostars (NStars) are highly attractive for biological applications due to their surface chemistry, facile synthesis and optical properties. Here, we synthesize NStars in HEPES buffer at different HEPES/Au ratios, producing NStars of different sizes and shapes, and therefore varying optical properties. We measure the extinction coefficient of the synthesized NStars at their maximum surface plasmon resonances (SPR), which range from 5.7 × 10⁸ to 26.8 × 10⁸ M⁻¹ cm⁻¹. Measured values correl...

  14. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  15. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    International audience; We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex system model of emotion (Scherer, 2000, 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor m...

  16. Directional quantile regression in R

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

    Roč. 53, č. 3 (2017), s. 480-492 ISSN 0023-5954 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : multivariate quantile * regression quantile * halfspace depth * depth contour Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

  17. Correlation, Regression, and Cointegration of Nonstationary Economic Time Series

    DEFF Research Database (Denmark)

    Johansen, Søren

    Yule (1926) introduced the concept of spurious or nonsense correlation, and showed by simulation that for some nonstationary processes the empirical correlations seem not to converge in probability even if the processes were independent. This was later discussed by Granger and Newbold (1974), and Phillips (1986) found the limit distributions. We propose to distinguish between empirical and population correlation coefficients and show in a bivariate autoregressive model for nonstationary variables that the empirical correlation and regression coefficients do not converge to the relevant population values, due to the trending nature of the data. We conclude by giving a simple cointegration analysis of two interests. The analysis illustrates that much more insight can be gained about the dynamic behavior of the nonstationary variables than simply by calculating a correlation coefficient.
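
    Yule's phenomenon is easy to reproduce by simulation: the empirical correlation between two independent random walks does not settle down near zero as the sample grows. A minimal sketch (arbitrary seed and sample sizes) is:

        # Spurious ("nonsense") correlation between two independent random walks.
        import numpy as np

        rng = np.random.default_rng(10)
        for T in (100, 1000, 10000):
            x = np.cumsum(rng.normal(size=T))          # independent random walks
            y = np.cumsum(rng.normal(size=T))
            print(T, round(np.corrcoef(x, y)[0, 1], 3))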

  18. Advanced colorectal neoplasia risk stratification by penalized logistic regression.

    Science.gov (United States)

    Lin, Yunzhi; Yu, Menggang; Wang, Sijian; Chappell, Richard; Imperiale, Thomas F

    2016-08-01

    Colorectal cancer is the second leading cause of death from cancer in the United States. To facilitate the efficiency of colorectal cancer screening, there is a need to stratify risk for colorectal cancer among the 90% of US residents who are considered "average risk." In this article, we investigate such risk stratification rules for advanced colorectal neoplasia (colorectal cancer and advanced, precancerous polyps). We use a recently completed large cohort study of subjects who underwent a first screening colonoscopy. Logistic regression models have been used in the literature to estimate the risk of advanced colorectal neoplasia based on quantifiable risk factors. However, logistic regression may be prone to overfitting and instability in variable selection. Since most of the risk factors in our study have several categories, it was tempting to collapse these categories into fewer risk groups. We propose a penalized logistic regression method that automatically and simultaneously selects variables, groups categories, and estimates their coefficients by penalizing the L1-norm of both the coefficients and their differences. Hence, it encourages sparsity in the categories, i.e. grouping of the categories, and sparsity in the variables, i.e. variable selection. We apply the penalized logistic regression method to our data. The important variables are selected, with close categories simultaneously grouped, by penalized regression models with and without the interaction terms. The models are validated with 10-fold cross-validation. The receiver operating characteristic curves of the penalized regression models dominate the receiver operating characteristic curve of naive logistic regressions, indicating a superior discriminative performance. © The Author(s) 2013.
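
    As a rough stand-in for the penalty described above, an ordinary L1-penalized logistic regression already illustrates variable selection through sparsity; it does not penalize differences between category coefficients and therefore does not group categories as the proposed method does. The data, penalty strength, and scoring below are arbitrary choices for the example.

        # L1-penalized logistic regression: sparse coefficients, cross-validated AUC.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(11)
        n, p = 500, 12
        X = rng.normal(size=(n, p))
        logit = 0.8 * X[:, 0] - 0.6 * X[:, 1]           # only two informative risk factors
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        print(cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean())
        clf.fit(X, y)
        print(np.round(clf.coef_, 2))                   # many coefficients shrunk to zero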

  19. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented which is realized using standard software for regression analysis

  20. Dynamic returns of beta arbitrage

    OpenAIRE

    Nascimento, Mafalda

    2017-01-01

    This thesis studies the patterns of the abnormal returns of the beta strategy. The topic can be helpful for professional investors who intend to achieve a better performance in their portfolios. Following the methodology of Lou, Polk, & Huang (2016), the COBAR measure is computed in order to determine the level of beta arbitrage in the market at each point in time. It is argued that beta arbitrage activity can have an impact on the returns of the beta strategy. In fact, it is demonstrated that...

  1. Integration of BETA with Eclipse

    DEFF Research Database (Denmark)

    Andersen, Peter; Madsen, Ole Lehrmann; Enevoldsen, Mads Brøgger

    2004-01-01

    This paper presents language interoperability issues appearing in order to implement support for the BETA language in the Java-based Eclipse integrated development environment. One of the challenges is to implement plug-ins in BETA and be able to load them in Eclipse. In order to do this, some fo...... it is possible to implement plug-ins in BETA and even inherit from Java classes. In the paper the two approaches are described together with part of the mapping from BETA to Java class files. http://www.sciencedirect.com/science/journal/15710661...

  2. Simultaneous beta and gamma spectroscopy

    Science.gov (United States)

    Farsoni, Abdollah T.; Hamby, David M.

    2010-03-23

    A phoswich radiation detector for simultaneous spectroscopy of beta rays and gamma rays includes three scintillators with different decay time characteristics. Two of the three scintillators are used for beta detection and the third scintillator is used for gamma detection. A pulse induced by an interaction of radiation with the detector is digitally analyzed to classify the type of event as beta, gamma, or unknown. A pulse is classified as a beta event if the pulse originated from just the first scintillator alone or from just the first and the second scintillator. A pulse from just the third scintillator is recorded as gamma event. Other pulses are rejected as unknown events.

  3. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with logit link function. The performance of the model will be investigated. We will discuss the inference of how to estimate the parameters and hyper-parameters and to predict as well. Furthermore, simulation studies will be explained in the last section.

  4. Form of multicomponent Fickian diffusion coefficients matrix

    International Nuclear Information System (INIS)

    Wambui Mutoru, J.; Firoozabadi, Abbas

    2011-01-01

    Highlights: → Irreversible thermodynamics establishes form of multicomponent diffusion coefficients. → Phenomenological coefficients and thermodynamic factors affect sign of diffusion coefficients. → Negative diagonal elements of diffusion coefficients matrix can occur in non-ideal mixtures. → Eigenvalues of the matrix of Fickian diffusion coefficients may not be all real. - Abstract: The form of multicomponent Fickian diffusion coefficients matrix in thermodynamically stable mixtures is established based on the form of phenomenological coefficients and thermodynamic factors. While phenomenological coefficients form a symmetric positive definite matrix, the determinant of thermodynamic factors matrix is positive. As a result, the Fickian diffusion coefficients matrix has a positive determinant, but its elements - including diagonal elements - can be negative. Comprehensive survey of reported diffusion coefficients data for ternary and quaternary mixtures, confirms that invariably the determinant of the Fickian diffusion coefficients matrix is positive.

  5. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2012-01-01

    and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group

  6. Supremum Norm Posterior Contraction and Credible Sets for Nonparametric Multivariate Regression

    NARCIS (Netherlands)

    Yoo, W.W.; Ghosal, S

    2016-01-01

    In the setting of nonparametric multivariate regression with unknown error variance, we study asymptotic properties of a Bayesian method for estimating a regression function f and its mixed partial derivatives. We use a random series of tensor product of B-splines with normal basis coefficients as a

  7. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Full Text Available Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
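
    The same task, one regression slope per subject feeding a second-level analysis, can also be automated in Python; the sketch below (made-up subjects and trial data) uses pandas groupby with numpy.polyfit and is not taken from the article's SPSS, R, or Excel examples.

        # Extract one regression slope per subject from long-format data.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(12)
        rows = []
        for subject in range(1, 6):
            true_slope = rng.normal(1.0, 0.3)
            for trial in range(20):
                rows.append({"subject": subject, "x": trial,
                             "y": true_slope * trial + rng.normal(scale=2.0)})
        data = pd.DataFrame(rows)

        slopes = data.groupby("subject").apply(lambda d: np.polyfit(d["x"], d["y"], 1)[0])
        print(slopes)          # one slope per subject, ready for second-level analysis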

  8. Modeling Group Differences in OLS and Orthogonal Regression: Implications for Differential Validity Studies

    Science.gov (United States)

    Kane, Michael T.; Mroch, Andrew A.

    2010-01-01

    In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…

  9. Identification of active anti-inflammatory principles of beta- beta ...

    African Journals Online (AJOL)

    chromatography. Components of the extracts were identified by thin layer chromatography (TLC) scanner and UV-visible spectroscopy, using scopoletin as standard. Results: ... basic coumarin skeleton ring structure reduce ... Figure 2: Thin-layer chromatogram: (1) Ethanol extract; (2) Dichloromethane fraction; (3) Beta-beta.

  10. Improved limits on beta(-) and beta(-)beta(-) decays of Ca-48

    Czech Academy of Sciences Publication Activity Database

    Bakalyarov, A.; Balysh, A.; Barabash, AS.; Beneš, P.; Briancon, C.; Brudanin, V. B.; Čermák, P.; Egorov, V.; Hubert, F.; Hubert, P.; Korolev, NA.; Kosjakov, VN.; Kovalík, Alojz; Lebedev, NA.; Novgorodov, A. F.; Rukhadze, NI.; Štekl, NI.; Timkin, VV.; Veleshko, IE.; Vylov, T.; Umatov, VI.

    2002-01-01

    Roč. 76, č. 9 (2002), s. 545-547 ISSN 0021-3640 Institutional research plan: CEZ:AV0Z1048901 Keywords : beta decay * double beta decay * Ca-48 Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.483, year: 2002

  11. Conversion of beta-methylbutyric acid to beta-hydroxy-beta-methylbutyric acid by Galactomyces reessii.

    OpenAIRE

    Lee, I Y; Nissen, S L; Rosazza, J P

    1997-01-01

    beta-Hydroxy-beta-methylbutyric acid (HMB) has been shown to increase strength and lean mass gains in humans undergoing resistance-exercise training. HMB is currently marketed as a calcium salt of HMB, and thus, environmentally sound and inexpensive methods of manufacture are being sought. This study investigates the microbial conversion of beta-methylbutyric acid (MBA) to HMB by cultures of Galactomyces reessii. Optimal concentrations of MBA were in the range of 5 to 20 g/liter for HMB produ...

  12. The destruction complex of beta-catenin in colorectal carcinoma and colonic adenoma.

    Science.gov (United States)

    Bourroul, Guilherme Muniz; Fragoso, Hélio José; Gomes, José Walter Feitosa; Bourroul, Vivian Sati Oba; Oshima, Celina Tizuko Fujiyama; Gomes, Thiago Simão; Saba, Gabriela Tognini; Palma, Rogério Tadeu; Waisberg, Jaques

    2016-01-01

    To evaluate the destruction complex of beta-catenin by the expression of the proteins beta-catenin, adenomatous polyposis coli, GSK3β, axin and ubiquitin in colorectal carcinoma and colonic adenoma. Tissue samples from 64 patients with colorectal carcinoma and 53 patients with colonic adenoma were analyzed. Tissue microarray blocks and slides were prepared and subjected to immunohistochemistry with polyclonal antibodies in carcinoma, adjacent non-neoplastic mucosa, and adenoma tissues. The immunoreactivity was evaluated by the percentage of positively stained cells and by the intensity assessed through the staining grade of the proteins in the cytoplasm and nucleus of the cells. In the statistical analysis, the Spearman correlation coefficient, Student's t, χ2, Mann-Whitney, and McNemar tests, and univariate logistic regression analysis were used. In colorectal carcinoma, the expressions of beta-catenin and adenomatous polyposis coli proteins were significantly higher than in colonic adenomas (p<0.001 and p<0.0001, respectively). The immunoreactivity of the GSK3β, axin 1 and ubiquitin proteins was significantly higher (p=0.03, p=0.039 and p=0.03, respectively) in colorectal carcinoma than in adenoma and in the adjacent non-neoplastic mucosa. The immunohistochemical staining of these proteins showed no significant differences with respect to the clinicopathological characteristics of colorectal cancer and adenoma. In adenomas, the lower expressions of beta-catenin, axin 1 and GSK3β indicated that the beta-catenin destruction complex was conserved, whereas, in carcinoma

  13. A new correlation for two-phase critical discharge coefficient

    International Nuclear Information System (INIS)

    Park, Jong Woon; Chun, Moon Hyun

    1989-01-01

    A new simple correlation for subcooled and two-phase critical flow discharge coefficient has been developed by stepwise regression technique. The new discharge coefficient has three independent variables and they are length to hydraulic diameter ratio, degree of subcooling, and stagnation temperature. The new discharge coefficient is applied as a multiplier to homogeneous equilibrium model and Abauf's single phase critical mass flux calculation equation. This method has been tested for its accuracy by comparing with experimental data. Results of the comparison show that the agreement between the predictions with new correlation and the experimental data is good for pipes and nozzles with vertical upward flow for subcooled upstream condition and nozzles with horizontal configuration for two-phase upstream condition

  14. Study of transport coefficients of nanodiamond nanofluids

    Science.gov (United States)

    Pryazhnikov, M. I.; Minakov, A. V.; Guzei, D. V.

    2017-09-01

    Experimental data on the thermal conductivity coefficient and viscosity coefficient of nanodiamond nanofluids are presented. Distilled water and ethylene glycol were used as the base fluids. Dependences of the transport coefficients on concentration are obtained. It was shown that the thermal conductivity coefficient increases with increasing nanodiamond concentration, and that the base fluid properties and the nanodiamond concentration affect the rheology of the nanofluids.

  15. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    Full Text Available When data are affected by multicollinearity in the linear regression framework, then concurvity will be present in fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm will always converge to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I errors. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.

  16. New evidence on beta stationarity and forecast for belgian common stocks

    OpenAIRE

    Hawawini, Gabriel A; Michel, Pierre-Armand; Corhay, Albert

    1985-01-01

    Based on a comprehensive sample of 170 securities traded continuously on the Brussels Stock Exchange from December 1966 to December 1983, this paper presents evidence indicating that the stationarity of beta coefficients is not as strong as reported in previous studies based on smaller samples. It is shown, however, that beta forecasts can generally be improved using an adjustment method and that the improvement is greatest for portfolios of increasing size. Peer reviewed

  17. Evaluation of Rock Joint Coefficients

    Science.gov (United States)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    A computer method for evaluation of rock joint coefficients is described and several applications are presented. The method is based on two absolute numerical indicators that are formed by means of the Fourier replicas of rock joint profiles. The first indicator quantifies the vertical depth of profiles and the second indicator classifies wavy character of profiles. The absolute indicators have replaced the formerly used relative indicators that showed some artificial behavior in some cases. This contribution is focused on practical computations testing the functionality of the newly introduced indicators.

  18. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  19. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
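
    A hedged sketch of the core idea (an illustration only, not the authors' procedure or inference tests): when the true predictor is non-normal, the residuals of the correctly specified regression are roughly symmetric, while the residuals of the reversed regression inherit skewness from the predictor, so comparing third moments of the two residual sets hints at the direction of effect.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        x = rng.exponential(scale=1.0, size=5000)       # skewed "cause"
        y = 0.8 * x + rng.normal(0, 1.0, size=5000)     # outcome with normal error

        def residual_skew(outcome, predictor):
            slope, intercept = np.polyfit(predictor, outcome, 1)
            resid = outcome - (intercept + slope * predictor)
            return stats.skew(resid)

        print("skew of residuals, y ~ x:", residual_skew(y, x))  # close to zero
        print("skew of residuals, x ~ y:", residual_skew(x, y))  # clearly non-zero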

  20. The best-beta CAPM

    NARCIS (Netherlands)

    Zou, L.

    2006-01-01

    The issue of 'best-beta' arises as soon as potential errors in the Sharpe-Lintner-Black capital asset pricing model (CAPM) are acknowledged. By incorporating a target variable into the investor preferences, this study derives a best-beta CAPM (BCAPM) that maintains the CAPM's theoretical appeal and

  1. Beta decay of Cu-56

    NARCIS (Netherlands)

    Borcea, R; Aysto, J; Caurier, E; Dendooven, P; Doring, J; Gierlik, M; Gorska, M; Grawe, H; Hellstrom, M; Janas, Z; Jokinen, A; Karny, M; Kirchner, R; La Commara, M; Langanke, K; Martinez-Pinedo, G; Mayet, P; Nieminen, A; Nowacki, F; Penttila, H; Plochocki, A; Rejmund, M; Roeckl, E; Schlegel, C; Schmidt, K; Schwengner, R; Sawicka, M

    2001-01-01

    The proton-rich isotope Cu-56 was produced at the GSI On-Line Mass Separator by means of the Si-28(S-32, p3n) fusion-evaporation reaction. Its beta-decay properties were studied by detecting beta-delayed gamma rays and protons. A half-life of 93 ± 3 ms was determined for Cu-56. Compared to the

  2. BETA SPECTRA. I. Negatrons spectra

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Garcia-Torano, E.

    1978-01-01

    Using the Fermi theory of beta decay, the beta spectra for 62 negatron emitters have been computed, introducing a correction factor for unique forbidden transitions. These spectra, once normalised, are plotted vs. energy and tabulated with the related Fermi functions. The average and median energies are calculated. (Author)

  3. Review of the beta situation

    International Nuclear Information System (INIS)

    Sheffield, J.

    1982-01-01

    This note lists some of the possible causes of beta limitation in tokamaks and discusses what is known and what is involved in investigating them. The motivation for preparing this note is the observed degradation of confinement with increasing poloidal beta β/sub p/ and beam power P/sub b/ in ISX-B

  4. Partition Coefficients of Amino Acids, Peptides, and Enzymes in Dextran + Poly(Ethylene Glycol) + Water Aqueous Two-Phase Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kakisaka, Keijiro.; Shindo, Takashi.; Iwai, Yoshio.; Arai, Yasuhiko. [Kyushu University, Fukuoka (Japan). Department of Chemical Systems and Engineering

    1998-12-01

    Partition coefficients are measured for five amino acids (aspartic acid, asparagine, methionine, cysteine and histidine) and two peptides (glycyl-glycine and hexa-glycine) in dextran + poly(ethylene glycol) + water aqueous two-phase systems. The partition coefficients of the amino acids and peptides are correlated using the osmotic virial equation. The interaction coefficients contained in the equation can be calculated from hydrophilic group parameters. The partition coefficients of {alpha}-amylase calculated by the osmotic virial equation with the hydrophilic group parameters are in fairly good agreement with the experimental data, though a relatively large discrepancy is shown for {beta}-amylase. (author)

  5. Partition Coefficients of Amino Acids, Peptides, and Enzymes in Dextran + Poly(Ethylene Glycol) + Water Aqueous Two-Phase Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kakisaka, Keijiro.; Shindo, Takashi.; Iwai, Yoshio.; Arai, Yasuhiko. (Kyushu University, Fukuoka (Japan). Department of Chemical Systems and Engineering)

    1998-12-01

    Partition coefficients are measured for five amino acids (aspartic acid, asparagine, methionine, cysteine and histidine) and two peptides (glycyl-glycine and hexa-glycine) in dextran + poly(ethylene glycol) + water aqueous two-phase systems. The partition coefficients of the amino acids and peptides are correlated using the osmotic virial equation. The interaction coefficients contained in the equation can be calculated from hydrophilic group parameters. The partition coefficients of [alpha]-amylase calculated by the osmotic virial equation with the hydrophilic group parameters are in fairly good agreement with the experimental data, though a relatively large discrepancy is shown for [beta]-amylase. (author)

  6. RAVEN Beta Release

    International Nuclear Information System (INIS)

    Rabiti, Cristian; Alfonsi, Andrea; Cogliati, Joshua Joseph; Mandelli, Diego; Kinoshita, Robert Arthur; Wang, Congjian; Maljovec, Daniel Patrick; Talbot, Paul William

    2016-01-01

    This documents the release of the Risk Analysis Virtual Environment (RAVEN) code. A description of the RAVEN code is provided, and discussion of the release process for the M2LW-16IN0704045 milestone. The RAVEN code is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is capable of investigating the system response as well as the input space using Monte Carlo, Grid, or Latin Hyper Cube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces, separating regions of the input space leading to system failure, using dynamic supervised learning techniques. RAVEN has now increased in maturity enough for the Beta 1.0 release.

  7. RAVEN Beta Release

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert Arthur [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Congjian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul William [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-02-01

    This documents the release of the Risk Analysis Virtual Environment (RAVEN) code. A description of the RAVEN code is provided, and discussion of the release process for the M2LW-16IN0704045 milestone. The RAVEN code is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is capable of investigating the system response as well as the input space using Monte Carlo, Grid, or Latin Hyper Cube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces, separating regions of the input space leading to system failure, using dynamic supervised learning techniques. RAVEN has now increased in maturity enough for the Beta 1.0 release.

  8. Correlation coefficients in neutron β-decay

    International Nuclear Information System (INIS)

    Byrne, J.

    1978-01-01

    The various angular and polarisation coefficients in neutron decay are the principal sources of information on the β-interaction. Measurements of the electron-neutrino angular correlation coefficient (a), the neutron-spin-electron-momentum correlation coefficient (A), the neutron-spin-neutrino-momentum correlation coefficient (B), and the triple correlation coefficient D and time-reversal invariance are reviewed and the results discussed. (U.K.)

  9. Spontaneous regression of pulmonary bullae

    International Nuclear Information System (INIS)

    Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

    2002-01-01

    The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult by gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bullae in both lungs and/or have a history of pulmonary emphysema. Others have a giant bulla without emphysematous change in the lungs. Our present patient had a treated lung cancer with no evidence of local recurrence. He had no emphysematous change on lung function testing and no complaints, although the high-resolution CT scan showed evidence of underlying minimal emphysematous changes. Ortin and Gurney presented three cases of spontaneous reduction in the size of bullae. Interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the wall of the bulla, which was also observed in our patient. The case we describe is of interest not only because of the rarity with which regression of pulmonary bullae has been reported in the literature, but also because of the spontaneous improvement in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

  10. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  11. Identify Beta-Hairpin Motifs with Quadratic Discriminant Algorithm Based on the Chemical Shifts.

    Directory of Open Access Journals (Sweden)

    Feng YongE

    Full Text Available Successful prediction of the beta-hairpin motif will be helpful for understanding fold recognition. Some algorithms have been proposed for the prediction of beta-hairpin motifs. However, the parameters used by these methods were primarily based on the amino acid sequences. Here, we propose a novel model for predicting beta-hairpin structure based on chemical shifts. Firstly, we analyzed the statistical distribution of the chemical shifts of six nuclei in non-beta-hairpin and beta-hairpin motifs. Secondly, we used these chemical shifts as features combined with three algorithms to predict beta-hairpin structure. Finally, we achieved the best prediction, namely a sensitivity of 92% and a specificity of 94% with a Matthews correlation coefficient of 0.85, using the quadratic discriminant analysis algorithm, which is clearly superior to the same method applied to the prediction of beta-hairpin structure from 20 amino acid compositions in three-fold cross-validation. Our findings show that the chemical shift is an effective parameter for beta-hairpin prediction, suggesting that quadratic discriminant analysis is a powerful algorithm for the prediction of beta-hairpins.
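
    As an illustration of the classification step only (not the authors' data or pipeline; the features below are synthetic stand-ins for the six nuclei's chemical shifts), quadratic discriminant analysis with cross-validation can be sketched in Python using scikit-learn:

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 300
        # Synthetic "chemical shift" features for non-hairpin (0) and hairpin (1) samples
        X0 = rng.normal(loc=0.0, scale=1.0, size=(n, 6))
        X1 = rng.normal(loc=0.7, scale=1.3, size=(n, 6))
        X = np.vstack([X0, X1])
        y = np.array([0] * n + [1] * n)

        qda = QuadraticDiscriminantAnalysis()
        scores = cross_val_score(qda, X, y, cv=3)   # three-fold cross-validation, as in the abstract
        print("mean CV accuracy:", scores.mean())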

  12. Interactions between two beta-sheets. Energetics of beta/beta packing in proteins.

    Science.gov (United States)

    Chou, K C; Némethy, G; Rumsey, S; Tuttle, R W; Scheraga, H A

    1986-04-20

    The analysis of the interactions between regularly folded segments of the polypeptide chain contributes to an understanding of the energetics of protein folding. Conformational energy-minimization calculations have been carried out to determine the favorable ways of packing two right-twisted beta-sheets. The packing of two five-stranded beta-sheets was investigated, with the strands having the composition CH3CO-(L-Ile)6-NHCH3 in one beta-sheet and CH3CO-(L-Val)6-NHCH3 in the other. Two distinct classes of low-energy packing arrangements were found. In the class with lowest energies, the strands of the two beta-sheets are aligned nearly parallel (or antiparallel) with each other, with a preference for a negative orientation angle, because this arrangement corresponds to the best complementary packing of the two twisted saddle-shaped beta-sheets. In the second class, with higher interaction energies, the strands of the two beta-sheets are oriented nearly perpendicular to each other. While the surfaces of the two beta-sheets are not complementary in this arrangement, there is good packing between the corner of one beta-sheet and the interior part of the surface of the other, resulting in a favorable energy of packing. Both classes correspond to frequently observed orientations of beta-sheets in proteins. In proteins, the second class of packing is usually observed when the two beta-sheets are covalently linked, i.e. when a polypeptide strand passes from one beta-sheet to the other, but we have shown here that a large contribution to the stabilization of this packing arrangement arises from noncovalent interactions.

  13. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  14. Performance of a parallel plate ionization chamber in beta radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Patricia L.; Caldas, Linda V.E., E-mail: patrilan@ipen.b, E-mail: lcaldas@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    A homemade parallel plate ionization chamber with graphite collecting electrode, and developed for use in mammography beams, was tested in relation to its usefulness in beta radiation dosimetry at the Calibration Laboratory of IPEN. Characterization tests of this ionization chamber were performed, using the Sr-90 + Y-90, Kr-85 and Pm-147 sources of a beta secondary standard system. The results of saturation, leakage current, stabilization time, response stability, linearity, angular dependence, and calibration coefficients are within the recommended limits of international recommendations that indicate that this chamber may be used for beta radiation dosimetry. (author)

  15. Performance of a parallel plate ionization chamber in beta radiation dosimetry

    International Nuclear Information System (INIS)

    Antonio, Patricia L.; Caldas, Linda V.E.

    2011-01-01

    A homemade parallel plate ionization chamber with graphite collecting electrode, and developed for use in mammography beams, was tested in relation to its usefulness in beta radiation dosimetry at the Calibration Laboratory of IPEN. Characterization tests of this ionization chamber were performed, using the Sr-90 + Y-90, Kr-85 and Pm-147 sources of a beta secondary standard system. The results of saturation, leakage current, stabilization time, response stability, linearity, angular dependence, and calibration coefficients are within the recommended limits of international recommendations that indicate that this chamber may be used for beta radiation dosimetry. (author)

  16. The relative importance of relativistic induced interactions in the beta decay of 170Tm

    International Nuclear Information System (INIS)

    Bogdan, D.; Cristu, M.I.; Holan, S.; Faessler, A.

    1982-09-01

    The log ft-values, the spectrum shape functions, and the beta-gamma angular correlation coefficients of the 170 Tm beta decay are computed in the framework of relativistic formfactor formalism using asymmetric rotor model wavefunctions. Main vector and axial vector hadron currents being strongly hindered, the relative importance of induced interaction matrix elements is accurately estimated. Good agreement with experiment is obtained for the beta decay observables when the main induced interaction terms were taken into account. The contribution of the pseudoscalar term was found insignificant. (authors)

  17. Derivatives of the Incomplete Beta Function

    Directory of Open Access Journals (Sweden)

    Robert J. Boik

    1998-03-01

    Full Text Available The incomplete beta function is defined as I_x(p, q) = (1/Beta(p, q)) ∫_0^x t^(p-1) (1-t)^(q-1) dt, where Beta(p, q) is the beta function. Dutka (1981) gave a history of the development and numerical evaluation of this function. In this article, an algorithm for computing first and second derivatives of I_x(p, q) with respect to p and q is described. The algorithm is useful, for example, when fitting parameters to a censored beta, truncated beta, or truncated beta-binomial model.
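
    The article derives the derivatives analytically; as a quick numerical cross-check (an illustrative sketch, not the published algorithm), the first partial derivatives of I_x(p, q) can be approximated by central finite differences around SciPy's regularized incomplete beta function:

        from scipy.special import betainc

        def dI_dp(x, p, q, h=1e-5):
            # Central finite-difference approximation of the partial derivative w.r.t. p
            return (betainc(p + h, q, x) - betainc(p - h, q, x)) / (2 * h)

        def dI_dq(x, p, q, h=1e-5):
            # Central finite-difference approximation of the partial derivative w.r.t. q
            return (betainc(p, q + h, x) - betainc(p, q - h, x)) / (2 * h)

        x, p, q = 0.3, 2.0, 5.0
        print("I_x(p, q)      =", betainc(p, q, x))
        print("dI/dp (approx) =", dI_dp(x, p, q))
        print("dI/dq (approx) =", dI_dq(x, p, q))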

  18. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R(2) of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R(2), although adjusted R(2) estimates behaved well. The bias in estimating the model R(2) statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
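
    A compressed Monte Carlo sketch of this kind of experiment (illustrative only; the study's design is far more extensive): simulate data at a chosen number of subjects per variable, refit OLS repeatedly, and inspect the relative bias of a regression coefficient.

        import numpy as np

        rng = np.random.default_rng(7)

        def relative_bias(spv, n_vars=10, n_sims=2000, true_beta=0.5):
            n = spv * n_vars                 # subjects = SPV x number of variables
            est = np.empty(n_sims)
            for s in range(n_sims):
                X = rng.normal(size=(n, n_vars))
                y = X @ np.full(n_vars, true_beta) + rng.normal(size=n)
                coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
                est[s] = coef[1]             # estimate of the first regression coefficient
            return (est.mean() - true_beta) / true_beta

        for spv in (2, 5, 20):
            print(f"SPV = {spv:2d}: relative bias = {relative_bias(spv):+.3%}")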

  19. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  20. The adsorption coefficient (KOC) of chlorpyrifos in clay soil

    International Nuclear Information System (INIS)

    Halimah Muhamad; Nashriyah Mat; Tan Yew Ai; Ismail Sahid

    2005-01-01

    The purpose of this study was to determine the adsorption coefficient (KOC) of chlorpyrifos in clay soil by measuring the Freundlich adsorption coefficient (Kads(f)) and the desorption coefficient (1/n value) of chlorpyrifos. It was found that the Freundlich adsorption coefficient (Kads(f)) and the linear regression coefficient (r2) of the Freundlich adsorption isotherm for chlorpyrifos in the clay soil were 52.6 L/kg and 0.5244, respectively. Adsorption equilibrium was achieved within 24 hours for the clay soil. This adsorption equilibrium time was used to determine the effect of concentration on adsorption. The adsorption coefficient (KOC) of the clay soil was found to be 2783 L/kg with an initial solution concentration of 1 μg/g and a soil-solution ratio of 1:5 at 30 °C, when the equilibrium time between the soil matrix and solution was 24 hours. The Kdes decreased over four repetitions of the desorption process. The chlorpyrifos residues may be strongly adsorbed onto the surface of the clay. (Author)
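
    For orientation, the standard workflow behind such figures can be sketched as follows (the numbers are hypothetical, not the study's data): linearize the Freundlich isotherm as log(Cs) = log(Kf) + (1/n)·log(Ce), fit it by linear regression, and normalize the distribution coefficient by the organic-carbon fraction to obtain KOC = Kd / fOC.

        import numpy as np

        # Hypothetical batch-equilibrium data after 24 h:
        # Ce = equilibrium solution concentration, Cs = sorbed concentration
        Ce = np.array([0.05, 0.1, 0.2, 0.5, 1.0])    # ug/mL
        Cs = np.array([2.8, 5.4, 10.1, 23.5, 44.0])  # ug/g

        # Linearized Freundlich fit: log10(Cs) = log10(Kf) + (1/n) * log10(Ce)
        slope, intercept = np.polyfit(np.log10(Ce), np.log10(Cs), 1)
        Kf = 10 ** intercept        # Freundlich adsorption coefficient
        n_inv = slope               # 1/n value

        f_oc = 0.02                 # assumed (hypothetical) fraction of organic carbon
        Kd = Cs[0] / Ce[0]          # distribution coefficient at the lowest concentration
        Koc = Kd / f_oc

        print(f"Kf = {Kf:.1f}, 1/n = {n_inv:.2f}, Kd = {Kd:.1f} L/kg, KOC = {Koc:.0f} L/kg")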

  1. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    Full Text Available In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians to both properly make a decision and increase efficiency of treatments and resource allocation. A two-step penalized-based procedure is proposed to select linear regression coefficients for linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the penalized-based resulting estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  2. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    Science.gov (United States)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, which directly affects human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which the partial least squares regression (PLSR) analysis method is becoming the predominant technology; however, in some special cases PLSR analysis produces considerable errors. In order to solve this problem, the traditional principal component regression (PCR) analysis method was improved by using the principle of PLSR in this paper. The experimental results show that, for some special experimental data sets, the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted by using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by PLSR and by the improved PCR, and the two results are analyzed and compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in ultraviolet spectral analysis of water, but for data near the detection limit the results of the improved PCR are better than those of PLSR.
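
    The study itself works in MATLAB and SPSS; as a hedged sketch of the same comparison in Python with scikit-learn (synthetic spectra, not real UV data), principal component regression can be assembled from PCA plus ordinary least squares and compared against PLS regression by cross-validation:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        n_samples, n_wavelengths = 80, 200
        X = rng.normal(size=(n_samples, n_wavelengths))     # stand-in for UV absorbance spectra
        w = rng.normal(size=n_wavelengths)
        y = 0.01 * (X @ w) + rng.normal(scale=0.1, size=n_samples)  # stand-in for concentration

        pcr = make_pipeline(PCA(n_components=5), LinearRegression())
        pls = PLSRegression(n_components=5)

        print("PCR  R^2 (5-fold CV):", cross_val_score(pcr, X, y, cv=5).mean())
        print("PLSR R^2 (5-fold CV):", cross_val_score(pls, X, y, cv=5).mean())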

  3. Extinction Coefficient of Gold Nanostars.

    Science.gov (United States)

    de Puig, Helena; Tam, Justina O; Yen, Chun-Wan; Gehrke, Lee; Hamad-Schifferli, Kimberly

    2015-07-30

    Gold nanostars (NStars) are highly attractive for biological applications due to their surface chemistry, facile synthesis and optical properties. Here, we synthesize NStars in HEPES buffer at different HEPES/Au ratios, producing NStars of different sizes and shapes, and therefore varying optical properties. We measure the extinction coefficient of the synthesized NStars at their maximum surface plasmon resonances (SPR), which range from 5.7 × 10⁸ to 26.8 × 10⁸ M⁻¹ cm⁻¹. Measured values correlate with those obtained from theoretical models of the NStars using the discrete dipole approximation (DDA), which we use to simulate the extinction spectra of the nanostars. Finally, because NStars are typically used in biological applications, we conjugate DNA and antibodies to the NStars and calculate the footprint of the bound biomolecules.

  4. Kerr scattering coefficients via isomonodromy

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Bruno Carneiro da [Departamento de Física, Universidade Federal de Pernambuco,50670-901, Recife, Pernambuco (Brazil); Novaes, Fábio [International Institute of Physics, Federal University of Rio Grande do Norte,Av. Odilon Gomes de Lima 1722, Capim Macio, Natal-RN 59078-400 (Brazil)

    2015-11-23

    We study the scattering of a massless scalar field in a generic Kerr background. Using a particular gauge choice based on the current conservation of the radial equation, we give a generic formula for the scattering coefficient in terms of the composite monodromy parameter σ between the inner and the outer horizons. Using the isomonodromy flow, we calculate σ exactly in terms of the Painlevé V τ-function. We also show that the eigenvalue problem for the angular equation (spheroidal harmonics) can be calculated using the same techniques. We use recent developments relating the Painlevé V τ-function to Liouville irregular conformal blocks to claim that this scattering problem is solved in the combinatorial sense, with known expressions for the τ-function near the critical points.

  5. Development of beta reference radiations

    International Nuclear Information System (INIS)

    Wan Zhaoyong; Cai Shanyu; Li Yanbo; Yin Wei; Feng Jiamin; Sun Yuhua; Li Yongqiang

    1997-09-01

    A system of beta reference radiations has been developed, composed of a 740 MBq Pm-147 beta source, 74 MBq and 740 MBq Sr-90 + Y-90 beta sources, compensation filters, a source handling tool, a source jig, spacing bars, a shutter, a control unit and a beta dose meter calibration stand. For the 740 MBq Pm-147 and 74 MBq Sr-90 + Y-90 beta reference radiations with compensation filters and the 740 MBq Sr-90 + Y-90 beta reference radiation without a compensation filter, at 20 cm, 30 cm and 30 cm distance respectively, the maximum residual energy is 0.14 MeV, 1.98 MeV and 2.18 MeV respectively; the absorbed dose to tissue D(0.07) is 1.547 mGy/h (1996-05-20), 5.037 mGy/h (1996-05-10) and 93.57 mGy/h (1996-05-15) respectively; and the total uncertainty is 3.0%, 1.7% and 1.7% respectively. For the first and second beta reference radiations, the dose rate variability over an area of 18 cm diameter in the plane perpendicular to the beta-ray beam axis is within ±6% and ±3% respectively. (3 refs., 2 tabs., 8 figs.)

  6. A semiconductor beta ray spectrometer

    International Nuclear Information System (INIS)

    Bom, V.R.

    1987-01-01

    Measurement of energy spectra of beta particles emitted from nuclei in beta-decay processes provides information concerning the mass difference of these nuclei between initial and final state. Moreover, experimental beta spectra yield information on the feeding of the levels in the daughter nucleus. Such data are valuable in the construction and checking of the level schemes. This thesis describes the design, construction, testing and usage of a detector for the accurate measurement of the mentioned spectra. In ch. 2 the design and construction of the beta spectrometer, which uses a hyper-pure germanium crystal for energy determination, is described. A simple wire chamber is used to discriminate beta particles from gamma radiation. Disadvantages arise from the large amounts of scattered beta particles deforming the continua. A method is described to minimize the scattering. In ch. 3 some theoretical aspects of data analysis are described and the results of Monte-Carlo simulations of the summation of annihilation radiation are compared with experiments. Ch. 4 comprises the results of the measurements of the beta decay energies of 103-108 In. 87 refs.; 34 figs.; 7 tabs

  7. BETA (Bitter Electromagnet Testing Apparatus)

    Science.gov (United States)

    Bates, Evan M.; Birmingham, William J.; Rivera, William F.; Romero-Talamas, Carlos A.

    2017-10-01

    The Bitter Electromagnet Testing Apparatus (BETA) is a 1-Tesla (T) prototype of the 10-T Adjustable Long Pulse High-Field Apparatus (ALPHA). These water-cooled resistive magnets use high DC currents to produce strong uniform magnetic fields. Presented here is the successful completion of the BETA project and experimental results validating analytical magnet designing methods developed at the Dusty Plasma Laboratory (DPL). BETA's final design specifications will be highlighted which include electromagnetic, thermal and stress analyses. The magnet core design will be explained which include: Bitter Arcs, helix starters, and clamping annuli. The final version of the magnet's vessel and cooling system are also presented, as well as the electrical system of BETA, which is composed of a unique solid-state breaker circuit. Experimental results presented will show the operation of BETA at 1 T. The results are compared to both analytical design methods and finite element analysis calculations. We also explore the steady state maximums and theoretical limits of BETA's design. The completion of BETA validates the design and manufacturing techniques that will be used in the succeeding magnet, ALPHA.

  8. A novel computer-assisted image analysis of [{sup 123}I]{beta}-CIT SPECT images improves the diagnostic accuracy of parkinsonian disorders

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, Georg [Innsbruck Medical University, Department of Medical Statistics, Informatics and Health Economics, Innsbruck (Austria); Seppi, Klaus; Wenning, Gregor K.; Poewe, Werner; Scherfler, Christoph [Innsbruck Medical University, Department of Neurology, Innsbruck (Austria); Donnemiller, Eveline; Warwitz, Boris; Virgolini, Irene [Innsbruck Medical University, Department of Nuclear Medicine, Innsbruck (Austria)

    2011-04-15

    The purpose of this study was to develop an observer-independent algorithm for the correct classification of dopamine transporter SPECT images as Parkinson's disease (PD), multiple system atrophy parkinson variant (MSA-P), progressive supranuclear palsy (PSP) or normal. A total of 60 subjects with clinically probable PD (n = 15), MSA-P (n = 15) and PSP (n = 15), and 15 age-matched healthy volunteers, were studied with the dopamine transporter ligand [{sup 123}I]{beta}-CIT. Parametric images of the specific-to-nondisplaceable equilibrium partition coefficient (BP{sub ND}) were generated. Following a voxel-wise ANOVA, cut-off values were calculated from the voxel values of the resulting six post-hoc t-test maps. The percentages of the volume of an individual BP{sub ND} image remaining below and above the cut-off values were determined. The higher percentage of image volume from all six cut-off matrices was used to classify an individual's image. For validation, the algorithm was compared to a conventional region of interest analysis. The predictive diagnostic accuracy of the algorithm in the correct assignment of a [{sup 123}I]{beta}-CIT SPECT image was 83.3% and increased to 93.3% on merging the MSA-P and PSP groups. In contrast the multinomial logistic regression of mean region of interest values of the caudate, putamen and midbrain revealed a diagnostic accuracy of 71.7%. In contrast to a rater-driven approach, this novel method was superior in classifying [{sup 123}I]{beta}-CIT-SPECT images as one of four diagnostic entities. In combination with the investigator-driven visual assessment of SPECT images, this clinical decision support tool would help to improve the diagnostic yield of [{sup 123}I]{beta}-CIT SPECT in patients presenting with parkinsonism at their initial visit. (orig.)

  9. The transmission of differing energy beta particles through various materials

    International Nuclear Information System (INIS)

    Quayle, D.R.

    1996-04-01

    The transmission of beta particles is frequently calculated in the same fashion as that of gamma rays, where the mass attenuation coefficient is defined by the slope of the exponential function. Numerous authors have used this approximation, including Evans (1955), Loevinger (1952), and Chabot et al. (1988). Recent work by McCarthy et al. (1995) indicated that the exponential function seemed to fit well over a particular region of the transmission curve. Upon further investigation, the author decided to verify McCarthy's results by the use of different absorber materials and attempt to reproduce the experiments. A theoretical method will be used to estimate the transmission of the beta particles through the three absorbers, aluminum, zirconium, and iron. An alternate Monte Carlo code, the Electron Gamma Shower version 4 code (EGS4), will also be used to verify that the experiment is approximating a pencil beam of beta particles. Although these two methods offer a good cross check for the experimental data, they pose a conflict with regard to the type of beam that is to be generated. The experimental lab setup uses a collimated beam of electrons that will impinge upon the absorber, while the codes are written using a pencil beam. A minor discrepancy is expected to be observed in the experimental results and is currently under investigation by McCarthy. The results of this project supported the theory that the beta mass attenuation coefficient was accurately represented by the slope of an exponential function, but only for that particular region of the transmission curve that has a minimal absorber thickness. When the data are fitted beyond 50% of the beta particle range, this theory does not hold true. The theory generated by McCarthy (1995) and the EGS4 Monte Carlo code indicated that the transmission curve for a pencil beam was not accurately represented by an exponential function. The results of this experiment appeared to provide additional support for this assumption
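
    A brief numerical illustration of the approximation under discussion (synthetic numbers, not the thesis data): if transmission through a thin absorber is treated as exponential, the beta "mass attenuation coefficient" is simply the slope of ln(T) against areal density.

        import numpy as np

        # Hypothetical transmission measurements: areal density (g/cm^2) vs transmitted fraction
        areal_density = np.array([0.00, 0.01, 0.02, 0.04, 0.06, 0.08])
        transmission = np.array([1.00, 0.86, 0.74, 0.55, 0.41, 0.30])

        # Exponential model T = exp(-mu_m * x): fit ln(T) = -mu_m * x by least squares
        slope, intercept = np.polyfit(areal_density, np.log(transmission), 1)
        mu_mass = -slope   # apparent beta mass attenuation coefficient (cm^2/g)

        print(f"fitted mass attenuation coefficient: {mu_mass:.1f} cm^2/g")
        # As noted above, the fit is only meaningful over the thin-absorber part of the curve.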

  10. Plasma beta-endorphin levels in obese and non-obese patients with polycystic ovary disease.

    Science.gov (United States)

    Martínez-Guisasola, J; Guerrero, M; Alonso, F; Díaz, F; Cordero, J; Ferrer, J

    2001-02-01

    The aim of this study was to determine the influence of body weight on circulating plasma levels of beta-endorphin and insulin in women with polycystic ovary disease (PCOD), as well as the correlation between the plasma levels of beta-endorphin and insulin. One hundred and sixty-seven consecutive subjects with PCOD were recruited, 117 of whom had normal weight (body mass index (BMI) < 25). A venous blood sample was taken and plasma concentrations of beta-endorphin, insulin, gonadotropins, prolactin, progesterone, 17 beta-estradiol, estrone, androgens, dehydroepiandrosterone sulfate and sex hormone-binding globulin (SHBG) were measured. Mean beta-endorphin and insulin plasma levels were significantly higher in obese PCOD women than in non-obese ones. Correlation analysis showed a positive association between insulin and beta-endorphin, between beta-endorphin and BMI (and weight), and between insulin and BMI (and weight), and a negative correlation was found between insulin and SHBG. A weak association was found between beta-endorphin and luteinizing hormone (LH) in peripheral plasma. Stratified and linear regression analysis showed that plasma beta-endorphin concentrations correlate more with BMI than with insulinemia.

  11. Experiments on double beta decay

    Energy Technology Data Exchange (ETDEWEB)

    Busto, J [Neuchatel Univ. (Switzerland). Inst. de Physique

    1996-11-01

    Double beta decay, and especially the ({beta}{beta}){sub 0{nu}} mode, is an excellent test of the Standard Model as well as of neutrino physics. From the experimental point of view, a very large number of different techniques are or have been used, increasing the sensitivity of these experiments considerably (by a factor of 10{sup 4} in the last 20 years). In the future, in spite of several difficulties, the sensitivity will be increased further, maintaining the interest of this very important process. (author) 4 figs., 5 tabs., 21 refs.

  12. The effect of postoperative medical treatment on left ventricular mass regression after aortic valve replacement.

    Science.gov (United States)

    Helder, Meghana R K; Ugur, Murat; Bavaria, Joseph E; Kshettry, Vibhu R; Groh, Mark A; Petracek, Michael R; Jones, Kent W; Suri, Rakesh M; Schaff, Hartzell V

    2015-03-01

    The study objective was to analyze factors associated with left ventricular mass regression in patients undergoing aortic valve replacement with a newer bioprosthesis, the Trifecta valve pericardial bioprosthesis (St Jude Medical Inc, St Paul, Minn). A total of 444 patients underwent aortic valve replacement with the Trifecta bioprosthesis from 2007 to 2009 at 6 US institutions. The clinical and echocardiographic data of 200 of these patients who had left ventricular hypertrophy and follow-up studies 1 year postoperatively were reviewed and compared to analyze factors affecting left ventricular mass regression. Mean (standard deviation) age of the 200 study patients was 73 (9) years, 66% were men, and 92% had pure or predominant aortic valve stenosis. Complete left ventricular mass regression was observed in 102 patients (51%) by 1 year postoperatively. In univariate analysis, male sex, implantation of larger valves, larger left ventricular end-diastolic volume, and beta-blocker or calcium-channel blocker treatment at dismissal were significantly associated with complete mass regression. In the multivariate model, odds ratios (95% confidence intervals) indicated that male sex (3.38 [1.39-8.26]) and beta-blocker or calcium-channel blocker treatment at dismissal (3.41 [1.40-8.34]) were associated with increased probability of complete left ventricular mass regression. Patients with higher preoperative systolic blood pressure were less likely to have complete left ventricular mass regression (0.98 [0.97-0.99]). Among patients with left ventricular hypertrophy, postoperative treatment with beta-blockers or calcium-channel blockers may enhance mass regression. This highlights the need for close medical follow-up after operation. Labeled valve size was not predictive of left ventricular mass regression. Copyright © 2015 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  13. Preventive Effects of Beta-Hydroxy-Beta-Methyl Butyrate

    OpenAIRE

    N. Ravanbakhsh; N. Torabi; M. Foadoddini

    2016-01-01

    Aims: One of the major factors in sudden cardiac arrest is the initiation and continuation of deadly arrhythmias during ischemia. It is known that beta-hydroxy-beta-methylbutyrate (HMB) has useful effects such as anti-inflammatory and anti-apoptosis effects in the skeletal muscles. The aim of this study was to investigate the preventive effects of HMB on the ventricular arrhythmias due to the ischemia. Materials & Methods: In the experimental study, 30 Wistar male rats were randomly div...

  14. Dosimetry of {beta} extensive sources; Dosimetria de fuentes {beta} extensas

    Energy Technology Data Exchange (ETDEWEB)

    Rojas C, E.L.; Lallena R, A.M. [Departamento de Fisica Moderna, Universidad de Granada, E-18071 Granada (Spain)

    2002-07-01

    In this work we have studied, making use of the PENELOPE Monte Carlo simulation code, the dosimetry of extended {beta} sources in spherical geometries including interfaces. These configurations are of interest in the treatment of so-called craniopharyngiomas, of some synovial lesions of the knee, and in other problems of interest in medical physics. The application can therefore be extended to problems in other areas with similar geometries and beta sources. (Author)

  15. Non-Markovian dynamics of quantum systems: formalism, transport coefficients

    International Nuclear Information System (INIS)

    Kanokov, Z.; Palchikov, Yu.V.; Antonenko, N.V.; Adamian, G.G.; Kanokov, Z.; Adamian, G.G.; Scheid, W.

    2004-01-01

    Full text: The generalized Lindblad equations with non-stationary transport coefficients are derived from the Langevin equations for the case of nonlinear non-Markovian noise [1]. The equations of motion for the collective coordinates are consistent with the generalized quantum fluctuation-dissipation relations. The microscopic justification of the Lindblad axiomatic approach is performed. Explicit expressions for the time-dependent transport coefficients are presented for the case of FC- and RWA-oscillators and a general linear coupling in coordinate and in momentum between the collective subsystem and the heat bath. The explicit equations for the correlation functions show that Onsager's regression hypothesis does not hold exactly for the non-Markovian equations of motion. However, under some conditions the regression of fluctuations goes to zero in the same manner as the average values. In the low- and high-temperature regimes we found that the dissipation leads to long-time tails in the correlation functions of the RWA-oscillator. In the case of the FC-oscillator, a non-exponential, power-like decay of the correlation function in coordinate is obtained only in the low-temperature limit. The calculated results depend rather weakly on the memory time in many applications. The transient times found for the diffusion coefficients D_pp(t), D_qp(t) and D_qq(t) are quite short. The classical diffusion coefficient in momentum underestimates the asymptotic value of the quantum one, D_pp(t), but the asymptotic values of the classical and quantum second moments σ_qq are close due to the negativity of the quantum mixed diffusion coefficient D_qp(t)

  16. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....
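
    As a minimal illustration of the logistic-regression part of such a credit-scoring exercise (synthetic applicants and hypothetical feature names, not the thesis data), in Python:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        n = 1000
        df = pd.DataFrame({
            "income": rng.normal(50, 15, n),
            "debt_ratio": rng.uniform(0, 1, n),
            "age": rng.integers(21, 70, n),
        })
        # Synthetic default indicator: higher debt ratio and lower income raise default risk
        logit = -2 + 3 * df["debt_ratio"] - 0.02 * df["income"]
        df["default"] = rng.random(n) < 1 / (1 + np.exp(-logit))

        X_train, X_test, y_train, y_test = train_test_split(
            df[["income", "debt_ratio", "age"]], df["default"], random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("coefficients:", dict(zip(X_train.columns, model.coef_[0])))
        print("test accuracy:", model.score(X_test, y_test))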

  17. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework, quantile regression has typically been carried out by exploiting the Asymmetric Laplace Distribution as a working likelihood. Although this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure beyond point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend it to the case of discrete responses, where there is no one-to-one relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.
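
    For orientation only, classical quantile regression based on the asymmetric-Laplace / pinball loss (the approach the abstract contrasts with, not the proposed R-INLA model) can be fitted in Python with statsmodels:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 500
        x = rng.uniform(0, 10, n)
        y = 1.0 + 0.5 * x + rng.normal(0, 0.5 + 0.1 * x, n)   # heteroscedastic noise
        df = pd.DataFrame({"x": x, "y": y})

        for q in (0.1, 0.5, 0.9):
            fit = smf.quantreg("y ~ x", df).fit(q=q)
            print(f"quantile {q}: intercept = {fit.params['Intercept']:.2f}, "
                  f"slope = {fit.params['x']:.2f}")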

  18. Double Beta Decay

    International Nuclear Information System (INIS)

    Fiorini, Ettore

    2008-01-01

    The importance of neutrinoless Double Beta Decay (DBD) is stressed in view of the recent results of experiments on neutrino oscillations, which indicate that the difference between the squared masses of two neutrinos of different flavours is finite [For a recent review including neutrino properties and recent results see: Review of Particle Physics, J. of Phys. G: Nuclear and Particle Physics 33, 1]. As a consequence, the mass of at least one neutrino has to be different from zero, and it becomes imperative to determine its absolute value. The various experimental techniques to search for DBD are discussed together with the difficult problems of the evaluation of the corresponding nuclear matrix elements. The upper limits on the neutrino mass coming from the results of the various experiments are reported, together with the indication of a non-zero value by one of them, not confirmed so far. The two presently running experiments on neutrinoless DBD are briefly described, together with the already approved or designed second-generation searches aiming to reach the values of the absolute neutrino mass indicated by the results on neutrino oscillations

  19. Near zero coefficient of thermal expansion of beta-eucryptite without microcracking

    Science.gov (United States)

    Reimanis, Ivar; Ramalingam, Subramanian

    2015-06-16

    The present invention is drawn to a lithia alumina silica material that exhibits a low CTE over a broad temperature range and a method of making the same. The low CTE of the material allows for a decrease in microcracking within the material.

  20. Kinetics and thermodynamics of hydrogen absorption and release in {beta}-titanium alloys; Kinetik und Thermodynamik der Wasserstoffaufnahme und -abgabe von {beta}-Titanlegierungen

    Energy Technology Data Exchange (ETDEWEB)

    Decker, M.; Christ, H.J. [Siegen Univ. (Gesamthochschule) (Germany). Inst. fuer Werkstofftechnik

    1998-12-31

    The work reported was intended to yield results allowing to describe as completely as possible the processes of interaction of {beta}-titanium alloys and hydrogen. Three alloys have been selected for the experiments which differ suitably in the stability of the {beta} phase. The characterisation of the hydrogen/metal interactions is primarily based on gravimetric measurements. A method was found to determine the diffusion coefficient of hydrogen, which is a significant variable for the quantitative characterisation of the hydrogen absorption rate. (orig./CB)

  1. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, based on two different norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  2. Factorization of Transport Coefficients in Macroporous Media

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Stenby, Erling Halfdan

    2000-01-01

    We prove the fundamental theorem about factorization of the phenomenological coefficients for transport in macroporous media. By factorization we mean the representation of the transport coefficients as products of geometric parameters of the porous medium and the parameters characteristic...

  3. Development of a portable triple silicon detector telescope for beta spectroscopy and skin dosimetry

    International Nuclear Information System (INIS)

    Helt-Hansen, J.

    2000-11-01

    It is now recognized that beta radiation can be a significant radiation problem for exposure of the skin. There is thus a need for a portable and rugged active beta dosemeter-spectrometer to carry out immediate measurements of doses and energies of beta particles even in the presence of photon radiation. The main objective of this report is to describe the development of such an instrument. A beta-spectrometer has been developed consisting of three silicon surface barrier detectors with the thicknesses 50μm/150μm/7000μm, covered by a 2 μm thick titanium window. The spectrometer is capable of measuring electron energies from 50 keV to 3.5 MeV. The spectrometer is characterized by a compact, low-weight design, achieved by digital signal processing beginning at an early stage in the signal chain. 255 channels are available for each of the three detectors. The spectrometer is controlled by a laptop computer, which also handles all subsequent data analysis. By use of coincidence/anti-coincidence considerations of the absorbed energy in the three detector elements, counts caused by electrons are separated from those originating from photons. The electron energy distribution is multiplied by a set of conversion coefficients to obtain the dose at 0.07 mm tissue depth. Monte Carlo calculations have been used to derive the conversion coefficients and to investigate the influence of noise and the design of the detector assembly on the performance of the spectrometer. This report describes the development of the spectrometer and its mode of operation, followed by a description of the Monte Carlo calculations carried out to obtain the conversion coefficients. Finally, the capability of the telescope spectrometer to measure beta and photon spectra as well as beta dose rates in pure beta and mixed beta/photon radiation fields is described. (au)

  4. Development of a portable triple silicon detector telescope for beta spectroscopy and skin dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Helt-Hansen, J

    2000-11-01

    It is now recognized that beta radiation can be a significant radiation problem for exposure of the skin. There is thus a need for a portable and rugged active beta dosemeter-spectrometer to carry out immediate measurements of doses and energies of beta particles even in the presence of photon radiation. The main objective of this report is to describe the development of such an instrument. A beta-spectrometer has been developed consisting of three silicon surface barrier detectors with the thicknesses 50{mu}m/150{mu}m/7000{mu}m, covered by a 2 {mu}m thick titanium window. The spectrometer is capable of measuring electron energies from 50 keV to 3.5 MeV. The spectrometer is characterized by a compact, low-weight design, achieved by digital signal processing beginning at an early stage in the signal chain. 255 channels are available for each of the three detectors. The spectrometer is controlled by a laptop computer, which also handles all subsequent data analysis. By use of coincidence/anti-coincidence considerations of the absorbed energy in the three detector elements, counts caused by electrons are separated from those originating from photons. The electron energy distribution is multiplied by a set of conversion coefficients to obtain the dose at 0.07 mm tissue depth. Monte Carlo calculations have been used to derive the conversion coefficients and to investigate the influence of noise and the design of the detector assembly on the performance of the spectrometer. This report describes the development of the spectrometer and its mode of operation, followed by a description of the Monte Carlo calculations carried out to obtain the conversion coefficients. Finally, the capability of the telescope spectrometer to measure beta and photon spectra as well as beta dose rates in pure beta and mixed beta/photon radiation fields is described. (au)

  5. (beta-HCG) in

    African Journals Online (AJOL)

    raoul

    Urothelial tumour samples were obtained from all the 86 patients requiring surgical ..... and/or urine beta HCG appears to be an efficient diagnostic marker for the ..... collected all urothelial tumour specimens for storage, cutting and staining.

  6. Beta-glucans and cholesterol

    Czech Academy of Sciences Publication Activity Database

    Šíma, Petr; Vannucci, Luca; Větvička, V.

    2017-01-01

    Roč. 41, č. 4 (2017), s. 1799-1808 ISSN 1107-3756 Institutional support: RVO:61388971 Keywords : cholesterol * beta-glucans * diet Subject RIV: EE - Microbiology, Virology OBOR OECD: Microbiology Impact factor: 2.341, year: 2016

  7. Radioisotope indicator, type BETA 2

    International Nuclear Information System (INIS)

    Duszanski, M.; Pankow, A.; Skwarczynski, B.

    1975-01-01

    The authors describe a radioisotope indicator, type BETA 2, constructed in the ZKMPW Works to be employed in mines for counting, checking, signalling the presence and positioning of cars, as well as monitoring the state of some other equipment. (author)

  8. Anomalous Seebeck coefficient in boron carbides

    International Nuclear Information System (INIS)

    Aselage, T.L.; Emin, D.; Wood, C.; Mackinnon, I.D.R.; Howard, I.A.

    1987-01-01

    Boron carbides exhibit an anomalously large Seebeck coefficient with a temperature coefficient that is characteristic of polaronic hopping between inequivalent sites. The inequivalence in the sites is associated with disorder in the solid. The temperature dependence of the Seebeck coefficient for materials prepared by different techniques provides insight into the nature of the disorder

  9. Soccer Ball Lift Coefficients via Trajectory Analysis

    Science.gov (United States)

    Goff, John Eric; Carre, Matt J.

    2010-01-01

    We performed experiments in which a soccer ball was launched from a machine while two high-speed cameras recorded portions of the trajectory. Using the trajectory data and published drag coefficients, we extracted lift coefficients for a soccer ball. We determined lift coefficients for a wide range of spin parameters, including several spin…

  10. Symmetry chains and adaptation coefficients

    International Nuclear Information System (INIS)

    Fritzer, H.P.; Gruber, B.

    1985-01-01

    Given a symmetry chain of physical significance it becomes necessary to obtain states which transform properly with respect to the symmetries of the chain. In this article we describe a method which permits us to calculate symmetry-adapted quantum states with relative ease. The coefficients for the symmetry-adapted linear combinations are obtained, in numerical form, in terms of the original states of the system and can thus be represented in the form of numerical tables. In addition, one also obtains automatically the matrix elements for the operators of the symmetry groups which are involved, and thus for any physical operator which can be expressed either as an element of the algebra or of the enveloping algebra. The method is well suited for computers once the physically relevant symmetry chain, or chains, have been defined. While the method to be described is generally applicable to any physical system for which semisimple Lie algebras play a role we choose here a familiar example in order to illustrate the method and to illuminate its simplicity. We choose the nuclear shell model for the case of two nucleons with orbital angular momentum l = 1. While the states of the entire shell transform like the smallest spin representation of SO(25) we restrict our attention to its subgroup SU(6) x SU(2)/sub T/. We determine the symmetry chains which lead to total angular momentum SU(2)/sub J/ and obtain the symmetry-adapted states for these chains

  11. Confirmation of selected milk and meat radionuclide-transfer coefficients

    International Nuclear Information System (INIS)

    Ward, G.M.; Johnson, J.E.

    1983-01-01

    The elements selected for study of their transfer coefficients to eggs, poultry meat, milk and beef were Mo, Tc, Te, and Ba. The radionuclides used in the study were the gamma-emitting radionuclides 99Mo, 123mTe and 133Ba. 133Ba was selected because 140Ba-140La is produced infrequently and availability was uncertain. 133Ba has a great advantage for our type of experiment because of its longer physical half-life. 99Tc is a pure beta-emitter and was used in the first three animal experiments because we could not obtain the gamma-emitting 95mTc. A supply of this nuclide was recently obtained, however, for the second cow experiment

  12. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. Principal component regression with SPSS provides a simplified, faster, and accurate statistical analysis.
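
    The general idea of principal component regression (regressing the response on a few leading principal components of collinear predictors) can be sketched outside SPSS as well; the Python example below on synthetic collinear data only illustrates the principle, not the SPSS 10.0 procedure described above.

```python
# Principal component regression (PCR) sketch: standardize, extract leading
# principal components, then regress the response on them.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 200
z = rng.normal(size=n)
# Three highly collinear predictors sharing the same latent factor.
X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(3)])
y = 2.0 * z + rng.normal(size=n)

pcr = make_pipeline(StandardScaler(), PCA(n_components=1), LinearRegression())
pcr.fit(X, y)
print("R^2 using the first principal component:", pcr.score(X, y))
```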

  13. The intermediate endpoint effect in logistic and probit regression

    Science.gov (United States)

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on the difference in coefficients can lead to distorted estimates of the intermediate endpoint effect.
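
    The product-of-coefficients approach recommended above can be sketched for a continuous intermediate variable and a binary final outcome as follows; the Python example uses synthetic data with hypothetical effect sizes and omits the coefficient standardization issues discussed in the article.

```python
# Product-of-coefficients sketch for an intermediate (mediating) endpoint with
# a binary final outcome; data and effect sizes are synthetic and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
x = rng.binomial(1, 0.5, n)                    # treatment
m = 0.8 * x + rng.normal(size=n)               # intermediate endpoint
eta = -0.5 + 1.2 * m + 0.2 * x
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))    # binary final outcome

# Path a: treatment -> intermediate endpoint (linear model).
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
# Path b: intermediate endpoint -> outcome, adjusting for treatment (logistic model).
Xb = sm.add_constant(np.column_stack([x, m]))
b = sm.Logit(y, Xb).fit(disp=0).params[2]
print("product-of-coefficients estimate (a*b):", a * b)
```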

  14. Energy coefficients for a propeller series

    DEFF Research Database (Denmark)

    Olsen, Anders Smærup

    2004-01-01

    The efficiency for a propeller is calculated by energy coefficients. These coefficients are related to four types of losses, i.e. the axial, the rotational, the frictional, and the finite blade number loss, and one gain, i.e. the axial gain. The energy coefficients are derived by use of the potential theory with the propeller modelled as an actuator disk. The efficiency based on the energy coefficients is calculated for a propeller series. The results show a good agreement between the efficiency based on the energy coefficients and the efficiency obtained by a vortex-lattice method.

  15. Tables of double beta decay data

    Energy Technology Data Exchange (ETDEWEB)

    Tretyak, V.I. [AN Ukrainskoj SSR, Kiev (Ukraine)]|[Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires; Zdesenko, Y.G. [AN Ukrainskoj SSR, Kiev (Ukraine)

    1995-12-31

    A compilation of experimental data on double beta decay is presented. The tables contain the most stringent known experimental limits or positive results of 2{beta} transitions of 69 natural nuclides to ground and excited states of daughter nuclei for different channels (2{beta}{sup -}; 2{beta}{sup +}; {epsilon}{beta}{sup +}; 2{epsilon}) and modes (0{nu}; 2{nu}; 0{nu}M) of decay. (authors). 189 refs., 9 figs., 3 tabs.

  16. Beta Instability and Stochastic Market Weights

    OpenAIRE

    David H. Goldenberg

    1985-01-01

    An argument is given for individual firm beta instability based upon the stochastic character of the market weights defining the market portfolio and the constancy of its beta. This argument is generalized to market weighted portfolios and the form of the stochastic process generating betas is linked to that of the market return process. The implications of this analysis for adequacy of models of beta nonstationarity and estimation of betas are considered in light of the available empirical e...

  17. PENGARUH PENGUNGKAPAN CORPORATE SOCIAL RESPONSIBILITY TERHADAP EARNING RESPONSE COEFFICIENT

    Directory of Open Access Journals (Sweden)

    MI Mitha Dwi Restuti

    2012-03-01

    Full Text Available The purpose of this study is to determine the negative effect of Corporate Social Responsibility (CSR) disclosure on the Earnings Response Coefficient (ERC). Multiple regression was used to analyze the data. The sample consists of 150 companies listed on the Indonesia Stock Exchange in 2010. Based on the research, the disclosure of Corporate Social Responsibility did not influence the Earnings Response Coefficient (ERC). It can be said that investors do not yet pay attention to the social information disclosed in the company's annual report as information that could affect their investment decisions. Investors still consider profit information more useful than the social information disclosed by the company in assessing firm value and the stock returns they expect.

  18. Methodology update for determination of the erosion coefficient(Z

    Directory of Open Access Journals (Sweden)

    Tošić Radislav

    2012-01-01

    Full Text Available The research on and mapping of the intensity of mechanical water erosion, begun with the empirical methodology of S. Gavrilović in the mid-twentieth century, has continued with varying intensity until the present time. Many decades of work on these issues pointed to some shortcomings of the existing methodology, and thus to the need for its innovation. In this sense, R. Lazarević made certain adjustments to the empirical methodology of S. Gavrilović by changing the tables for determination of the coefficients Φ, X and Y, that is, the tables for determining the mean erosion coefficient (Z). The main objective of this paper is to update the existing methodology for determining the erosion coefficient (Z), based on the empirical methodology of S. Gavrilović and the amendments made by R. Lazarević (1985), and to adjust it better to information technologies and the needs of modern society. The proposed procedure, that is, the model to determine the erosion coefficient (Z) in this paper, is the result of ten years of scientific research and project work in mapping the intensity of mechanical water erosion and its modeling using various erosion models in the Republic of Srpska and Serbia. By analyzing the correlation between results obtained by regression models and results obtained during the mapping of erosion on the territory of the Republic of Srpska, a high degree of correlation (R² = 0.9963) was established, which essentially indicates a good performance of the proposed model.

  19. Marginal regression analysis of recurrent events with coarsened censoring times.

    Science.gov (United States)

    Hu, X Joan; Rosychuk, Rhonda J

    2016-12-01

    Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information of each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate time-dependent effect of exposures, we adapt the local linear estimation with right censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.

  20. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  1. Bayesian median regression for temporal gene expression data

    Science.gov (United States)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.

  2. Two SPSS programs for interpreting multiple regression results.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo

    2010-02-01

    When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.

  3. Correlation of Cadmium Distribution Coefficients to Soil Characteristics

    DEFF Research Database (Denmark)

    Holm, Peter Engelund; Rootzen, Helle; Borggaard, Ole K.

    2003-01-01

    ... on whole soil samples have shown that pH is the main parameter controlling the distribution. To identify further the components that are important for Cd binding in soil we measured Cd distribution coefficients (K-d) at two fixed pH values and at low Cd loadings for 49 soils sampled in Denmark. The K-d values for Cd ranged from 5 to 3000 L kg(-1). The soils were described pedologically and characterized in detail (22 parameters), including determination of the contents of the various minerals in the clay fraction. Correlating parameters were grouped, and step-wise regression analysis revealed which parameters (interlayered clay minerals [HIM], chlorite, quartz, microcline, plagioclase) were significant in explaining the Cd distribution coefficient....

  4. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation...

  5. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  6. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime

  7. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal.
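
    A stabilized inverse probability weighting analysis of the kind compared above can be sketched as follows; the Python example uses synthetic data with a single confounder and does not reproduce the simulation design of the study (standard errors would additionally require a robust/sandwich variance estimator).

```python
# Stabilized inverse probability weighting (IPW) sketch for confounding
# adjustment with a binary exposure and binary outcome (synthetic data only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
c = rng.normal(size=n)                                    # confounder
t = rng.binomial(1, 1 / (1 + np.exp(-0.5 * c)))           # exposure
p_out = 1 / (1 + np.exp(-(-1.0 + 0.7 * t + 0.8 * c)))
y = rng.binomial(1, p_out)                                # outcome

# Propensity score model and stabilized weights.
ps = sm.Logit(t, sm.add_constant(c)).fit(disp=0).predict(sm.add_constant(c))
sw = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))

# Weighted outcome model gives the marginal exposure effect (log odds ratio).
wfit = sm.GLM(y, sm.add_constant(t), family=sm.families.Binomial(),
              freq_weights=sw).fit()
print("IPW log odds ratio:", wfit.params[1])
```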

  8. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  9. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  10. In-trap decay spectroscopy for {beta}{beta} decays

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, Thomas

    2011-01-18

    The presented work describes the implementation of a new technique to measure electron-capture (EC) branching ratios (BRs) of intermediate nuclei in {beta}{beta} decays. This technique has been developed at TRIUMF in Vancouver, Canada. It makes use of one of TRIUMF's Ion Traps for Atomic and Nuclear science (TITAN), the Electron Beam Ion Trap (EBIT), which is used as a spectroscopy Penning trap. Radioactive ions, produced at the radioactive isotope facility ISAC, are injected and stored in the spectroscopy Penning trap while their decays are observed. A key feature of this technique is the use of a strong magnetic field, required for trapping. It radially confines electrons from {beta} decays along the trap axis, while X-rays following an EC are emitted isotropically. This provides spatial separation of X-ray and {beta} detection with almost no {beta}-induced background at the X-ray detector, allowing weak EC branches to be measured. Furthermore, the combination of several traps allows one to isobarically clean the sample prior to the in-trap decay spectroscopy measurement. This technique has been developed to measure ECBRs of transition nuclei in {beta}{beta} decays. Detailed knowledge of these electron capture branches is crucial for a better understanding of the underlying nuclear physics in {beta}{beta} decays. These branches are typically of the order of 10{sup -5} and therefore difficult to measure. Conventional measurements suffer from isobaric contamination and a dominating {beta} background at the X-ray detector. Additionally, X-rays are attenuated by the material where the radioactive sample is implanted. To overcome these limitations, the technique of in-trap decay spectroscopy has been developed. In this work, the EBIT was connected to the TITAN beam line and has been commissioned. Using the developed beam diagnostics, ions were injected into the Penning trap and systematic studies on injection and storage optimization were performed. Furthermore, Ge

  11. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  12. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    Science.gov (United States)

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
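
    The component-wise boosting of structured additive quantile regression described above is typically carried out in R; a rough Python analogue of the underlying quantile-loss idea (without the structured additive terms or individual-specific effects) is gradient boosting with a quantile loss, sketched below on synthetic data.

```python
# Rough analogue of boosting for quantile regression using scikit-learn's
# quantile loss; this only illustrates the quantile-loss idea, not the
# structured additive model of the record above. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 1000
age = rng.uniform(0, 10, n)
# Synthetic "BMI-like" response whose spread grows with age.
bmi = 15 + 0.4 * age + rng.normal(0, 0.5 + 0.2 * age, n)
X = age.reshape(-1, 1)

# Model the 90th percentile of the response as a function of age.
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9, n_estimators=200)
q90.fit(X, bmi)
print("predicted 90th percentile at ages 1, 5, 9:",
      q90.predict(np.array([[1.0], [5.0], [9.0]])))
```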

  13. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC, granted to clients residing in the Distrito Federal (DF, to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower’s geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters, with it being concluded that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results, in terms of predictive power and financial losses for the institution, and the study demonstrated viability in using the GWLR technique to develop credit scoring models for the target population in the study.

  14. Intermediate and advanced topics in multilevel logistic regression analysis.

    Science.gov (United States)

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
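
    For a random-intercept multilevel logistic model, the variance partition coefficient and the median odds ratio mentioned above follow from standard formulas based on the between-cluster variance on the logit scale; the short sketch below simply evaluates them for an assumed (hypothetical) variance.

```python
# Variance partition coefficient (VPC) and median odds ratio (MOR) for a
# random-intercept logistic model, evaluated for an assumed between-cluster
# variance on the logit scale (hypothetical value).
import math
from scipy.stats import norm

var_u = 0.35                                       # hypothetical between-cluster variance
vpc = var_u / (var_u + math.pi ** 2 / 3)           # latent-variable formulation
mor = math.exp(math.sqrt(2 * var_u) * norm.ppf(0.75))
print(f"VPC = {vpc:.3f}, MOR = {mor:.3f}")
```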

  15. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  16. Spontaneous regression of a congenital melanocytic nevus

    Directory of Open Access Journals (Sweden)

    Amiya Kumar Nath

    2011-01-01

    Full Text Available Congenital melanocytic nevus (CMN may rarely regress which may also be associated with a halo or vitiligo. We describe a 10-year-old girl who presented with CMN on the left leg since birth, which recently started to regress spontaneously with associated depigmentation in the lesion and at a distant site. Dermoscopy performed at different sites of the regressing lesion demonstrated loss of epidermal pigments first followed by loss of dermal pigments. Histopathology and Masson-Fontana stain demonstrated lymphocytic infiltration and loss of pigment production in the regressing area. Immunohistochemistry staining (S100 and HMB-45, however, showed that nevus cells were present in the regressing areas.

  17. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
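
    The underlying matrix algebra is generic: the difference between two fitted values is a linear contrast of the coefficients, and its standard error follows from the coefficient covariance matrix. The Python sketch below illustrates this for an ordinary least squares fit with a quadratic term; it is not a translation of the SPSS macros themselves.

```python
# Comparing two fitted values from an OLS model: the difference is a linear
# contrast c'beta with c = x1 - x2; its SE comes from the coefficient
# covariance matrix. Synthetic data, for illustration only.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 0.5 * x + 0.05 * x ** 2 + rng.normal(size=n)
X = sm.add_constant(np.column_stack([x, x ** 2]))
fit = sm.OLS(y, X).fit()

# Two covariate patterns of interest: (intercept, x, x^2).
x1 = np.array([1.0, 8.0, 64.0])
x2 = np.array([1.0, 2.0, 4.0])
c = x1 - x2
diff = c @ fit.params
se = np.sqrt(c @ fit.cov_params() @ c)
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, fit.df_resid) * se
print(f"difference = {diff:.3f}, SE = {se:.3f}, "
      f"95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```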

  18. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    Energy Technology Data Exchange (ETDEWEB)

    B. Bullard

    1999-05-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4.

  19. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    International Nuclear Information System (INIS)

    B. Bullard

    1999-01-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4
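
    A generic curve-fitting step of this kind can be sketched as follows; the functional form (a power law in volumetric water content) and all data values in the Python example below are hypothetical and are not taken from the TSPA report.

```python
# Generic sketch of fitting a diffusion coefficient D as a function of
# volumetric water content theta; power-law form and data are hypothetical.
import numpy as np

theta = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])    # hypothetical water contents
D = np.array([1e-12, 8e-12, 3e-11, 7e-11, 3e-10, 8e-10])  # hypothetical D values (m^2/s)

# Fit log10(D) = log10(a) + b*log10(theta), i.e. the power law D = a * theta**b.
b, log_a = np.polyfit(np.log10(theta), np.log10(D), 1)
a = 10 ** log_a
print(f"fitted D(theta) ~ {a:.3e} * theta^{b:.2f}")
```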

  20. A drying coefficient for building materials

    DEFF Research Database (Denmark)

    Scheffler, Gregor Albrecht; Plagge, Rudolf

    2009-01-01

    The drying experiment is an important element of the hygrothermal characterisation of building materials. Contrary to other moisture transport experiments such as the vapour diffusion and the water absorption test, it has until now not been possible to derive a simple coefficient for drying. However, in many cases such a coefficient would be highly appreciated, e.g. in the interaction of industry and research or for the distinction and selection of suitable building materials throughout design and practice. This article first highlights the importance of drying experiments for hygrothermal characterisation. A drying coefficient is defined which can be determined based on measured drying data. The correlation of this coefficient with the water absorption and the vapour diffusion coefficient is analyzed and its additional information content is critically challenged. As a result, a drying coefficient has been derived...

  1. Beta-lactamase detection in Staphylococcus aureus and coagulase-negative Staphylococcus isolated from bovine mastitis

    Directory of Open Access Journals (Sweden)

    Bruno F. Robles

    2014-04-01

    Full Text Available The objectives of the study were to evaluate the presence/production of beta-lactamases by both phenotypic and genotypic methods, verify whether results are dependent of bacteria type (Staphylococcus aureus versus coagulase-negative Staphylococcus - CNS and verify the agreement between tests. A total of 200 bacteria samples from 21 different herds were enrolled, being 100 CNS and 100 S. aureus. Beta-lactamase presence/detection was performed by different tests (PCR, clover leaf test - CLT, Nitrocefin disk, and in vitro resistance to penicillin. Results of all tests were not dependent of bacteria type (CNS or S. aureus. Several S. aureus beta-lactamase producing isolates were from the same herd. Phenotypic tests excluding in vitro resistance to penicillin showed a strong association measured by the kappa coefficient for both bacteria species. Nitrocefin and CLT are more reliable tests for detecting beta-lactamase production in staphylococci.

  2. Smart Beta or Smart Alpha

    DEFF Research Database (Denmark)

    Winther, Kenneth Lillelund; Steenstrup, Søren Resen

    2016-01-01

    Smart beta has become the flavor of the decade in the investment world with its low fees, easy access to rewarded risk premiums, and appearance of providing good investment results relative to both traditional passive benchmarks and actively managed funds. Although we consider it well documented that smart beta investing probably will do better than passive market capitalization investing over time, we believe many are coming to a conclusion too quickly regarding active managers. Institutional investors are able to guide managers through benchmarks and risk frameworks toward the same well-documented smart beta risk premiums and still motivate active managers to avoid value traps, too highly priced small caps, defensives, etc. By constructing the equity portfolios of active managers that resemble the most widely used risk premiums, we show that the returns and risk-adjusted returns measures...

  3. Apparatus for measurement of coefficient of friction

    Science.gov (United States)

    Slifka, A. J.; Siegwarth, J. D.; Sparks, L. L.; Chaudhuri, Dilip K.

    1990-01-01

    An apparatus designed to measure the coefficient of friction in certain controlled atmospheres is described. The coefficient of friction observed during high-load tests was nearly constant, with an average value of 0.56. This value is in general agreement with that found in the literature and also with the initial friction coefficient value of 0.67 measured during self-mated friction of 440C steel in an oxygen environment.

  4. New definition of the cell diffusion coefficient

    International Nuclear Information System (INIS)

    Koehler, P.

    1975-01-01

    As was shown in a recent work by Gelbard, the usually applied Benoist definition of the cell diffusion coefficient gives two different values if two different definitions of the cell are made. A new definition is proposed that preserves the neutron balance for the homogenized lattice and that is independent of the cell definition. The resulting diffusion coefficient is identical with the main term of Benoist's diffusion coefficient

  5. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
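
    As a simple cross-check of a candidate sample size, power for simple logistic regression can also be estimated by simulation; the Python sketch below does this for hypothetical effect sizes and does not implement the normal-transformation formulas proposed in the record above.

```python
# Simulation-based power check for simple logistic regression at a candidate
# sample size; effect sizes (beta0, beta1) are hypothetical.
import numpy as np
import statsmodels.api as sm

def power_simple_logistic(n, beta0=-1.0, beta1=0.4, n_sim=500, alpha=0.05, seed=7):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        if fit.pvalues[1] < alpha:
            rejections += 1
    return rejections / n_sim

print("estimated power at n=200:", power_simple_logistic(200))
```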

  6. Realisation of a solenoidal {beta} spectrometer and a double {beta} coincidence spectrometer; Realisation d'un spectrometre {beta} solenoidal et d'un double spectrometre {beta} a coincidence

    Energy Technology Data Exchange (ETDEWEB)

    Moreau, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1955-06-15

    The two spectrometers have been built to tackle numerous problems of nuclear spectrometry. They possess different and complementary fields of application. The solenoidal spectrometer permits the determination of the energy limits of {beta} spectra and of their shape; it also permits the determination of internal conversion coefficients and of the ratios {alpha}{sub K}/{alpha}{sub L}, and it is especially efficient for the accurate determination of {gamma}-ray energies by the photoelectric effect. The double coincidence spectrometer has been designed to obtain a good coincidence efficiency: indeed, the sum of the solid angles used for the {beta} and {gamma} emission is only slightly lower than 4{pi} steradians. To obtain this efficiency, the resolution had to be sacrificed somewhat; it is lower than that obtained with the solenoidal spectrometer for the same luminosity. Each of the elements of the double spectrometer can also be adapted to the study of {beta}{gamma} and e{sup -}{gamma} angular correlations. In this use, it is superior to the thin magnetic lens used up to now. The double spectrometer also permits the study of e{sup -}e{sup -} and e{sup -}{beta} coincidences in a way equivalent to a double lens; adaptations for the study of e{sup -}e{sup -} and e{sup -}{beta} angular correlations can also be considered. Finally, we applied the methods of simple spectrometry and coincidence spectrometry to the study of the radiations of the following radioelements: {sup 76}As (26 h), {sup 122}Sb (2.8 d), {sup 124}Sb (60 d), {sup 125}Sb (2.7 years). (M.B.)

  7. Mean centering, multicollinearity, and moderators in multiple regression: The reconciliation redux.

    Science.gov (United States)

    Iacobucci, Dawn; Schneider, Matthew J; Popovich, Deidre L; Bakamitsos, Georgios A

    2017-02-01

    In this article, we attempt to clarify our statements regarding the effects of mean centering. In a multiple regression with predictors A, B, and A × B (where A × B serves as an interaction term), mean centering A and B prior to computing the product term can clarify the regression coefficients (which is good) and the overall model fit R² will remain undisturbed (which is also good).
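
    The point is easy to verify numerically: centering A and B before forming the product term changes the lower-order coefficients (and their interpretation) but leaves the overall R² unchanged, because the centered design spans the same column space. A small Python check on synthetic data:

```python
# Numerical check that mean-centering A and B before forming A*B changes the
# lower-order coefficients but leaves the overall R^2 unchanged.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
A = rng.normal(5, 2, n)
B = rng.normal(10, 3, n)
y = 1 + 0.5 * A + 0.3 * B + 0.2 * A * B + rng.normal(size=n)

X_raw = sm.add_constant(np.column_stack([A, B, A * B]))
Ac, Bc = A - A.mean(), B - B.mean()
X_cen = sm.add_constant(np.column_stack([Ac, Bc, Ac * Bc]))

fit_raw = sm.OLS(y, X_raw).fit()
fit_cen = sm.OLS(y, X_cen).fit()
print("R^2 raw vs centered:", fit_raw.rsquared, fit_cen.rsquared)  # identical
print("coefficients raw:     ", fit_raw.params)
print("coefficients centered:", fit_cen.params)                    # lower-order terms differ
```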

  8. Transfer coefficients in ultracold strongly coupled plasma

    Science.gov (United States)

    Bobrov, A. A.; Vorob'ev, V. S.; Zelener, B. V.

    2018-03-01

    We use both analytical and molecular dynamic methods for electron transfer coefficients in an ultracold plasma when its temperature is small and the coupling parameter characterizing the interaction of electrons and ions exceeds unity. For these conditions, we use the approach of nearest neighbor to determine the average electron (ion) diffusion coefficient and to calculate other electron transfer coefficients (viscosity and electrical and thermal conductivities). Molecular dynamics simulations produce electronic and ionic diffusion coefficients, confirming the reliability of these results. The results compare favorably with experimental and numerical data from earlier studies.

  9. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  10. Beta decay of 22O

    International Nuclear Information System (INIS)

    Hubert, F.; Dufour, J.P.; Moral, R. del; Fleury, A.; Jean, D.; Pravikoff, M.S.; Delagrange, H.; Geissel, H.; Schmidt, K.H.; Hanelt, E.

    1989-01-01

    The beta-gamma spectroscopic study of {sup 22}O is presented. This nucleus, produced as a projectile-like fragment from the interaction of a 60 MeV/n {sup 40}Ar beam with a Be target, has been separated by the LISE spectrometer. Several gamma rays from {sup 22}O decay have been observed, from which a half-life of (2.25±0.15) s has been determined. Accurate excitation energies have been deduced for several states in {sup 22}F. A partial beta decay scheme of {sup 22}O has been established. Experimental results have been compared with shell model calculations. (orig.)

  11. The Beta Transmuted Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Manisha Pal

    2014-06-01

    Full Text Available The paper introduces a beta transmuted Weibull distribution, which contains a number of distributions as special cases. The properties of the distribution are discussed and explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, and reliability. The distribution and moments of order statistics are also studied. Estimation of the model parameters by the method of maximum likelihood is discussed. The log beta transmuted Weibull model is introduced to analyze censored data. Finally, the usefulness of the new distribution in analyzing positive data is illustrated.

  12. Beta activity of enriched uranium

    International Nuclear Information System (INIS)

    Nambiar, P.P.V.J.; Ramachandran, V.

    1975-01-01

    Use of enriched uranium as reactor fuel necessitates its handling in various forms. For purposes of planning and organising radiation protection measures in enriched uranium handling facilities, it is necessary to have a basic knowledge of the radiation status of enriched uranium systems. The theoretical variations in beta activity and energy with U 235 enrichment are presented. Depletion is considered separately. Beta activity build up is also studied for two specific enrichments, in respect of which experimental values for specific alpha activity are available. (author)

  13. A {beta} - {gamma} coincidence; Metodo de coincidencias {beta} - {gamma}

    Energy Technology Data Exchange (ETDEWEB)

    Agullo, F

    1960-07-01

    A {beta} - {gamma} coincidence method for absolute counting is given. The fundamental principles are reviewed and the experimental part is detailed. The results from {sup 198}Au irradiated in the JEN 1 swimming pool reactor are given. The maximum accuracy is 1 per cent. (Author) 11 refs.

  14. Using support vector machine to predict beta- and gamma-turns in proteins.

    Science.gov (United States)

    Hu, Xiuzhen; Li, Qianzhong

    2008-09-01

    By using the composite vector with increment of diversity, position conservation scoring function, and predictive secondary structures to express the information of sequence, a support vector machine (SVM) algorithm for predicting beta- and gamma-turns in the proteins is proposed. The 426 and 320 nonhomologous protein chains described by Guruprasad and Rajkumar (Guruprasad and Rajkumar J. Biosci 2000, 25,143) are used for training and testing the predictive model of the beta- and gamma-turns, respectively. The overall prediction accuracy and the Matthews correlation coefficient in 7-fold cross-validation are 79.8% and 0.47, respectively, for the beta-turns. The overall prediction accuracy in 5-fold cross-validation is 61.0% for the gamma-turns. These results are significantly higher than the other algorithms in the prediction of beta- and gamma-turns using the same datasets. In addition, the 547 and 823 nonhomologous protein chains described by Fuchs and Alix (Fuchs and Alix Proteins: Struct Funct Bioinform 2005, 59, 828) are used for training and testing the predictive model of the beta- and gamma-turns, and better results are obtained. This algorithm may be helpful to improve the performance of protein turns' prediction. To ensure the ability of the SVM method to correctly classify beta-turn and non-beta-turn (gamma-turn and non-gamma-turn), the receiver operating characteristic threshold independent measure curves are provided. (c) 2008 Wiley Periodicals, Inc.
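
    To make the evaluation metric concrete, the following minimal Python sketch (not the authors' code; features and labels are random stand-ins for the sequence-profile, conservation and secondary-structure inputs described above) trains a support vector machine and reports accuracy together with the Matthews correlation coefficient under cross-validation.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import matthews_corrcoef, accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))                      # placeholder residue features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # turn / non-turn labels

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        y_pred = cross_val_predict(clf, X, y, cv=7)          # 7-fold cross-validation
        print("accuracy:", accuracy_score(y, y_pred))
        print("MCC     :", matthews_corrcoef(y, y_pred))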

  15. Rizikovost tržní pozice a její vliv na hodnotu beta koeficientu

    Directory of Open Access Journals (Sweden)

    Marek Zinecker

    2014-03-01

    Full Text Available Purpose of the article: This study tackles the question whether the betas of companies that are in a riskier market position are higher, lower or approximately the same compared to companies in a less risky position, and whether the values of beta of companies in riskier and in less risky positions are over 1, within the interval ⟨0; 1⟩, or negative. Methodology/methods: Secondary data from financial statements of selected companies operating in the Czech automotive industry in the period 2002–2010 are used. The corporate and market life cycle is identified according to the model by Reiners (2004) and the beta coefficient is calculated in an alternative way using accounting earnings. Scientific aim: The research should answer the question whether the betas of companies that are in a riskier market position are higher, lower or approximately the same compared to companies in a less risky position and whether the values of beta of companies in riskier and in less risky positions are over 1, within the interval ⟨0; 1⟩, or negative. Findings: The beta coefficient reaches values over 1 mostly for market drivers and values within the interval ⟨0; 1⟩ mostly for the two other positions. Among market pioneers, the beta of one company reaches an extreme value of –22.89, so the average value of beta for this position is much lower than for market drivers. According to the median value, the beta for market pioneers is lower than for market drivers, too. Only one company within the sample holds the position of market follower and its beta reaches the value 0.41. All companies are in market positions with a high level of risk in most periods. The positions with a low risk are held in at most 3 periods, which is typical especially for companies with a value of beta over 1. Conclusions: From these findings about betas it can be derived that the cost of equity, which is the expected return of owners, will be higher for market drivers than
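
    As an illustration of the alternative, accounting-based beta mentioned in the methodology (a sketch only; the series below are hypothetical and the paper's exact definition of earnings returns may differ), a beta coefficient can be estimated as the ratio of the covariance between firm and market earnings returns to the variance of the market series:

        import numpy as np

        market = np.array([0.05, -0.02, 0.08, 0.01, -0.04, 0.06, 0.03, -0.01])  # hypothetical market/industry earnings returns
        firm   = np.array([0.09, -0.05, 0.12, 0.00, -0.07, 0.10, 0.04, -0.03])  # hypothetical firm earnings returns

        beta = np.cov(firm, market, ddof=1)[0, 1] / np.var(market, ddof=1)
        alpha = firm.mean() - beta * market.mean()
        print(f"accounting beta = {beta:.2f}, intercept = {alpha:.3f}")

    A beta above 1 would place the firm in the more market-sensitive group discussed in the findings, while a value inside ⟨0; 1⟩ indicates below-market sensitivity.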

  16. N-Benzylhydroxylamine addition to beta-aryl enoates. Enantioselective synthesis of beta-aryl-beta-amino acid precursors

    Science.gov (United States)

    Sibi; Liu

    2000-10-19

    Chiral Lewis acid catalyzed N-benzylhydroxylamine addition to pyrrolidinone-derived enoates afforded beta-aryl-beta-amino acid derivatives in high enantiomeric purity with moderate to very good chemical efficiency.

  17. The Effect of Multicollinearity and the Violation of the Assumption of Normality on the Testing of Hypotheses in Regression Analysis.

    Science.gov (United States)

    Vasu, Ellen S.; Elmore, Patricia B.

    The effects of the violation of the assumption of normality coupled with the condition of multicollinearity upon the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A monte carlo approach was utilized in which three different distributions were sampled for two sample sizes over…

  18. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  19. Resummed coefficient function for the shape function

    OpenAIRE

    Aglietti, U.

    2001-01-01

    We present a leading evaluation of the resummed coefficient function for the shape function. It is also shown that the coefficient function is short-distance-dominated. Our results allow relating the shape function computed on the lattice to the physical QCD distributions.

  20. Problems with Discontinuous Diffusion/Dispersion Coefficients

    Directory of Open Access Journals (Sweden)

    Stefano Ferraris

    2012-01-01

    accurate on smooth solutions and based on a special numerical treatment of the diffusion/dispersion coefficients that makes its application possible also when such coefficients are discontinuous. Numerical experiments confirm the convergence of the numerical approximation and show a good behavior on a set of benchmark problems in two space dimensions.

  1. Meta-Analysis of Coefficient Alpha

    Science.gov (United States)

    Rodriguez, Michael C.; Maeda, Yukiko

    2006-01-01

    The meta-analysis of coefficient alpha across many studies is becoming more common in psychology by a methodology labeled reliability generalization. Existing reliability generalization studies have not used the sampling distribution of coefficient alpha for precision weighting and other common meta-analytic procedures. A framework is provided for…
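
    The core of precision weighting can be shown in a few lines: each study's alpha is weighted by the inverse of its sampling variance. The sketch below uses hypothetical reported alphas and variances rather than the sampling-distribution formula discussed in the article.

        import numpy as np

        alphas    = np.array([0.78, 0.85, 0.81, 0.72, 0.88])              # reported coefficient alphas (hypothetical)
        variances = np.array([0.0010, 0.0004, 0.0008, 0.0020, 0.0003])    # hypothetical sampling variances

        w = 1.0 / variances                                # precision weights
        pooled = np.sum(w * alphas) / np.sum(w)            # inverse-variance weighted mean
        se = np.sqrt(1.0 / np.sum(w))
        print(f"precision-weighted mean alpha = {pooled:.3f} (SE = {se:.3f})")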

  2. Alternatives to Pearson's and Spearman's Correlation Coefficients

    OpenAIRE

    Smarandache, Florentin

    2008-01-01

    This article presents several alternatives to Pearson's correlation coefficient and many examples. In the samples where the rank in a discrete variable counts more than the variable values, the mixtures that we propose of Pearson's and Spearman's correlation coefficients give better results.
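
    A small Python sketch of the comparison (the 50/50 mixture shown is only one of many possible combinations and is not necessarily the weighting proposed in the article): on a sample with one extreme value, Spearman's rho is driven by ranks while Pearson's r is pulled by the outlier, and a mixture sits between the two.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
        y = np.array([2, 1, 4, 3, 7, 8, 6, 40], dtype=float)   # last value is an outlier

        r_p, _ = pearsonr(x, y)
        r_s, _ = spearmanr(x, y)
        print(f"Pearson r = {r_p:.3f}, Spearman rho = {r_s:.3f}")
        print(f"50/50 mixture = {0.5 * r_p + 0.5 * r_s:.3f}")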

  3. Anomaly coefficients: Their calculation and congruences

    International Nuclear Information System (INIS)

    Braden, H.W.

    1988-01-01

    A new method for the calculation of anomaly coefficients is presented. For su(n) some explicit and general expressions are given for these. In particular, certain congruences are discovered and investigated among the leading anomaly coefficients. As an application of these congruences, the absence of global six-dimensional gauge anomalies is shown

  4. Prediction of friction coefficients for gases

    Science.gov (United States)

    Taylor, M. F.

    1969-01-01

    Empirical relations are used for correlating laminar and turbulent friction coefficients for gases, with large variations in the physical properties, flowing through smooth tubes. These relations have been used to correlate friction coefficients for hydrogen, helium, nitrogen, carbon dioxide and air.

  5. A gain-coefficient switched Alexandrite laser

    International Nuclear Information System (INIS)

    Lee, Chris J; Van der Slot, Peter J M; Boller, Klaus-J

    2013-01-01

    We report on a gain-coefficient switched Alexandrite laser. An electro-optic modulator is used to switch between high and low gain states by making use of the polarization dependent gain of Alexandrite. In gain-coefficient switched mode, the laser produces 85 ns pulses with a pulse energy of 240 mJ at a repetition rate of 5 Hz.

  6. Helioseismic Solar Cycle Changes and Splitting Coefficients

    Indian Academy of Sciences (India)

    tribpo

    Abstract. Using the GONG data for a period over four years, we have studied the variation of frequencies and splitting coefficients with solar cycle. Frequencies and even-order coefficients are found to change significantly with the rising phase of the solar cycle. We also find temporal variations in the rotation rate near the solar ...

  7. Implications of NGA for NEHRP site coefficients

    Science.gov (United States)

    Borcherdt, Roger D.

    2012-01-01

    Three proposals are provided to update tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures (7-10), by the American Society of Civil Engineers (2010) (ASCE/SEI 7-10), with site coefficients implied directly by NGA (Next Generation Attenuation) ground motion prediction equations (GMPEs). Proposals include a recommendation to use straight-line interpolation to infer site coefficients at intermediate values of v̄s (average shear velocity). Site coefficients are recommended to ensure consistency with ASCE/SEI 7-10 MCER (Maximum Considered Earthquake) seismic-design maps and simplified site-specific design spectra procedures requiring site classes with associated tabulated site coefficients and a reference site class with unity site coefficients. Recommended site coefficients are confirmed by independent observations of average site amplification coefficients inferred with respect to an average ground condition consistent with that used for the MCER maps. The NGA coefficients recommended for consideration are implied directly by the NGA GMPEs and do not require introduction of additional models.

  8. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application is described of an autoregression model as the simplest regression model of diagnostic signals in experimental analysis of diagnostic systems, in in-service monitoring of normal and anomalous conditions and their diagnostics. The method of diagnostics is described using a regression type diagnostic data base and regression spectral diagnostics. The diagnostics is described of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)

  9. PENGARUH ADOPSI PSAK NO.24 TERHADAP EARNINGS RESPONSE COEFFICIENT

    Directory of Open Access Journals (Sweden)

    Ilha Refyal

    2012-05-01

    Full Text Available This study aims to analyze the influence of the adoption of PSAK No. 24 (Revisi 2004) on the earnings response coefficient (ERC). The study focuses its discussion on the differences in ERC between the period before and the period after the adoption, the influence of changes in the post-employment benefits account (due to the revision) on the ERC, and the influence of the difference in time of adoption on the ERC. The study is divided into two tests, which are panel data regression testing and multiple cross-section regression testing. ERC in the period after the adoption of the PSAK 24 revision is greater than in the period before the adoption. Using manufacturing companies that adopted PSAK 24 during 2004 or 2005, the research finds that changes in the post-employment benefits liability have a significant positive effect on ERC. The companies that adopt the standard earlier (early adopters) have a greater ERC compared to the companies that adopt at the end of the mandatory time (late adopters). The study also supports previous research on factors affecting the ERC, which are capital structure and size. Keywords: Earnings Response Coefficient, Revision PSAK 24, Post-employment Benefits Liability, Adoption Timing.

  10. Electret dosemeter for beta radiation

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.V.E.; Mascarenhas, S.

    The response characteristics of an electret dosemeter for beta radiation are studied. Experiments were performed using different geometries and walls, and it was verified for which geometry the dosemeter sensitivity is greater. Sources of 90 Sr - 90 Y, 204 Tl and 85 Kr were used in the experiments. (I.C.R.) [pt

  11. Personnel monitoring for beta rays

    International Nuclear Information System (INIS)

    Piesch, E.; Johns, T.F.

    1983-01-01

    The practical considerations which have to be taken into account in the design of personnel monitors intended to measure doses resulting from exposure to beta rays are discussed. These include the measurement of doses in situations involving either fairly uniform or non-uniform irradiation and of doses to the male gonads. (UK)

  12. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  13. Beta Cell Workshop 2013 Kyoto

    DEFF Research Database (Denmark)

    Heller, R Scott; Madsen, Ole D; Nielsen, Jens Høiriis

    2013-01-01

    The very modern Kyoto International Conference Center provided the site for the 8th workshop on Beta cells on April 23-26, 2013. The preceding workshops were held in Boston, USA (1991); Kyoto, Japan (1994); Helsingør, Denmark (1997); Helsinki, Finland (2003); El Perello, Spain (2006); Peebles...

  14. Gini coefficient as a life table function

    Directory of Open Access Journals (Sweden)

    2003-06-01

    Full Text Available This paper presents a toolkit for measuring and analyzing inter-individual inequality in length of life by Gini coefficient. Gini coefficient and four other inequality measures are defined on the length-of-life distribution. Properties of these measures and their empirical testing on mortality data suggest a possibility for different judgements about the direction of changes in the degree of inequality by using different measures. A new computational procedure for the estimation of Gini coefficient from life tables is developed and tested on about four hundred real life tables. The estimates of Gini coefficient are precise enough even for abridged life tables with the final age group of 85+. New formulae have been developed for the decomposition of differences between Gini coefficients by age and cause of death. A new method for decomposition of age-components into effects of mortality and composition of population by group is developed. Temporal changes in the effects of elimination of causes of death on Gini coefficient are analyzed. Numerous empirical examples show: Lorenz curves for Sweden, Russia and Bangladesh in 1995, proportional changes in Gini coefficient and four other measures of inequality for the USA in 1950-1995 and for Russia in 1959-2000. Further shown are errors of estimates of Gini coefficient when computed from various types of mortality data of France, Japan, Sweden and the USA in 1900-95, decompositions of the USA-UK difference in life expectancies and Gini coefficients by age and cause of death in 1997. As well, effects of elimination of major causes of death in the UK in 1951-96 on Gini coefficient, age-specific effects of mortality and educational composition of the Russian population on changes in life expectancy and Gini coefficient between 1979 and 1989. Illustrated as well are variations in life expectancy and Gini coefficient across 32 countries in 1996-1999 and associated changes in life expectancy and Gini
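
    The basic computation behind such a toolkit can be sketched as follows (hypothetical death counts, not a published life table): the Gini coefficient of the length-of-life distribution is the mean absolute difference between ages at death divided by twice the mean age at death.

        import numpy as np

        ages   = np.array([0.5, 30.0, 50.0, 65.0, 75.0, 82.5, 90.0])    # hypothetical mid-interval ages at death
        deaths = np.array([2.0, 3.0, 6.0, 14.0, 25.0, 30.0, 20.0])      # hypothetical deaths d(x) per 100

        p = deaths / deaths.sum()                       # share of deaths in each age group
        mean_age = np.sum(p * ages)                     # mean age at death
        diff = np.abs(ages[:, None] - ages[None, :])    # pairwise age differences
        gini = (p[:, None] * p[None, :] * diff).sum() / (2.0 * mean_age)
        print(f"mean age at death = {mean_age:.1f}, Gini = {gini:.3f}")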

  15. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  16. Determination of the surface drag coefficient

    DEFF Research Database (Denmark)

    Mahrt, L.; Vickers, D.; Sun, J.L.

    2001-01-01

    This study examines the dependence of the surface drag coefficient on stability, wind speed, mesoscale modulation of the turbulent flux and method of calculation of the drag coefficient. Data sets over grassland, sparse grass, heather and two forest sites are analyzed. For significantly unstable conditions, the drag coefficient does not depend systematically on z/L but decreases with wind speed for fixed intervals of z/L, where L is the Obukhov length. Even though the drag coefficient for weak wind conditions is sensitive to the exact method of calculation and choice of averaging time, the decrease of the drag coefficient with wind speed occurs for all of the calculation methods. A classification of flux calculation methods is constructed, which unifies the most common previous approaches. The roughness length corresponding to the usual Monin-Obukhov stability functions decreases with increasing wind...

  17. Diffusion coefficients of paracetamol in aqueous solutions

    International Nuclear Information System (INIS)

    Ribeiro, Ana C.F.; Barros, Marisa C.F.; Veríssimo, Luís M.P.; Santos, Cecilia I.A.V.; Cabral, Ana M.T.D.P.V.; Gaspar, Gualter D.; Esteso, Miguel A.

    2012-01-01

    Highlights: ► Mutual diffusion coefficients of paracetamol in aqueous dilute solutions. ► Influence of the thermodynamic factors on the variation of their mutual diffusion coefficients. ► Estimation of the limiting mutual diffusion coefficients of the molecular, D{sub m}{sup 0}, and ionized, D{sub ±}{sup 0}, forms of this drug. - Abstract: Binary mutual diffusion coefficients measured by the Taylor dispersion method, for aqueous solutions of paracetamol (PA) at concentrations from (0.001 to 0.050) mol·dm{sup -3} at T = 298.15 K, are reported. From the Nernst–Hartley equation and our experimental results, the limiting diffusion coefficient of this drug and its thermodynamic factors are estimated, thereby contributing in this way to a better understanding of the structure of such systems and of their thermodynamic behaviour in aqueous solution at different concentrations.

  18. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of simple correlation coefficient. Although Pearson's r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications, because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
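
    The comparison of estimators can be reproduced in miniature by simulation. The sketch below (an illustration, not the article's study design) contrasts the bias and mean squared error of the sample r with a common nearly unbiased adjustment, r_adj = r[1 + (1 - r^2) / (2(n - 3))], one of several formulas discussed in this literature.

        import numpy as np

        rng = np.random.default_rng(1)
        rho, n, reps = 0.5, 20, 20000
        cov = np.array([[1.0, rho], [rho, 1.0]])

        r = np.empty(reps)
        for i in range(reps):
            xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            r[i] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

        r_adj = r * (1 + (1 - r**2) / (2 * (n - 3)))
        print("bias of r    :", r.mean() - rho, "  MSE:", np.mean((r - rho) ** 2))
        print("bias of r_adj:", r_adj.mean() - rho, "  MSE:", np.mean((r_adj - rho) ** 2))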

  19. Multivariate Regression Analysis and Slaughter Livestock,

    Science.gov (United States)

    AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS, ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY

  20. Beta calibration and dosimetry at IPEN

    International Nuclear Information System (INIS)

    Caldas, L.V.E.

    1983-01-01

    A commercial extrapolation chamber (PTW, Germany) was tested in different beta radiation fields and its properties investigated. Its usefullness for beta radiation calibration and dosimetry was demonstrated. (Author) [pt

  1. The Evaluation of the 0.07 and 3 mm Dose Equivalent with a Portable Beta Spectrometer

    Science.gov (United States)

    Hoshi, Katsuya; Yoshida, Tadayoshi; Tsujimura, Norio; Okada, Kazuhiko

    Beta spectra of various nuclide species were measured using a commercially available compact spectrometer. The shapes of the spectra obtained with the spectrometer were very similar to those of the theoretical spectra. The beta dose equivalent at any depth was obtained as the product of the measured pulse-height spectra and the appropriate conversion coefficients of ICRP Publication 74. The dose rates evaluated from the spectra were comparable with the reference dose rates of standard beta calibration sources. In addition, we were able to determine the dose equivalents with a relative error of indication of 10% without the need for complicated corrections.
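
    The dose evaluation step described above amounts to a weighted sum over energy bins. The sketch below uses made-up counts and placeholder conversion coefficients (not actual ICRP Publication 74 values) purely to show the structure of the calculation.

        import numpy as np

        energy_keV = np.array([100, 300, 500, 700, 1000, 1500])            # bin centres (hypothetical)
        counts     = np.array([120, 340, 410, 280, 150, 40], dtype=float)  # measured spectrum (hypothetical)
        h_007      = np.array([0.1, 1.0, 1.4, 1.5, 1.5, 1.4])              # placeholder conversion coefficients

        dose_007 = np.sum(counts * h_007)     # dose equivalent at 0.07 mm depth, arbitrary units in this toy example
        print(f"H'(0.07) ~ {dose_007:.1f} (arbitrary units)")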

  2. The evaluation of the 0.07 mm and 3 mm dose equivalent with a portable beta spectrometer

    International Nuclear Information System (INIS)

    Hoshi, Katsuya; Yoshida, Tadayoshi; Tsujimura, Norio; Okada, Kazuhiko

    2016-01-01

    Beta spectra of various nuclide species were measured using a commercially available compact spectrometer. The shapes of the spectra obtained with the spectrometer were very similar to those of the theoretical spectra. The beta dose equivalent at any depth was obtained as the product of the measured pulse-height spectra and the appropriate conversion coefficients of ICRP Publication 74. The dose rates evaluated from the spectra were comparable with the reference dose rates of standard beta calibration sources. In addition, we were able to determine the dose equivalents with a relative error of indication of 10% without the need for complicated corrections. (author)

  3. Experimental Investigation of Discharge Coefficient in Mesh Panel Bottom Intakes

    Directory of Open Access Journals (Sweden)

    keivan bina

    2012-04-01

    Full Text Available Bottom racks are a hydraulic structure placed in the bed of a stream through which part of the flow in the main channel is diverted. These structures have very wide application in industry, irrigation, drainage, etc. Although much attention has been paid to the study of such structures, the characteristics of flow through bottom racks are complex. The present study estimates the discharge coefficient of a new kind of bottom rack, including both transverse and longitudinal bars, named "mesh panel racks", without considering any solids in the fluid. This kind of bottom intake has advantages from a structural point of view and has less deformation under static and dynamic loads. A laboratory setup with three mesh panel intakes was built and the effects of various parameters such as rack slope, porosity and geometry were explored. A dimensional analysis using the Buckingham theorem showed the effective hydraulic and geometric factors that affect the discharge coefficient (Cd) of bottom racks. Then, a statistical approach to determine the discharge coefficient of a rack structure was developed with linear and nonlinear regression using SPSS software. The efficiency of the proposed technique is high enough that the associated error is limited to 10%. Finally, the hydraulic performance of mesh panel intakes was compared with a regular type of bottom intake consisting of longitudinal bars. For this purpose, the diverted discharge through both types of intakes was calculated in the same situation
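
    A minimal sketch of the regression step (made-up data; the study's actual functional form, predictors and fitted coefficients may differ) relates the discharge coefficient Cd to rack slope and porosity by ordinary least squares and reports the largest relative error of the fit.

        import numpy as np

        slope    = np.array([0.0, 0.1, 0.2, 0.0, 0.1, 0.2, 0.0, 0.2])              # hypothetical rack slopes
        porosity = np.array([0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.5])              # hypothetical porosities
        cd       = np.array([0.62, 0.58, 0.55, 0.66, 0.63, 0.60, 0.71, 0.65])      # hypothetical measured Cd

        X = np.column_stack([np.ones_like(slope), slope, porosity])
        coef, *_ = np.linalg.lstsq(X, cd, rcond=None)
        rel_err = np.abs(X @ coef - cd) / cd
        print("fitted coefficients (intercept, slope, porosity):", np.round(coef, 3))
        print("maximum relative error: %.1f%%" % (100 * rel_err.max()))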

  4. Cronbach's [Alpha], Revelle's [Beta], and McDonald's [Omega][sub H]: Their Relations with Each Other and Two Alternative Conceptualizations of Reliability

    Science.gov (United States)

    Zinbarg, Richard E.; Revelle, William; Yovel, Iftah; Li, Wen

    2005-01-01

    We make theoretical comparisons among five coefficients--Cronbach's [alpha], Revelle's [beta], McDonald's [omega][sub h], and two alternative conceptualizations of reliability. Though many end users and psychometricians alike may not distinguish among these five coefficients, we demonstrate formally their nonequivalence. Specifically, whereas…
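
    Of the five coefficients, Cronbach's alpha is the simplest to compute directly from an item-score matrix; Revelle's beta and McDonald's omega-h require additional structural assumptions and are not shown. The toy data below are hypothetical.

        import numpy as np

        scores = np.array([[3, 4, 3, 5],
                           [2, 2, 3, 3],
                           [4, 5, 4, 5],
                           [1, 2, 2, 2],
                           [3, 3, 4, 4]], dtype=float)       # respondents x items (hypothetical)

        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()          # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)           # variance of total scores
        alpha = (k / (k - 1)) * (1 - item_var / total_var)
        print(f"Cronbach's alpha = {alpha:.3f}")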

  5. Analysis of Satellite Drag Coefficient Based on Wavelet Transform

    Science.gov (United States)

    Liu, Wei; Wang, Ronglan; Liu, Siqing

    Abstract: A drag coefficient (Cd) sequence was obtained by solving 55 days of continuous Tiangong-1 GPS orbit data with different arc lengths. The solar flux index f10.7 and the geomagnetic indices Ap/ap for the same period were decomposed into high- and low-frequency components by multi-level wavelet decomposition. Statistical analysis of the layer-by-layer sliding correlation between the space environment parameters and the decomposed Cd showed that the wavelet-decomposed drag coefficient sequence has a good lag correlation with the corresponding levels of the f10.7 and Ap series. It also verified that prediction of Cd is feasible. Prediction residuals of Cd with different regression models and different sample lengths were analysed; the best case was obtained when a sample length of 20 days and the f10.7 regression model were used. The results also showed that the response of the NRLMSIS-00 model in the region of 350 km (Tiangong-1's altitude) and low-middle latitudes (Tiangong-1's inclination) is excessive in the ascent stage of geomagnetic activity Ap and inadequate during the falling segment. Additionally, for the low-frequency components the NRLMSIS-00 model's response is appropriate in the rising segment of f10.7. For the high-frequency components, the NRLMSIS-00 model's response is slightly inadequate during the ascent of f10.7 and the reverse during its decline. Finally, potential uses are summarized and an outlook is given; this method has an important reference value for improving the accuracy of spacecraft orbit prediction. Key words: wavelet transform; drag coefficient; lag correlation; Tiangong1; space environment

  6. Beta-blocker therapy and cardiac events among patients with newly diagnosed coronary heart disease

    DEFF Research Database (Denmark)

    Andersson, Charlotte; Shilane, David; Go, Alan S

    2014-01-01

    BACKGROUND: The effectiveness of beta-blockers for preventing cardiac events has been questioned for patients who have coronary heart disease (CHD) without a prior myocardial infarction (MI). OBJECTIVES: The purpose of this study was to assess the association of beta-blockers with outcomes among patients with new-onset CHD. METHODS: We studied consecutive patients discharged after the first CHD event (acute coronary syndrome or coronary revascularization) between 2000 and 2008 in an integrated healthcare delivery system who did not use beta-blockers in the year before entry. We used time-varying Cox regression models to determine the hazard ratio (HR) associated with beta-blocker treatment and used treatment-by-covariate interaction tests (pint) to determine whether the association differed for patients with or without a recent MI. RESULTS: A total of 26,793 patients were included, 19...

  7. On macroeconomic values investigation using fuzzy linear regression analysis

    Directory of Open Access Journals (Sweden)

    Richard Pospíšil

    2017-06-01

    Full Text Available The theoretical background for abstract formalization of the vague phenomenon of complex systems is the fuzzy set theory. In the paper, vague data is defined as specialized fuzzy sets - fuzzy numbers and there is described a fuzzy linear regression model as a fuzzy function with fuzzy numbers as vague parameters. To identify the fuzzy coefficients of the model, the genetic algorithm is used. The linear approximation of the vague function together with its possibility area is analytically and graphically expressed. A suitable application is performed in the tasks of the time series fuzzy regression analysis. The time-trend and seasonal cycles including their possibility areas are calculated and expressed. The examples are presented from the economy field, namely the time-development of unemployment, agricultural production and construction respectively between 2009 and 2011 in the Czech Republic. The results are shown in the form of the fuzzy regression models of variables of time series. For the period 2009-2011, the analysis assumptions about seasonal behaviour of variables and the relationship between them were confirmed; in 2010, the system behaved fuzzier and the relationships between the variables were vaguer, that has a lot of causes, from the different elasticity of demand, through state interventions to globalization and transnational impacts.

  8. On Fuzzy {beta}-I-open sets and Fuzzy {beta}-I-continuous functions

    Energy Technology Data Exchange (ETDEWEB)

    Keskin, Aynur [Department of Mathematics, Faculty of Science and Arts, Selcuk University, Campus, 42075 Konya (Turkey)], E-mail: akeskin@selcuk.edu.tr

    2009-11-15

    In this paper, first of all we obtain some properties and characterizations of fuzzy {beta}-I-open sets. After that, we also define the notion of {beta}-I-closed sets and obtain some properties. Lastly, we introduce the notions of fuzzy {beta}-I-continuity with the help of fuzzy {beta}-I-open sets to obtain decomposition of fuzzy continuity.

  9. Beta-Catenin Stability in Breast Cancer

    National Research Council Canada - National Science Library

    Baswaran, Vijay

    1999-01-01

    .... beta-catenin also binds the adenomatous polyposis coli protein (APC). The tumor suppressor function of APC is suggested to depend in part on its ability to bind beta-catenin and to facilitate beta-catenin degradation by an unknown mechanism...

  10. Beta-lactamases in Enterobacteriaceae in broilers

    NARCIS (Netherlands)

    Dierikx, C.M.

    2013-01-01

    Resistance to cephalosprins due to the production of extended spectrum beta-lactamases (ESBLs) or plasmid mediated AmpC beta-lactamases is increasingly found in infections in humans outside the hospital. The genes encoding for these beta-lactamases are located on mobile DNA (plasmids), which can be

  11. Application of support vector regression (SVR) for stream flow prediction on the Amazon basin

    CSIR Research Space (South Africa)

    Du Toit, Melise

    2016-10-01

    Full Text Available A support vector regression technique is used in this study to analyse historical stream flow occurrences and predict stream flow values for the Amazon basin. Up to twelve month predictions are made and the coefficient of determination and root-mean-square error are used...
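
    A minimal sketch of the approach (a synthetic seasonal series rather than Amazon flow data; the study's predictors and tuning surely differ) fits a support vector regression on lagged flow values and scores it with the coefficient of determination and root-mean-square error.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(2)
        t = np.arange(240)                                                    # 20 years of monthly values
        flow = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=t.size)

        lags = 12
        X = np.column_stack([flow[i:i - lags] for i in range(lags)])          # 12 lagged inputs per sample
        y = flow[lags:]
        X_train, X_test, y_train, y_test = X[:-24], X[-24:], y[:-24], y[-24:]

        model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X_train, y_train)
        pred = model.predict(X_test)
        print("R^2 :", r2_score(y_test, pred))
        print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)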

  12. Analysis of interactive fixed effects dynamic linear panel regression with measurement error

    OpenAIRE

    Nayoung Lee; Hyungsik Roger Moon; Martin Weidner

    2011-01-01

    This paper studies a simple dynamic panel linear regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.

  13. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    International Nuclear Information System (INIS)

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to compute γ spectra of uranium ore samples and to calculate contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining response coefficients
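
    The idea can be sketched as an ordinary least-squares problem: if each window of the measured gamma spectrum responds linearly to the U, Ra, Th and K contents, the contents are recovered by regressing the measured spectrum on per-unit-content response spectra. All numbers below are made up for illustration.

        import numpy as np

        # response of each spectral window per unit content (columns: U, Ra, Th, K) - hypothetical
        R = np.array([[5.0, 1.0, 0.5, 0.2],
                      [1.0, 6.0, 1.0, 0.3],
                      [0.5, 1.0, 4.0, 0.4],
                      [0.2, 0.3, 0.5, 3.0],
                      [0.1, 0.2, 0.3, 1.0]])
        true_content = np.array([2.0, 1.5, 3.0, 0.8])
        measured = R @ true_content + np.random.default_rng(3).normal(scale=0.05, size=5)

        contents, *_ = np.linalg.lstsq(R, measured, rcond=None)   # least-squares (regression) solution
        print("estimated U, Ra, Th, K contents:", np.round(contents, 2))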

  14. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT, [No Value; RAMASWAMY, [No Value

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  15. Constrained statistical inference : sample-size tables for ANOVA and regression

    NARCIS (Netherlands)

    Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and

  16. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is a very important parameter for environmental impact assessment of the disposal of radioactive waste arising from research institutes. A literature survey within Japan was carried out mainly for the purpose of selecting reasonable distribution coefficient values for use in the safety evaluation. In this report, the information on the distribution coefficient extracted from each literature source was arranged for input into the database and summarized as literature information data on the distribution coefficient. (author)

  17. Predicting beta-turns and their types using predicted backbone dihedral angles and secondary structures.

    Science.gov (United States)

    Kountouris, Petros; Hirst, Jonathan D

    2010-07-31

    Beta-turns are secondary structure elements usually classified as coil. Their prediction is important, because of their role in protein folding and their frequent occurrence in protein chains. We have developed a novel method that predicts beta-turns and their types using information from multiple sequence alignments, predicted secondary structures and, for the first time, predicted dihedral angles. Our method uses support vector machines, a supervised classification technique, and is trained and tested on three established datasets of 426, 547 and 823 protein chains. We achieve a Matthews correlation coefficient of up to 0.49, when predicting the location of beta-turns, the highest reported value to date. Moreover, the additional dihedral information improves the prediction of beta-turn types I, II, IV, VIII and "non-specific", achieving correlation coefficients up to 0.39, 0.33, 0.27, 0.14 and 0.38, respectively. Our results are more accurate than other methods. We have created an accurate predictor of beta-turns and their types. Our method, called DEBT, is available online at http://comp.chem.nottingham.ac.uk/debt/.

  18. Measurement of the beta-neutrino correlation in laser trapped 21Na

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, Nicholas David [Univ. of California, Berkeley, CA (United States)

    2003-01-01

    Trapped radioactive atoms are an appealing source for precise measurements of the beta-neutrino correlation coefficient, a, since the momentum of the neutrino can be inferred from the detection of the unperturbed low-energy recoil daughter nucleus. Sodium-21 is produced on-line at the 88'' cyclotron at Lawrence Berkeley National Laboratory, and 8×10{sup 5} atoms have been maintained in a magneto-optical trap. A static electric field draws daughter Neon-21 ions to a microchannel plate detector and betas are detected in coincidence with a plastic scintillator beta detector. The Neon-21 time-of-flight distribution determines the beta neutrino correlation coefficient, a. The resulting charge-state distribution is compared to a simple model based on the sudden approximation which suggests a small but important contribution from nuclear recoil-induced ionization. A larger than expected fraction of the daughters are detected in positive charge-states, but no dependence on either the beta or recoil nucleus energy was observed. We find a = 0.5243 plus or minus 0.0092, which is in 3.6 sigma disagreement with the Standard Model prediction of a = 0.559 plus or minus 0.003. Aside from a deviation from the Standard Model, a possible explanation for the discrepancy is that the branching ratio to the first excited state is in error.

  19. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

    Science.gov (United States)

    This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variable, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

  20. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  1. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  2. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA's Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  3. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  4. Stepwise versus Hierarchical Regression: Pros and Cons

    Science.gov (United States)

    Lewis, Mitzi

    2007-01-01

    Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

  5. Suppression Situations in Multiple Linear Regression

    Science.gov (United States)

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  6. Gibrat’s law and quantile regressions

    DEFF Research Database (Denmark)

    Distante, Roberta; Petrella, Ivan; Santoro, Emiliano

    2017-01-01

    The nexus between firm growth, size and age in U.S. manufacturing is examined through the lens of quantile regression models. This methodology allows us to overcome serious shortcomings entailed by linear regression models employed by much of the existing literature, unveiling a number of important...

  7. Regression Analysis and the Sociological Imagination

    Science.gov (United States)

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  8. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  9. Principles of Quantile Regression and an Application

    Science.gov (United States)

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

  10. ON REGRESSION REPRESENTATIONS OF STOCHASTIC-PROCESSES

    NARCIS (Netherlands)

    RUSCHENDORF, L; DEVALK, [No Value

    We construct a.s. nonlinear regression representations of general stochastic processes (X{sub n}){sub n∈N}. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

  11. Isoproterenol reduces ischemia-reperfusion lung injury despite beta-blockade.

    Science.gov (United States)

    Takashima, Seiki; Schlidt, Scott A; Koukoulis, Giovanna; Sevala, Mayura; Egan, Thomas M

    2005-06-01

    If lungs could be retrieved from non-heart-beating donors (NHBDs), the shortage of lungs for transplantation could be alleviated. The use of lungs from NHBDs is associated with a mandatory warm ischemic interval, which results in ischemia-reperfusion injury upon reperfusion. In an earlier study, rat lungs retrieved 2-h postmortem from NHBDs had reduced capillary leak measured by filtration coefficient (Kfc) when reperfused with isoproterenol (iso), associated with an increase in lung tissue levels of cyclic AMP (cAMP). The objective was to determine if this decrease in Kfc was because of beta-stimulation, or would persist despite beta-blockade. Donor rats were treated intraperitoneally with beta-blockade (propranolol or pindolol) or carrier, sacrificed, and lungs were retrieved immediately or 2 h postmortem. The lungs were reperfused with or without iso and the beta-blockers in the reperfusate. Outcome measures were Kfc, wet:dry weight ratio (W/D), lung levels of adenine nucleotides and cAMP. Lungs retrieved immediately after death had normal Kfc and W/D. After 2 h of ischemia, Kfc and W/D were markedly elevated in controls (no drug) and lungs reperfused with beta-blockers alone. Isoproterenol-reperfusion decreased Kfc and W/D significantly (P < 0.01) even in the presence of beta-blockade. Lung cAMP levels were increased only with iso in the absence of beta-blockade. The attenuation of ischemia-reperfusion injury because of iso occurs even in the presence of beta-blockade, and may not be a result of beta-stimulated increased cAMP.

  12. Variation in aerodynamic coefficients with altitude

    Directory of Open Access Journals (Sweden)

    Faiza Shahid

    Full Text Available Precise aerodynamic performance prediction plays a key role in a flying vehicle completing its mission within the desired accuracy. Aerodynamic coefficients for the same Mach number can be different at different altitudes due to the difference in Reynolds number. Prediction of these aerodynamic coefficients can be made through experiments, analytical solutions or Computational Fluid Dynamics (CFD). Advancements in computational power have generated the concept of using CFD as a virtual wind tunnel (WT); hence aerodynamic performance prediction in the present study is based upon CFD (a numerical test rig). Simulations at different altitudes for a range of Mach numbers with zero angle of attack are performed to predict the axial force coefficient behavior with altitude (Reynolds number). Similar simulations for a fixed Mach number '3' and a range of angles of attack are also carried out to envisage the variation in normal force and pitching moment coefficients with altitude (Reynolds number). Results clearly depict that the axial force coefficient is a function of altitude (Reynolds number) and increases as altitude increases, especially in the subsonic region. The variation in axial force coefficient with altitude (Reynolds number) increases slightly for larger angles of attack. Normal force and pitching moment coefficients do not depend on altitude (Reynolds number) at smaller angles of attack but show a slight decrease as altitude increases. The present study suggests that the variation of normal force and pitching moment coefficients with altitude can be neglected, but the variation of the axial force coefficient with altitude should be considered for vehicles flying in a dense atmosphere. It is recommended to extend this study to more complex configurations for various Mach numbers with sideslip and real gas effects. Keywords: Mach number, Reynolds number, Blunt body, Altitude effect, Angle of attacks

  13. Variation in aerodynamic coefficients with altitude

    Science.gov (United States)

    Shahid, Faiza; Hussain, Mukkarum; Baig, Mirza Mehmood; Haq, Ihtram ul

    Precise aerodynamic performance prediction plays a key role in a flying vehicle completing its mission within the desired accuracy. Aerodynamic coefficients for the same Mach number can be different at different altitudes due to the difference in Reynolds number. Prediction of these aerodynamic coefficients can be made through experiments, analytical solutions or Computational Fluid Dynamics (CFD). Advancements in computational power have generated the concept of using CFD as a virtual wind tunnel (WT); hence aerodynamic performance prediction in the present study is based upon CFD (a numerical test rig). Simulations at different altitudes for a range of Mach numbers with zero angle of attack are performed to predict the axial force coefficient behavior with altitude (Reynolds number). Similar simulations for a fixed Mach number '3' and a range of angles of attack are also carried out to envisage the variation in normal force and pitching moment coefficients with altitude (Reynolds number). Results clearly depict that the axial force coefficient is a function of altitude (Reynolds number) and increases as altitude increases, especially in the subsonic region. The variation in axial force coefficient with altitude (Reynolds number) increases slightly for larger angles of attack. Normal force and pitching moment coefficients do not depend on altitude (Reynolds number) at smaller angles of attack but show a slight decrease as altitude increases. The present study suggests that the variation of normal force and pitching moment coefficients with altitude can be neglected, but the variation of the axial force coefficient with altitude should be considered for vehicles flying in a dense atmosphere. It is recommended to extend this study to more complex configurations for various Mach numbers with sideslip and real gas effects.

  14. Designing Predictive Models for Beta-Lactam Allergy Using the Drug Allergy and Hypersensitivity Database.

    Science.gov (United States)

    Chiriac, Anca Mirela; Wang, Youna; Schrijvers, Rik; Bousquet, Philippe Jean; Mura, Thibault; Molinari, Nicolas; Demoly, Pascal

    Beta-lactam antibiotics represent the main cause of allergic reactions to drugs, inducing both immediate and nonimmediate allergies. The diagnosis is well established, usually based on skin tests and drug provocation tests, but cumbersome. To design predictive models for the diagnosis of beta-lactam allergy, based on the clinical history of patients with suspicions of allergic reactions to beta-lactams. The study included a retrospective phase, in which records of patients explored for a suspicion of beta-lactam allergy (in the Allergy Unit of the University Hospital of Montpellier between September 1996 and September 2012) were used to construct predictive models based on a logistic regression and decision tree method; a prospective phase, in which we performed an external validation of the chosen models in patients with suspicion of beta-lactam allergy recruited from 3 allergy centers (Montpellier, Nîmes, Narbonne) between March and November 2013. Data related to clinical history and allergy evaluation results were retrieved and analyzed. The retrospective and prospective phases included 1991 and 200 patients, respectively, with a different prevalence of confirmed beta-lactam allergy (23.6% vs 31%, P = .02). For the logistic regression method, performances of the models were similar in both samples: sensitivity was 51% (vs 60%), specificity 75% (vs 80%), positive predictive value 40% (vs 57%), and negative predictive value 83% (vs 82%). The decision tree method reached a sensitivity of 29.5% (vs 43.5%), specificity of 96.4% (vs 94.9%), positive predictive value of 71.6% (vs 79.4%), and negative predictive value of 81.6% (vs 81.3%). Two different independent methods using clinical history predictors were unable to accurately predict beta-lactam allergy and replace a conventional allergy evaluation for suspected beta-lactam allergy. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
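
    The logistic-regression arm of such a model can be sketched as follows (synthetic data, not the Montpellier cohort; predictor names and effect sizes are invented), with the same four summary measures reported in the abstract.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(4)
        n = 2000
        X = rng.integers(0, 2, size=(n, 4)).astype(float)            # binary clinical-history predictors (hypothetical)
        logit = -2.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # "confirmed allergy" outcome

        pred = LogisticRegression().fit(X, y).predict(X)
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        print("sensitivity:", tp / (tp + fn), " specificity:", tn / (tn + fp))
        print("PPV        :", tp / (tp + fp), " NPV        :", tn / (tn + fn))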

  15. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
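
    The essence of the Wiener-filter regression can be illustrated with a single witness channel and a least-squares FIR filter (a toy sketch, far simpler than the multi-channel, time-frequency pipeline described above): the filter predicts the linearly coupled noise in the target channel, which is then subtracted.

        import numpy as np

        rng = np.random.default_rng(5)
        n, taps = 5000, 16
        witness = rng.normal(size=n)                           # environmental (PEM-like) channel
        coupling = np.array([0.5, 0.3, -0.2, 0.1])             # unknown coupling to the target channel
        target = np.convolve(witness, coupling, mode="full")[:n] + 0.3 * rng.normal(size=n)

        # lagged design matrix of the witness channel; least squares gives the FIR "Wiener" filter
        X = np.column_stack([np.r_[np.zeros(k), witness[:n - k]] for k in range(taps)])
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        residual = target - X @ w                              # noise-subtracted channel
        print("variance before:", round(target.var(), 3), " after:", round(residual.var(), 3))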

  16. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  17. Should metacognition be measured by logistic regression?

    Science.gov (United States)

    Rausch, Manuel; Zehetleitner, Michael

    2017-03-01

    Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Development and applications of beta and near beta titanium alloys

    International Nuclear Information System (INIS)

    Takemura, A.; Ohyama, H.; Nishimura, T.; Abumiya, T.

    1993-01-01

    In this report the authors introduce applications of beta and near-beta titanium alloys, as well as the development and processing of these alloys at Kobe Steel Ltd. Ti-15Mo-5Zr-3Al is an alloy developed by Kobe Steel which has been applied to a variety of sporting goods and is also used as an erosion shield for steam turbine blades. Ti-15Mo-5Zr-3Al high strength wire for valve springs is under development. New beta alloys (Ti-V-Nb-Sn-Al) are under development which have a lower flow stress at room temperature than Ti-15V-3Cr-3Sn-3Al and are expected to improve the productivity of cold forging. NNS forging and thermomechanical treatment of Ti-10V-2Fe-3Al were studied. Ti-10V-2Fe-3Al steam turbine blades and structural parts for aircraft were developed. Fine grain cold strips of Ti-15V-3Cr-3Sn-3Al are produced by an annealing and pickling process. These cold strips are used for parts of a fishing rod

  19. Heat transfer coefficient for boiling carbon dioxide

    DEFF Research Database (Denmark)

    Knudsen, Hans Jørgen Høgaard; Jensen, Per Henrik

    1998-01-01

    The heat transfer coefficient and pressure drop for boiling carbon dioxide (R744) flowing in a horizontal pipe have been measured. The calculated heat transfer coefficient has been compared with the Chart correlation of Shah. The Chart correlation predicts too low a heat transfer coefficient, but the ratio between the measured and the calculated heat transfer coefficient is nearly constant and equal to 1.9. With this factor the correlation predicts the measured data within 14% (RMS). The pressure drop is of the same order as the measuring uncertainty, and the pressure drop has therefore not been compared with correlations.

  20. Virial Coefficients for the Liquid Argon

    Science.gov (United States)

    Korth, Micheal; Kim, Saesun

    2014-03-01

    We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that lead to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model from Aziz. From values of the pair correlation function at various values of density, we were able to find virial coefficients of liquid argon. The low order coefficients are in good agreement with theoretical hard sphere coefficients, but appropriate data for argon to which these results might be compared is difficult to find.